I am making a minimap using a render texture to render my overhead camera to the screen, and I'm drawing that texture with NGUI 3.9. Everything is fine apart from one thing: when I click on my minimap and try to destroy an object, or to get the corresponding point on my game floor (which is in world space, rendered by the main camera), the raycast somehow never reaches it, and I don't know what I am doing wrong.
Edit:
I am using NGUI because the whole GUI of my game is in NGUI, so that is a requirement.
I have changed my code to a new approach; with this I am able to cast a ray onto my floor, but the position of the ray is not accurate.
Here is my code:
public void TestWithTextureCoords()
{
    Camera main = UICamera.mainCamera;
    Camera mapCam = transform.GetComponent<Camera>();

    // First ray: from the main (UI) camera through the mouse, aimed at the minimap widget.
    Ray firstRay = main.ScreenPointToRay(Input.mousePosition);
    Debug.DrawLine(firstRay.origin, firstRay.direction);

    RaycastHit textHit;
    if (Physics.Raycast(firstRay, out textHit, main.farClipPlane))
    {
        var hitPoint = textHit.point;

        // Second ray: cast from the minimap camera, using the first hit point
        // as if it were a viewport coordinate.
        Ray secondRay = mapCam.ViewportPointToRay(hitPoint);
        RaycastHit worldHit;
        if (Physics.Raycast(secondRay, out worldHit, mapCam.farClipPlane))
        {
            Debug.DrawLine(secondRay.origin, worldHit.point, Color.red);
            Debug.Log(" world hit point " + worldHit.point);
            Debug.Log(worldHit.transform.name, worldHit.transform);
        }
    }
}
Please look into it as soon as possible and let me know a workaround, or how this is usually done. Maybe I am missing some small thing I have no idea about. I have tried different kinds of approaches, but all in vain. Help!
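For reference, ViewportPointToRay expects a normalized 0-1 viewport coordinate, so passing it the world-space hit point is most likely what throws the second ray off. Below is a minimal sketch of the texture-coordinate idea the method name hints at; it assumes the minimap is drawn on a quad with a MeshCollider whose UVs span the whole texture (otherwise RaycastHit.textureCoord returns zero), so treat it as an illustration rather than a drop-in fix.

public void TestWithTextureCoords()
{
    Camera main = UICamera.mainCamera;
    Camera mapCam = GetComponent<Camera>();

    // First ray: find where the click lands on the minimap quad.
    Ray firstRay = main.ScreenPointToRay(Input.mousePosition);
    RaycastHit textHit;
    if (Physics.Raycast(firstRay, out textHit, main.farClipPlane))
    {
        // 0-1 UV of the click on the quad; with full-texture UVs this doubles
        // as a viewport coordinate for the minimap camera.
        Vector2 uv = textHit.textureCoord;

        // Second ray: from the minimap camera through that viewport point into the world.
        Ray secondRay = mapCam.ViewportPointToRay(new Vector3(uv.x, uv.y, 0f));
        RaycastHit worldHit;
        if (Physics.Raycast(secondRay, out worldHit, mapCam.farClipPlane))
        {
            Debug.Log("world hit point " + worldHit.point, worldHit.transform);
        }
    }
}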
Related
I am building a 3D top-down shooter. The player controls the avatar with the keyboard and the reticule with the mouse.
I found a simple way to implement the reticule based on this article:
https://gamedevbeginner.com/how-to-convert-the-mouse-position-to-world-space-in-unity-2d-3d/
I defined an object which represents the reticule and attached this script:
public class Reticule : MonoBehaviour
{
    Camera mainCamera;
    Plane plane;
    float distance;
    Ray ray;

    // Start is called before the first frame update
    void Start()
    {
        mainCamera = Camera.main;
        plane = new Plane(Vector3.up, 0);

        // This would be turned off in the game. I set it to visible here so I can see the cursor.
        Cursor.visible = true;
    }

    // Update is called once per frame
    void Update()
    {
        ray = mainCamera.ScreenPointToRay(Input.mousePosition);
        if (plane.Raycast(ray, out distance))
        {
            transform.position = ray.GetPoint(distance);
        }
    }
}
This works, but the issue is that the reticule lags behind the mouse cursor and only catches up when I stop moving the mouse.
Is this because this method is just slow? Is there another simple way to achieve the same result?
the issue is that the reticule is lagging behind the mouse cursor, and catches up when I stop moving the mouse
That's normal: your mouse cursor is drawn by the graphics driver (with the help of WDM) as fast as the mouse data comes over the wire, while your game only renders at a fixed frame rate (or slower). Your hardware mouse will always be ahead of where your game draws it or anything related to it.
Some things that can help with working around this:
Don't show the system cursor, instead show your own. That way the cursor you show will always be in the same place your game thinks it is (since it drew it) and you won't have this issue. Careful with this however, because if your game's frame rate starts dipping it will be VERY noticeable when your cursor movement isn't smooth anymore.
Don't tie objects to your cursor. The issue doesn't show with normal interactions, like clicking buttons. You will notice this in RTS games when drawing boxes around units, but I struggle to think of another example of this.
Like above, but less restrictive, you could lerp the objects tied to your cursor in place, so they're always and intentionally behind it. It makes it look more gamey, which isn't that bad for a, you know, game. Though I wouldn't do this for a twitchy shooter, of course.
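For the last option, here is a rough sketch of what that could look like in the Reticule script from the question; the 10f follow speed is just an illustrative value to tune.

void Update()
{
    ray = mainCamera.ScreenPointToRay(Input.mousePosition);
    if (plane.Raycast(ray, out distance))
    {
        Vector3 target = ray.GetPoint(distance);

        // Ease toward the cursor's point on the plane instead of snapping to it,
        // so the reticule trails the hardware cursor on purpose.
        transform.position = Vector3.Lerp(transform.position, target, 10f * Time.deltaTime);
    }
}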
Here is an image of what I'm trying to achieve
As you can tell it's a split screen game, the player is on the left, computer on the right.
There are 3 cameras in the game: the main camera and two player cameras.
The player camera MUST be independent of the player and CANNOT be a child of the player object, because the ball bounces and rotates while moving and the cameras must not.
When the balls change direction, the camera must remain behind the player so that visually the landscape appears to rotate with the player.
I've searched high and low for anything to put me on the right path but nothing seems to work right.
It should be a smooth transition, but Lerp and Slerp are too slow for instant movement. I know LateUpdate will help with this.
If anyone can point me in the right direction I'd appreciate it.
Many thanks,
Paul
Have a script which takes in an object's position (in this case the player's ball), so you can code the camera as if it were a child of the object.
A simple example of a following camera would be something like this:
FollowObject.cs
using UnityEngine;

public class FollowObject : MonoBehaviour
{
    public Transform exampleObject;
    private int offset = 5; // How far back the camera will be

    void LateUpdate()
    {
        transform.position = new Vector3(exampleObject.position.x,
                                         exampleObject.position.y,
                                         exampleObject.position.z - offset);
    }
}
I am trying to learn Unity, made my own first game, and got stuck right at the beginning. The first idea was to drop a box (cube) at the mouse position. There are many videos and posts about getting the mouse position, and I tried to use them. My problem is that the mouse position I get is at the camera's position instead of on the plane.
As you can see, it kind of works, but the cube doesn't fall onto the plane.
https://prnt.sc/lmrmcl
My code:
void Update()
{
    Wall();
}

void Wall()
{
    if (Input.GetMouseButtonDown(0))
    {
        wall = GameObject.CreatePrimitive(PrimitiveType.Cube);
        Rigidbody wallsRigidbody = wall.AddComponent<Rigidbody>();
        wall.transform.localScale = new Vector3(0.6f, 0.6f, 0.6f);
        wallsRigidbody.mass = 1f;
        wallsRigidbody.angularDrag = 0.05f;
        wallsRigidbody.useGravity = true;
        wallsRigidbody.constraints = RigidbodyConstraints.FreezeRotation;

        // This is where the cube gets its position from the mouse.
        wall.transform.position = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        Debug.Log(Camera.main.ScreenToWorldPoint(Input.mousePosition));

        BoxCollider wallsCollider = wall.AddComponent<BoxCollider>();
        wallsCollider.size = new Vector3(1f, 1f, 1f);
    }
}
How should I change my code to get the right position?
This isn't a direct answer to your question, but I'm hoping it'll still get you where you need to go.
Prefabs are your friends! I'd highly recommend leveraging them here instead of constructing a cube directly in code.
But first, make sure everything else is set up right. Go ahead and construct a cube by hand in the Editor and make sure that when you hit Play, it falls as you expect. It should, provided it has a Rigidbody, collider, and you have gravity enabled (true by default).
If that works, drag that cube from your Hierarchy view into a folder in the Project view. This creates a prefab. You can now delete the cube from the Hierarchy view.
Update your script to have a public GameObject field, e.g.
public GameObject cubeToCreate;
Then, in the Inspector pane for whatever gameobject has that script attached, you should get a new field, "Cube To Create". Drag your prefab cube into that slot.
Lastly...update your code to wall = Instantiate(cubeToCreate). You'll still need to update the position, but you should be able to drop the rest of the initialization logic you have above, e.g. setting mass and drag.
As for the actual problem, the first thing that concerns me is how you plan to turn a 2D mouse click into a 3D point. For the axis going "into" the screen, how should the game determine the value?
Camera.main.ScreenToWorldPoint accepts a Vector3, but you're passing it a Vector2 (Input.mousePosition, which gets converted to a Vector3 with z = 0), so the point is 0 units from the camera, i.e. in a plane that intersects the camera itself.
I haven't done this myself, but I think you'll need raycasting of some sort. You could create an invisible 2D plane with a collider on it, cast a physics ray from the camera, and wherever it hits that plane is the point where you want to create your cube. This post has a couple of hints, though it's geared toward 2D. That might all be overkill, too: if you create a new Vector3 from your mouse position, you can set the z coordinate to whatever you want, but then your cube will be created at a fixed distance from the camera, which is not the best idea.
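Putting the two halves together, here is a minimal sketch of what that could look like, assuming the cubeToCreate prefab described above and flat ground at y = 0; it uses a mathematical Plane instead of an invisible collider, and the class name is just illustrative.

using UnityEngine;

public class WallSpawner : MonoBehaviour
{
    public GameObject cubeToCreate; // prefab with Rigidbody and BoxCollider already set up

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the camera through the mouse cursor and intersect it
            // with a horizontal plane at y = 0 (the ground).
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            Plane ground = new Plane(Vector3.up, Vector3.zero);

            float distance;
            if (ground.Raycast(ray, out distance))
            {
                Vector3 spawnPoint = ray.GetPoint(distance);

                // Spawn slightly above the ground so the cube drops onto the plane.
                Instantiate(cubeToCreate, spawnPoint + Vector3.up * 0.5f, Quaternion.identity);
            }
        }
    }
}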
Hope this helps.
I created a raycast that shoots from an object (not the main camera), and I want it to hit UI elements. Example below.
Example
The UI is outlined in orange, the big grey plane is a test object, the red head is placed at the user's head, and the green line is the DrawRay of the raycast attached to an eye. I am trying to make it so that the user looks at a UI button and the raycast can hit that button.
I've been playing around with a graphic raycast, but nothing seems to work: the grey plane is still being hit by the raycast. I tried playing with OnPointerEnter, but the problem is that my eye raycast is not a mouse/finger touch; it is a second pointer.
Any ideas how to make this work with IsPointerOverGameObject, a GraphicRaycaster, or any other method? Or how to create a second pointer that acts as the eye raycast?
Current code below.
private void FixedUpdate()
{
    GraphicRaycaster gr = this.GetComponent<GraphicRaycaster>();
    PointerEventData pointerData = new PointerEventData(EventSystem.current);
    pointerData.position = Input.mousePosition;

    List<RaycastResult> results = new List<RaycastResult>();
    gr.Raycast(pointerData, results);
    //EventSystem.current.RaycastAll(pointerData, results);

    if (results.Count > 0)
    {
        if (results[0].gameObject.tag == "Test tag")
        {
            Debug.Log("test");
        }
    }

    //----- Failed Graphic raycast experiments above; working RaycastHit below --------

    RaycastHit hit;
    Debug.DrawRay(transform.position, transform.forward * -10000f, Color.green);
    if (Physics.Raycast(transform.position, transform.forward * -1, out hit))
    {
        Debug.Log(hit.transform.name);
    }
}
This is actually much more complicated if you want proper interaction with canvas elements.
I suggest you do not try to roll your own solution, but look at some already-made ones. Most VR toolkits have their own implementations; you could look into VRTK's example scene 007 - Interactions. You don't have to use VR to use it, just use the fallback SDK.
For more details on how this works, I can point you to the article from Oculus: Unity Sample Framework (a bit out of date).
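For a rough idea of what those toolkits do under the hood: instead of feeding Input.mousePosition into the PointerEventData, you can project a point along the gaze ray into screen space using the camera the canvas is rendered with, and raycast the UI from there. The sketch below assumes that setup; eyeTransform, uiCamera and gazeDistance are illustrative names, not part of the question's project.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch only: drives UI raycasts from a world-space gaze ray instead of the mouse.
public class GazeUIPointer : MonoBehaviour
{
    public Transform eyeTransform; // origin and direction of the gaze ray
    public Camera uiCamera;        // the camera that renders the canvas
    public float gazeDistance = 10f;

    void Update()
    {
        // Project a point along the gaze ray into screen space...
        Vector3 worldPoint = eyeTransform.position + eyeTransform.forward * gazeDistance;
        Vector2 screenPoint = uiCamera.WorldToScreenPoint(worldPoint);

        // ...and treat that screen point as the "pointer" for UI raycasting.
        PointerEventData pointerData = new PointerEventData(EventSystem.current);
        pointerData.position = screenPoint;

        List<RaycastResult> results = new List<RaycastResult>();
        EventSystem.current.RaycastAll(pointerData, results);

        if (results.Count > 0)
            Debug.Log("Gaze is over UI element: " + results[0].gameObject.name);
    }
}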
I'm essentially trying to use my own code to accomplish what the BasicCursor and its corresponding script, Cursor.cs, do for gaze following with the Microsoft Toolkit. I believe its UpdateCursorTransform() method is what I'm trying to emulate, but I'm confused.
At the moment I have the cursor following the user's gaze, but it appears to be off-center: the cursor sits lower and to the left of where the user's gaze actually is. What gives?
Here is my code:
// Do a raycast into the world based on the user's
// head position and orientation.
var headPosition = Camera.main.transform.position;
var gazeDirection = Camera.main.transform.forward;

RaycastHit hitInfo;
Ray ray;
Camera c = Camera.main;
ray = c.ScreenPointToRay(headPosition);

if (Physics.Raycast(headPosition, gazeDirection, out hitInfo))
{
    // If the raycast hit a hologram...
    objHit = hitInfo.transform;

    // Move the cursor to the point where the raycast hit.
    this.transform.position = hitInfo.point;

    // Rotate the cursor to hug the surface of the hologram.
    this.transform.rotation = Quaternion.FromToRotation(Vector3.up, hitInfo.normal);
}
Unity units are supposed to be 1 meter.
Mind, if your HoloLens camera's field of view is very wide or very narrow, that will artificially influence what you see (things will seem closer or farther). In a project I was working on, my coworker had put the objects that were to be the holograms 500 units away from the camera, then set the FOV to 10, which made them not sit in space where the real-world walls and floor were. If you tried to walk around the object, you couldn't.
Set the field of view to 60 degrees for the best experience; I believe the prefab HoloLens camera has a field of view of 51.
I am not sure why converting from meters to inches is giving you a value that's off by a power of 10 (the multiplier should be ~39.37).