I've been working with OpenGL (via the OpenTK library) for a while, and now I'd like to select an object with the mouse (which I think I'll handle myself) and move it along with the mouse.
I'm looking for an algorithm to translate 2D mouse coordinates into my XZ coordinates in 3D (my Y coordinate is constant and corresponds to my "ground" level; if there's another object at that place, the moved object should be placed on top of it, but I'll handle that myself using collision detection).
What I'm asking for here is an algorithm that lets my object follow the mouse cursor, taking the camera and projection matrices into account.
UPDATE
I'm working with OpenGL 3.3+
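With the programmable pipeline, the usual approach is to unproject the mouse position into a world-space ray and intersect that ray with the plane y = ground. Here is a minimal sketch using OpenTK's math types; the function and parameter names are mine, and it assumes OpenTK's row-vector convention (world to clip is v * view * projection):

using OpenTK;

static Vector3 MouseToGroundPlane(Vector2 mouse, Vector2 viewportSize,
                                  Matrix4 view, Matrix4 projection, float groundY)
{
    // Mouse pixels -> normalized device coordinates (-1..1, y flipped).
    float x = 2f * mouse.X / viewportSize.X - 1f;
    float y = 1f - 2f * mouse.Y / viewportSize.Y;

    // Unproject one point on the near plane and one on the far plane.
    Matrix4 invViewProj = Matrix4.Invert(view * projection);
    Vector4 near = Vector4.Transform(new Vector4(x, y, -1f, 1f), invViewProj);
    Vector4 far  = Vector4.Transform(new Vector4(x, y,  1f, 1f), invViewProj);
    Vector3 p0 = near.Xyz / near.W;
    Vector3 p1 = far.Xyz / far.W;

    // Intersect the resulting ray with the horizontal plane y = groundY.
    Vector3 dir = p1 - p0;
    float t = (groundY - p0.Y) / dir.Y;   // guard against dir.Y == 0 in real code
    return p0 + t * dir;
}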
Hi,
This seems like an easy problem to fix, but I still can't figure it out on my own.
I need to be able to find the position of my mouse click on a 3D plane in world space, corresponding to the plane's scale.
For example:
If I click point A in my game view, it should return me (0, 0).
The MainCamera, however, can move around.
If I click point C, it should return me a Vector2 which (I believe) should correspond to the scaling of the 3D object.
And clicking inside the 3D plane should once again return me a Vector2 according to the object's scaling.
I know that I can use Input.mousePosition, but it just returns the mouse position without any relation to the plane object itself.
There is " Camera.ScreenToWorldPoint" that converts to in game position.
here:
https://docs.unity3d.com/2018.4/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html
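For a point on the plane itself, a raycast plus a local-space conversion may be more direct. A minimal sketch (the class name is mine), attached to the plane, assuming it has a collider; InverseTransformPoint divides out the object's scale, which sounds like the coordinate you're after:

using UnityEngine;

public class PlaneClick : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit) && hit.transform == transform)
            {
                // Local coordinates account for the plane's position, rotation
                // and scale; Unity's built-in plane lies in its local XZ plane.
                Vector3 local = transform.InverseTransformPoint(hit.point);
                Debug.Log(new Vector2(local.x, local.z));
            }
        }
    }
}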
Hey team, I'm currently working on a 3rd-person game where I would like to fire grappling hooks from the player to a point that is offset from the center of the camera.
I have a screen overlay canvas with UI images for crosshairs. When the left shift is held down, the crosshairs move outward from center along the x axis and return to center once shift is released, a bit like crosshair spread in games, except I want to trigger the spread via the shift key. These crosshairs are meant to dictate where the anchors of the grappling hooks land, originating from the player and hitting whatever object is directly forward of the crosshairs (imagine Attack on Titan ODM gear, if you've seen it). I am looking for a way to raycast from the player to the point forward of these crosshairs while they're offset from the center.
So far I have the grappling system set up and working, but I'm having trouble with the direction parameter when I use the crosshair spread. It separates fine, but where the hooks land in relation to the crosshairs is obviously off, as I'm currently trying to use angle calculations instead of what is forward of these reticles.
I'm basically wondering if it is possible to use these screen overlay UI images to cast forward from, or if there's a better way to accomplish the same thing. I have my doubts, because they're screen overlay, so I imagine their coordinates won't be attached to the camera as they appear.
Thanks in advance for your help.
What I would do is determine the location of the reticles on the screen, then (as Aybe suggested) use ScreenPointToRay or ViewportPointToRay (depending on whether it's easier to get a pixel position or a fractional position of each reticle) and Physics.Raycast that ray from the camera into the scene to find where the rays collide. At this point, you have the two world positions where the player seems to want to shoot the hooks:
Vector3 hookTarget1;
Vector3 hookTarget2;
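A hedged sketch of that first step (the helper name is mine). For a Screen Space - Overlay canvas, a reticle's RectTransform.position is already in screen pixels, so it can feed ScreenPointToRay directly:

Vector3 GetHookTarget(Vector2 reticleScreenPos, Camera cam)
{
    Ray ray = cam.ScreenPointToRay(reticleScreenPos);
    if (Physics.Raycast(ray, out RaycastHit hit))
        return hit.point;          // world position the reticle is pointing at
    return ray.GetPoint(100f);     // nothing hit: pick a point far along the ray
}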
So now you actually have to fire the hooks. But as you know, the hooks aren't being shot from the camera; they are being shot from the player, and they may be offset by a bit. Let's call the originating points (which may have the same value):
Vector3 hookOrigin1;
Vector3 hookOrigin2;
So, then you can create Rays that originate from the hook origins and point at the targets:
Ray hookShot1 = new Ray(hookOrigin1, hookTarget1 - hookOrigin1);
Ray hookShot2 = new Ray(hookOrigin2, hookTarget2 - hookOrigin2);
Using these rays, you can do another Physics.Raycast if you would like, to confirm that there aren't any trees or other obstacles between the player and the target location; if there are, that may be where the anchor should actually sink:
Vector3 anchorPoint1;
Vector3 anchorPoint2;
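A sketch of that confirmation cast (the helper name is mine), limiting the ray to the distance of the target so anything hit closer becomes the anchor:

Vector3 GetAnchorPoint(Vector3 origin, Vector3 target)
{
    Ray hookShot = new Ray(origin, target - origin);
    float maxDist = Vector3.Distance(origin, target);
    if (Physics.Raycast(hookShot, out RaycastHit hit, maxDist))
        return hit.point;   // an obstacle sits between player and target
    return target;          // clear line of sight: anchor at the target itself
}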
The segment between the origin of these rays and these anchor points would be appropriate for rendering the cable, calculating physics for how to move the player, as well as checking for collisions with world geometry that might cause the anchor to release.
I am trying to create an interactive boardgame in Unity, using hand gestures to activate mechanics. The game will be projected on a table with a camera attached next to it, which will be used to capture motion.
The game consists of two grids: a 30x30 hidden tile grid, with a map grid on top of it.
My question is then: what is the smartest way to go about synchronizing the camera with the hidden tile grid, so I can see whether a detected BLOB from my EmguCV is on top of a given tile?
I've considered:
Creating another tilemap for the camera, and checking if the tile numbers in both tilemaps are equal to each other.
Checking if the tile center position is close to the blob center position, using Unity's Vector2.Distance method (sketched below).
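A hedged sketch of the second option (all names are mine), assuming the blob center has already been mapped from camera pixels into the grid's 2D space; the hard part is that mapping, since the distance check only works once blob and tile coordinates live in the same space:

using UnityEngine;

public static class TileLookup
{
    // Returns the (column, row) of the tile whose center is within `threshold`
    // of the blob center, or null if the blob isn't over any tile.
    public static Vector2Int? FindTile(Vector2 blobCenter, float tileSize,
                                       int gridSize, float threshold)
    {
        for (int x = 0; x < gridSize; x++)
        for (int y = 0; y < gridSize; y++)
        {
            Vector2 tileCenter = new Vector2((x + 0.5f) * tileSize,
                                             (y + 0.5f) * tileSize);
            if (Vector2.Distance(blobCenter, tileCenter) < threshold)
                return new Vector2Int(x, y);
        }
        return null;
    }
}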
I'm currently making an Android game in Unity3D. I want to make it so that when you touch the screen, the cube looks at the touch location and moves toward it. I tried attaching a LookAt script aimed at the touch position, but the rotation is weird and the cube doesn't move toward the touch.
You can use Unity's built-in Navigation System to make your object move from one point to the other, use a Ray to get the point the player touched on the screen, and use Transform.LookAt() to make your player look at that point (a sketch combining the three follows the links below).
Navigation
Raycasting
Transform.LookAt
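A minimal sketch combining those three pieces, assuming the cube has a NavMeshAgent and the scene has a baked NavMesh (the class name is mine):

using UnityEngine;
using UnityEngine.AI;

public class TouchMover : MonoBehaviour
{
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // Keep the look target at the cube's own height so it only
                // rotates around the y axis instead of tilting into the ground.
                Vector3 flat = new Vector3(hit.point.x, transform.position.y, hit.point.z);
                transform.LookAt(flat);
                agent.SetDestination(hit.point);
            }
        }
    }
}

Flattening the look target is likely what fixes the "weird rotation": LookAt aimed at a point on the ground pitches the cube downward.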
Expression Blend enables you to import 3D models. I want to animate a 3D object with code, but I just can't figure out which property values I have to modify to make an object rotate. Let me show you what I mean:
So if I want to rotate this object, I could use the camera orbit tool, and if I use it I can end up with something like:
I know I can create a storyboard and build the animation by modifying the object, but I need to rotate the object along the x axis with a slider. If I modify just one value, it rotates in a weird way; I actually have to change several properties to do it properly. For example, when I rotate the object along the x axis with the camera orbit tool, I can see that all these properties change. I need to figure out the algorithm being used to rotate the object.
The math to move the camera position around so that you appear to be rotating around the x axis is just the parametric equation of a circle:
y = r·cos(t), z = r·sin(t), with x held constant,
where t is the angle from 0 to 2π and r is the camera's distance from the object.
Imagine you are standing on the street looking at a house. The camera's coordinates have to follow a circle around the house and the latitude and longitude are continuously changing to keep the same distance from the house. So there is no one value you can change to make it rotate.
Once you know the camera position, the direction is just the difference between the origin and the camera position.
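In code (WPF's System.Windows.Media.Media3D types; the method name and parameters are mine):

using System;
using System.Windows.Media.Media3D;

// Orbit the camera around the x axis at radius r; t runs from 0 to 2*PI.
static void OrbitAroundX(PerspectiveCamera camera, double r, double t)
{
    var cameraPos = new Point3D(0, r * Math.Cos(t), r * Math.Sin(t));
    camera.Position = cameraPos;
    // Direction = origin - camera position, i.e. look back at the object.
    camera.LookDirection = new Point3D(0, 0, 0) - cameraPos;
}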
All this is not hard to calculate, but there is an easier way: instead, keep the camera fixed and rotate the object. This makes animations much easier. Here is an MSDN article that contains examples of that approach, including animations:
3-D Transformations Overview
That article is written for WPF and Visual Studio, but you can easily adapt the same ideas to Expression Blend.
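As a hedged sketch of that approach in code (model and slider are assumed names for your imported model's ModelVisual3D and a Slider ranging 0 to 360; the 3D types come from System.Windows.Media.Media3D):

var rotation = new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0); // rotate about x
model.Transform = new RotateTransform3D(rotation);

// Drive the angle (in degrees) straight from the slider.
slider.ValueChanged += (s, e) => rotation.Angle = e.NewValue;

Because AxisAngleRotation3D.Angle is a dependency property, WPF re-renders the model as soon as the slider changes it, so no storyboard is needed for direct manipulation.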