I am making a 2D (top-down) game in XNA, but this is more of a math problem.
I am trying to rotate the sprite to face the same direction I push the stick on the Xbox controller.
If I push the stick in a direction, I want my character to face that way too. The Xbox controller gives me coordinates like the unit circle: X and Y values ranging from -1 to +1, with 0 as the rest position. I need the angle in radians to use it in XNA. Please help me with this; I don't know how to convert from unit-circle coordinates to an angle in radians.
I found this: http://en.wikipedia.org/wiki/Rotation_matrix, but it was very hard to understand.
Huge disclaimer: I'm working with ancient and forgotten trig, so please forgive me if I put you wrong!
First up, have a think about what the unit circle information is telling you. It looks something like this:
The orange (x,y) coordinates are telling you in which direction the controller has been moved.
The information tells you the height of the triangle, and the base of the triangle.
What you want to know is the angle at the tip of the triangle that touches the centre.
Another way of looking at our orange triangle is like this.
You want to know the angle in green, and you know the adjacent side (which is your x co-ordinate) and the opposite side (which is your y co-ordinate). A little SOHCAHTOA, and you can see that
tan(θ) = y / x
which we can solve with
θ = tan⁻¹(y / x)
C#'s Math.Tan and Math.Atan work in radians, which is what XNA expects.
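As a small sketch (Python here for brevity, but C#'s Math.Atan2(y, x) behaves the same way), an atan2-style inverse tangent is safer than a plain atan(y / x), because it handles all four quadrants and the x = 0 case:

```python
import math

def stick_angle(x, y):
    """Angle in radians for a stick deflection (x, y) in [-1, 1].

    atan2 handles all four quadrants and x == 0, unlike atan(y / x).
    """
    return math.atan2(y, x)
```

Stick pushed straight right gives 0 radians, straight up gives π/2, straight left gives π.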
I'd suggest spending some time here: http://www.mathsisfun.com/geometry/unit-circle.html to help get your head around some of these concepts.
Related
I don't really like posting questions without doing the research, but I'm close to giving up, so I thought I'd give it a shot and ask about my problem.
I want to create custom collision detection in Unity (so please don't advise "use rigidbodies and/or colliders"; I am deliberately not using them).
The main idea: I want to detect basic sphere vs. basic box collisions. I already found an AABB-vs-sphere solution:
bool intersect(sphere, box) {
    // Closest point on the box to the sphere's center: clamp per axis.
    var x = Math.max(box.minX, Math.min(sphere.x, box.maxX));
    var y = Math.max(box.minY, Math.min(sphere.y, box.maxY));
    var z = Math.max(box.minZ, Math.min(sphere.z, box.maxZ));
    // Distance from that closest point to the center, compared to the radius.
    var distance = Math.sqrt((x - sphere.x) * (x - sphere.x) +
                             (y - sphere.y) * (y - sphere.y) +
                             (z - sphere.z) * (z - sphere.z));
    return distance < sphere.radius;
}
And this code does the job: with the box's bounds and the sphere's center and radius, I can detect the sphere colliding with the box.
The problem is that I want to rotate the cube at runtime, and that screws everything up: the bounds drift away from the cube and the collision disappears (or triggers in random places). I've read comments saying that bounding boxes don't work with rotation, but I'm not sure what else I can use to solve this problem.
Can you help me with this topic, please? I'll take any advice I can get (except colliders and rigidbodies, of course).
Thank you very much.
You might try using the separating axis theorem. Essentially, for a polyhedron, you use the normal of each face to create an axis. Project the two shapes you are comparing onto each axis and look for an intersection. If there is no intersection along any of the axes, there is no intersection of shapes. For a sphere, you will just need to project onto the polyhedron's axes. There is a great 2D intro to this from metanet.
Edit: hey, check it out-- a Unity implementation.
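To give a rough idea of how the projections work, here is a hedged 2D sketch in Python (the function and variable names are my own, not from the metanet tutorial or the Unity implementation). Note that for a circle you also need the axis from its center to the nearest vertex, to catch corner cases the edge normals miss:

```python
import math

def project_polygon(axis, verts):
    """Interval [min, max] of the polygon's vertices projected onto axis."""
    dots = [vx * axis[0] + vy * axis[1] for vx, vy in verts]
    return min(dots), max(dots)

def sat_circle_polygon(center, radius, verts):
    """2D separating-axis test between a circle and a convex polygon.

    Tests the polygon's edge normals plus the axis from the circle's
    center to the nearest vertex (needed for corner cases).
    """
    axes = []
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = math.hypot(ex, ey)
        axes.append((-ey / length, ex / length))  # unit edge normal
    # Axis toward the nearest vertex handles the circle-vs-corner case.
    nearest = min(verts, key=lambda v: (v[0] - center[0]) ** 2 + (v[1] - center[1]) ** 2)
    d = math.hypot(nearest[0] - center[0], nearest[1] - center[1])
    if d > 0:
        axes.append(((nearest[0] - center[0]) / d, (nearest[1] - center[1]) / d))
    for axis in axes:
        lo, hi = project_polygon(axis, verts)
        c = center[0] * axis[0] + center[1] * axis[1]
        # The circle projects to [c - radius, c + radius] on every axis.
        if c + radius < lo or c - radius > hi:
            return False  # found a separating axis: no intersection
    return True
```

If no axis separates the projections, the shapes intersect.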
A good method to find if an AABB (axis aligned bounding box) and sphere are intersecting is to find the closest point on the box to the sphere's center and determine if that point is within the sphere's radius. If so, then they are intersecting, if not then not.
I believe you can do the same thing with this more complicated scenario. You can represent a rotated AABB with a geometrical shape called a parallelepiped. You would then find the closest point on the parallelepiped to the center of the sphere and again check if that point exists within the sphere's radius. If so, then they intersect. If not, then not.
The difficult part is finding the closest point on the parallelepiped. You can represent a parallelepiped in code with 4 3d vectors: center, extentRight, extentUp, and extentForward. This is similar to how you can represent an AABB with a 3d vector for center along with 3 floats: extentRight, extentUp, and extentForward. The difference is that for the parallelepiped those 3 extents are not 1 dimensional scalars, but are full vectors.
When finding the closest point on an AABB surface to a given point, you are basically taking that given point and clamping it to the AABB's volume. You would, for example, call Math.Clamp(point.x, AABB.Min.x, AABB.Max.x) and so on for Y and Z.
The resulting X,Y,Z would be the closest point on the AABB surface to the given point.
To do this for a parallelepiped you need to solve the "linear combination" (math keyword) of extentRight(ER), extentUp(EU), and extentForward(EF) to get the given point. In other words, what scalars do you have to multiply ER, EU, and EF by to get to the given point? When you find those scalars you need to clamp them between 0 and 1 and then multiply them again by ER, EU, and EF respectively to get that closest point on the surface of the parallelepiped. Be sure to offset the given point by the Parallelepiped's min position so that the whole calculation is done in its local space.
I didn't want to spend extra time learning how to solve for a linear combination (it seems to involve things like an "augmented matrix" and "Gaussian elimination"), otherwise I'd include that here too. This should hopefully get you, or anyone else reading this, on the right track.
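In case it helps, here is one hedged sketch (Python, with invented names) of that linear-combination step via Cramer's rule, using the parallelepiped's min corner as the origin and the three full edge vectors as ER, EU, EF:

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix whose columns are a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

def closest_point_on_parallelepiped(origin, er, eu, ef, p):
    """Closest point to p on the parallelepiped spanned by er, eu, ef.

    Solves s*er + t*eu + u*ef = p - origin with Cramer's rule, clamps
    the scalars to [0, 1], then maps back to world space.
    """
    rhs = [p[i] - origin[i] for i in range(3)]
    d = det3(er, eu, ef)
    s = det3(rhs, eu, ef) / d
    t = det3(er, rhs, ef) / d
    u = det3(er, eu, rhs) / d
    s, t, u = (max(0.0, min(1.0, v)) for v in (s, t, u))
    return [origin[i] + s * er[i] + t * eu[i] + u * ef[i] for i in range(3)]
```

With unit axis vectors this degenerates to the ordinary AABB clamp, which is a handy sanity check.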
Edit:
Actually, I think it's a lot simpler and you don't need a parallelepiped. If you have access to the rotation (Vector3 or Quaternion) that rotated the cube, you can take its inverse and use that inverse rotation to orbit the sphere around the cube, so that the new scenario is just the normal axis-aligned cube and the orbited sphere. Then you can do a normal AABB-sphere collision detection.
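A hedged sketch of this inverse-rotation idea (Python, assuming for simplicity that the cube is only rotated by a yaw angle about the Y axis; all names here are made up):

```python
import math

def rotate_y(point, angle):
    """Rotate a 3D point about the Y axis by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = point
    return (c * x + s * z, y, -s * x + c * z)

def sphere_vs_rotated_box(center, radius, box_min, box_max, box_yaw, box_center):
    """Sphere vs. a box rotated by box_yaw about Y around box_center.

    Applies the inverse rotation to the sphere's center so the test
    reduces to the plain AABB-vs-sphere check from the question.
    """
    # Move into the box's local space: un-rotate the sphere center.
    local = [center[i] - box_center[i] for i in range(3)]
    local = rotate_y(local, -box_yaw)
    local = [local[i] + box_center[i] for i in range(3)]
    # Standard closest-point AABB test, squared to avoid the sqrt.
    cx = max(box_min[0], min(local[0], box_max[0]))
    cy = max(box_min[1], min(local[1], box_max[1]))
    cz = max(box_min[2], min(local[2], box_max[2]))
    d2 = (cx - local[0]) ** 2 + (cy - local[1]) ** 2 + (cz - local[2]) ** 2
    return d2 < radius * radius
```

For a full Quaternion rotation the idea is identical: rotate the sphere's center by the inverse quaternion instead of by -yaw.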
I have a player camera in a 3D scene with a 2D compass GUI. The scene contains a fixed north, which I want to indicate with a needle on my compass, but how can I do it?
I tried this code from a tutorial, but my needle does not move in the right direction relative to my north.
public void ChangeNortDirection() {
    northDirection.z = player.eulerAngles.y;
    northLayer.localEulerAngles = northDirection;
}
You are setting the north direction to the y axis of the player, so if the player rotates so does the north direction.
What you want to do is have a variable indicating the absolute north direction and then just rotate the compass in the opposite direction the player has rotated.
Another thing: is this in 2D or 3D space? In 2D you only ever want to rotate around the z axis and leave the x and y axes alone. In 3D the north direction should be defined along the z axis only, so that rotation happens in the x-y plane.
For a definitive answer please provide more information.
After looking at the tutorial and seeing what you did I came up with an idea.
The only problem you have is that the pointer is not looking at north but at something else, while it still rotates with the character.
In your case you can simply add one constant value on top of the compass pointer's initial state.
The compass pointer will still rotate with your character's Y rotation, but it will point in a different direction than it normally would.
public float extraOffset; /* Edit this in the editor during play mode until
                             your pointer points towards your north.
                             Then remember that value and set it while you
                             are not in play mode. */
public void ChangeNortDirection() {
    northDirection.z = extraOffset + player.eulerAngles.y; // Just add the extraOffset and you are good to go.
    northLayer.localEulerAngles = northDirection;
}
Also remember to explain your question as well as possible, since it can be really hard for us to know what the exact problem is.
For example, tell us whether your game is 2D or 3D, and what the initial idea behind the problem is.
Hope this helps you out a bit.
Mennolp
I guess your question is related to the previous one: it's nice to see you're using the Unity UI system now (but you could have given people here a bit of feedback on their answers/comments)!
Regarding your current question, one of the best (and simplest) solutions is to get your player's forward transform (the blue axis in the scene view) and use its X and Z values (Y doesn't matter here) to get the angle you want:
northLayer.localEulerAngles = new Vector3(0.0f, 0.0f, Mathf.Atan2(player.forward.x, player.forward.z) * Mathf.Rad2Deg);
This example assumes your northDirection vector is (0, 0, 1) and your northLayer points north when its localEulerAngles.z is set to 0 (and of course that northLayer is a RectTransform component).
Hope this helps,
I'm asking a question that has been asked a million times before, but I still haven't found a good answer after going through these and also resorting to other sites:
How to rotate a graphic over global axes, and not to local axes?
rotating objects in opengl
How to rotate vertices exactly like with glRotatef() in OpenGL?
Rotate object about 3 axes in OpenGL
Rotate an object on its own axes in OpenGL
Is it possible to rotate an object around its own axis and not around the base coordinate's axis?
OpenGL Rotation - Local vs Global Axes
The task is relatively simple: I am making a sensor module with an accelerometer and gyro onboard, and I need to display its rotation.
I have a 3D model of the object in STL format; I've written an STL parser and I can visualise the object perfectly well.
When I try to rotate it I get all kinds of strange results.
I am using SharpGL (the Scene component).
I tried to write an "effect" that (for now) is controlled with 3 separate trackbars, but eventually this will be driven by the sensor data.
I wanted to do it as an effect because (in theory) I could have more objects in the scene and I only need to rotate a specific one (and also for educational purposes as I am relatively new to OpenGL).
I tried to rotate the object using quaternions, multiplying them and generating a rotation matrix, then applying gl.MultMatrix(transformMatrix.AsColumnMajorArray).
I also tried gl.Rotate(angleX, 1, 0, 0) (and similar for Y and Z axes).
Depending on the order of quaternion multiplication or gl.Rotate(...) statements, I would get the object to rotate about the local axis (first), something weird (second), and the "world" third. For example, if I did it with quaternions qX*qY*qZ and got the rotation matrix from that, I would get (I think) the rotations about a local X, "arbitrary" Y, and the world Z axes.
The part I cannot understand is that when I apply gl.Rotate(dx, 1, 0, 0) (and others for Y and Z rotations) inside the trackbar's Scroll event, the entire scene rotates about the X, Y, and Z axes and in that order, and exactly as I expected the rotation to happen.
My only issue with this approach is that the entire scene rotates. So if I (hypothetically) wanted to draw a room around the object, the whole room would rotate too, which is not what I want.
I will post any code requested, but I feel there is too much and the aforementioned links have similar code to what I have.
At the end of the day, I want my object (Polygon in SharpGL terms) to rotate about its own axes (or about the "world" axes, but be consistent).
EDIT:
I've placed the project in my dropbox: https://goo.gl/1LzNhn
If someone wants to look at it, any help will be greatly appreciated.
In the MainForm, the TbRotXYZScroll function has trackbar code and the Rotation class is where the problem resides (or so I think).
Rotate X, Y, and Z by say 45 degrees in that order. All seems to work great. Now rotate X, Y, and Z further by another 30 or so degrees, try to visualise about which axis the rotation is happening as you rotate...
X seems to be always world and Z seems to be always local, while Y is something in between (changing the order of rotation changes this).
Going into the TbRotXYZScroll function and changing method to 2 does what I want (except that it rotates the entire scene). I tried quaternions and they produce the exact same result, so I must be doing something wrong...
Changing it to 1 produces a similar result, but that applies the rotation to the polygons alone (not to everything being rendered, which also includes the local coordinate axes).
At the end of the day, I want my object (Polygon in SharpGL terms) to
rotate about its own axes (or about the "world" axes, but be
consistent).
I think the answer you quoted in your question already explains the situation. To perform rotation around the object's own axis:
1. Translate/rotate your object so that the object's axis overlaps one of the base axes (x, y, or z).
2. Perform your rotation.
3. Revert your object back to its original position (simply revert the translations/rotations from step 1).
Note: while doing this, keep in mind that OpenGL applies transformations in reverse order.
Example:
    glRotatef(90, 1, 0, 0);
    glTranslatef(-5, -10, 5);
    drawMyObject();
In this case, OpenGL will first translate your object and then rotate it, so write your transformations with that in mind. Here is my answer that may give you an idea.
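To illustrate the translate-rotate-translate idea (and the reverse-order point) with plain math, here is a hedged 2D sketch in Python:

```python
import math

def rotate_about_pivot(point, pivot, angle):
    """Rotate a 2D point about a pivot: translate to the origin, rotate,
    translate back.

    This is the matrix product T(pivot) * R(angle) * T(-pivot) applied to
    the point; the transform written last is applied first, which is the
    "reverse order" behaviour of OpenGL's matrix stack.
    """
    x, y = point[0] - pivot[0], point[1] - pivot[1]  # T(-pivot)
    c, s = math.cos(angle), math.sin(angle)
    x, y = c * x - s * y, s * x + c * y              # R(angle)
    return (x + pivot[0], y + pivot[1])              # T(pivot)
```

The same composition in 3D, with the pivot at the object's own center, rotates the object about its local axes without moving the rest of the scene.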
The title says it.
This link (Wikipedia: Midpoint Circle Algorithm) shows how to get the points for a full circle; now I need it for a semicircle (an arc?).
The semicircle should face up, like in this image (check this image).
But the bottom of the circle should be open!
For those who might think this is homework: it's not.
I'm working on a game in XNA and I want the 'rocket' that comes out of the rocket launcher to follow a certain path, a semicircle.
I don't think that algorithm is the right way to do what you want to do: describe a path. That algorithm is for plotting pixels along the path rather than describing it as such. Instead, trigonometry is probably what you want. Increase the angle from one point to another, step by step. A circle is 2π radians, so half a circle is π radians.
This should provide what you need to describe the arc. http://mathworld.wolfram.com/Trigonometry.html
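A hedged sketch of that angle-stepping idea in Python (the names are my own):

```python
import math

def semicircle_points(cx, cy, radius, steps):
    """Points along the upper half of a circle, sweeping the angle
    from 0 to pi.

    A full circle is 2*pi radians, so half a circle is pi radians;
    stepping the angle traces the arc, e.g. as a rocket's path.
    """
    points = []
    for i in range(steps + 1):
        angle = math.pi * i / steps
        points.append((cx + radius * math.cos(angle),
                       cy + radius * math.sin(angle)))
    return points
```

In XNA the same loop works with Math.Cos/Math.Sin; more steps give a smoother path.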
Just use the midpoint circle algorithm for values where y > 0 (assuming the circle's midpoint is at (0, 0)). Anything below that is "open".
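As a hedged sketch (Python, with my own names, assuming a y-up coordinate system), that filter drops straight into the standard midpoint circle loop:

```python
def semicircle_pixels(cx, cy, r):
    """Midpoint circle algorithm, keeping only points with y > 0
    relative to the center, which leaves the bottom open."""
    points = set()
    x, y = r, 0
    err = 1 - r
    while x >= y:
        # The eight octant-symmetric points for this (x, y) pair.
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            if py > 0:  # keep only the top half
                points.add((cx + px, cy + py))
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return points
```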
It might be that my math is rusty or I'm just stuck in my box after trying to solve this for so long, either way I need your help.
Background: I'm making a 2d-based game in C# using XNA. In that game I want a camera to be able to zoom in/out so that a certain part of objects always are in view. Needless to say, the objects move in two dimensions while the camera moves in three.
Situation: I'm currently using basic trigonometry to calculate what height the camera should be at for all objects to be visible. I also position the camera between those objects.
It looks something like this:
1. Loop through all objects to find the outer edges of our objects: farRight, farLeft, farUp, farDown.
2. Once we know the edges of what has to be shown, calculate the center, which is also the camera position:
    CenterX = farLeft + (farRight - farLeft) * 0.5f;
    CenterY = farUp + (farDown - farUp) * 0.5f;
3. Loop through our edges to find the largest value relative to our camera position, i.e. the furthest distance from the center of the screen.
4. Using that largest distance value we can easily calculate the height needed to show all of those objects (points):
    float T = 90f - Constants.CAMERA_FIELDOFVIEW * 0.5f;
    float height = (float)Math.Tan(MathHelper.ToRadians(T)) * length;
So far so good, the camera positions itself perfectly based on the calculations.
Problem:
a) My render target is 1280x720 with a field of view of 45 degrees, so you always see a bit more on the X axis, 560 pixels more actually. This is not a problem per se, but it feeds into b)...
b) I want the camera to be a bit further out than it is, so that you see a bit of what is happening beyond the furthest point. Sure, this already happens on the X axis, but that is really a side effect of my flawed logic. I want to be able to see more on both the X and Y axes, and to control this behavior.
Question
Uhm, so to clarify: I would like some input on how to make the camera position itself to create this state:
Objects never get closer than, say, 150 pixels to the edge on the X axis and 100 pixels to the edge on the Y axis. To do this, the camera should position itself along the Z axis so that the field of view covers it all.
I don't need help with the coding, just the math and logic of calculating the height of my camera. As you can probably see, I have a hard time wrapping my head around this, and an even harder time explaining it to you.
If anyone out there has been dealing with this or is just better than me at math, I'd appreciate whatever you have to say! :)
Don't you just need to add or subtract 150 or 100 pixels (depending on which edge you are looking at) to each distance measurement in your loop at step 3, and carry this larger value into length at step 4? Or am I missing something?
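A hedged sketch of that suggestion in Python (the 90 - fov/2 formula comes from the question; the names and the pixel-to-world conversion are my own assumptions): treating the margin as a pixel border means inflating the measured distance before applying the same tangent formula.

```python
import math

def camera_height(length, fov_deg, half_screen_px, margin_px):
    """Camera height keeping the furthest point margin_px from the edge.

    length is the furthest distance (world units) from the camera center.
    At height h, half the view spans h * tan(fov/2) world units mapped to
    half_screen_px pixels, so inflating length by
    half_screen_px / (half_screen_px - margin_px) reserves a margin_px
    border; then the original tan(90 - fov/2) formula applies.
    """
    padded = length * half_screen_px / (half_screen_px - margin_px)
    t = math.radians(90.0 - fov_deg * 0.5)
    return math.tan(t) * padded
```

With margin_px = 0 this reduces to the question's own formula; per-axis margins (150 for X, 100 for Y) just mean computing this per axis and taking the larger height.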
I can't explore this area further at the moment, but if anyone has the same issue and is not satisfied with the provided answer, there is another possibility in XNA:
Viewport.Unproject()
This nifty method converts a screen-space coordinate to a world-space one.
Viewport.Project()
does the opposite, namely converting world space to screen space. Just thought someone might want to go further than I did. As much as my OCD hates leaving things imperfect, I can't keep perfecting this... yet.