I have a camera that needs to orbit locally around an object. This object has an arbitrary rotation, described by a normal vector. Imagine a spherical planet, with a camera looking down at a certain triangle on that planet.
My current implementation uses the classic vector-crossing method to generate a rotation matrix from the triangle's normal, then uses that matrix as the basis for the standard orbit camera. This works fine near the equator of the planet, but near the poles it starts blowing up, with the camera behaving increasingly erratically the closer it gets to the very center of the pole.
I've determined that this is due to the first vector cross: when the two vectors are nearly parallel, their cross product has a length close to zero, so normalizing it amplifies numerical error (I'm not sure what the technical name for the phenomenon is, but it's the degenerate case of the cross product). If the first vector is (0, 1, 0), the craziness happens when the normal is close to (0, 1, 0) or (0, -1, 0).
I've found quite a few descriptions of this problem, but no working solutions. The closest I've come was here: http://xboxforums.create.msdn.com/forums/p/13278/13278.aspx It suggests that to handle the 'singularity', you should switch to a different reference vector when it is detected. I can easily determine when the camera is on a planet face that will cause this to happen (as my planet sphere is generated from 6 quadtrees projected to spherical coordinates), but there is a very noticeable snap when I switch to a new vector.
Here's the current code:
Vector3 triNormal;                // the current normal of the target vertex
Vector3 origin = Vector3.Forward; // fixed reference vector
Matrix orientation = Matrix.Identity;
orientation.Forward = origin;
orientation.Up = triNormal;
orientation.Right = Vector3.Cross(orientation.Up, orientation.Forward);
orientation.Right.Normalize();
orientation.Forward = Vector3.Cross(orientation.Right, orientation.Up);
orientation.Forward.Normalize();
I've experimented with detecting when triNormal is on one of the pole faces and setting 'origin' to something else, such as Right. The camera then behaves properly once it is on that face, but is immediately snapped to a new rotation as it crosses over. This makes sense, since its reference vector has just changed, but the snap needs to be eliminated for a smooth user experience. I tried to offset the orbit camera's yaw to counteract the new coordinate system, but the required offset doesn't seem to be constant; it depends on where on the sphere the camera is currently aiming, and I'm not sure how to calculate the difference.
Also note that as it's in XNA and C#, I'm using a right-hand coordinate system.
I don't understand why you do this:
orientation.Forward = Vector3.Cross(orientation.Right, orientation.Up);
orientation.Forward.Normalize();
when you've already used the previous orientation.Forward to get orientation.Right.
(If you are crossing unit vectors, note that the result only comes out unit length when they are perpendicular; its length is the sine of the angle between them, so the normalize is needed in general.)
Anyway, if triNormal is the current normal of the target vertex, and your camera is looking down at it, I think you should have:
orientation.Forward = -triNormal
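For the snapping problem itself, here's a minimal sketch of one common approach (XNA types assumed; not the asker's exact method): instead of always deriving the basis from a fixed world axis, carry the Forward from the previous frame and re-orthogonalize it against the new Up. The reference vector then changes continuously with the camera, so there is no pole to snap across; a fixed fallback axis is only needed to seed the very first frame or a truly degenerate input.
Matrix BasisFromNormal(Vector3 up, Vector3 previousForward)
{
    Matrix m = Matrix.Identity;
    m.Up = up;
    // same cross order as the question's code
    m.Right = Vector3.Cross(up, previousForward);
    if (m.Right.LengthSquared() < 1e-6f)            // degenerate: previousForward ~ +/-up
        m.Right = Vector3.Cross(up, Vector3.Right); // seed with any axis not parallel to up
    m.Right.Normalize();
    m.Forward = Vector3.Cross(m.Right, up);
    m.Forward.Normalize();
    return m;
}
Each frame, pass in the Forward of the orientation computed last frame; since consecutive normals on the sphere differ only slightly, the resulting basis rotates smoothly over the poles.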
I'm trying to give a child gameObject a rotation that mixes the localRotation of the root with 0's of the world.
(This gameObject is a gun of sorts that the player character holds, like a fireflower, and out of the face of this object fireballs are released)
Essentially I want this item to always face forwards (so it faces towards where my character is facing). When I do:
this.transform.rotation = Quaternion.Euler(0,0,0);
the X and Z are correct, but the Y value always keeps the same forward direction in the world, no matter which way the character rotates.
I've messed around and found that
this.transform.localRotation = Quaternion.Euler (270, 180, 90);
works somewhat, but it is off by an amount that would take a lot of trial and error to find, and I would rather not work with hardcoded values.
In the end I want a localRotation equal to (0 in world space, the parent's Y rotation, 0 in world space), but I can't get it to work.
So is there any way to mix a localRotation and a world rotation?
Also if I'm not providing enough information just let me know.
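A minimal sketch of one way to express "(0 of world, parent's Y, 0 of world)" directly, assuming this script sits on the child (the gun) and the parent is the character root:
// Hedged sketch: zero world-space pitch/roll, keep only the parent's yaw.
void LateUpdate()
{
    float parentYaw = transform.parent.eulerAngles.y; // parent's world Y angle
    transform.rotation = Quaternion.Euler(0f, parentYaw, 0f);
}
Note that if the parent can itself pitch or roll, eulerAngles.y is only an approximation of its facing; projecting transform.parent.forward onto the horizontal plane is a more robust way to extract the yaw.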
I'm trying to make a spherical burst of rays for the purpose of checking collision, but having specific interactions happen based upon what or where each ray hit. Hence why I'm using rays rather than something simpler such as OverlapSphere.
The reason I'm looking for how to make a sphere is that I can reuse the same math for my rays, by having them go to where the sphere's vertices would be. But every way I can find of making a sphere has the lines get closer together near the poles, which makes sense, as it's pretty easy to do. But as you can imagine, it's not that useful for my current project.
TL;DR:
How do I make a sphere with equidistant vertices? If it's not perfectly equidistant that's fine; it just needs to be pretty close. If it isn't exact, it would be great if you could say how large the difference is, and where, if applicable.
Extra notes:
I've looked at this and this, but the math is way over my head, so what I've been looking for might've just been staring me in the face this whole time.
You could use an icosphere. The vertices of the base icosahedron lie on equilateral triangles, so they are exactly equidistant; after subdivision the spacing is no longer perfectly uniform, but it stays very close.
To construct the icosphere, first you make an icosahedron and then recursively split its faces into smaller triangles, as explained in this article.
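As a starting point, a small sketch (Unity types assumed) of the 12 base icosahedron vertices, built from the golden ratio; subdividing the 20 faces and normalizing the new vertices back onto the sphere yields the icosphere:
// The 12 vertices of an icosahedron, projected onto the unit sphere.
float t = (1f + Mathf.Sqrt(5f)) / 2f; // golden ratio
Vector3[] verts =
{
    new Vector3(-1,  t,  0), new Vector3( 1,  t,  0),
    new Vector3(-1, -t,  0), new Vector3( 1, -t,  0),
    new Vector3( 0, -1,  t), new Vector3( 0,  1,  t),
    new Vector3( 0, -1, -t), new Vector3( 0,  1, -t),
    new Vector3( t,  0, -1), new Vector3( t,  0,  1),
    new Vector3(-t,  0, -1), new Vector3(-t,  0,  1),
};
for (int i = 0; i < verts.Length; i++)
    verts[i] = verts[i].normalized;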
Are you aware that the sphere given to you by Unity is in fact designed with this exact goal in mind?
ie, the entire raison d'etre of the sphere built in to Unity is that the points are fairly evenly spaced ...... roughly equidistant, as you phrase it.
To bring up such a sphere in Unity, just create one from the GameObject menu. You can then instantly get access to the verts, as you know:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vv = mesh.vertices;
int kVerts = vv.Length;
for (int i = 0; i < kVerts; ++i)
    Debug.Log(vv[i]);
Note you can easily check "which part of the sphere" they are on by (for example) checking how far they are from your "cities" (or whatever) or just check (for example) the z values to see which hemisphere they are in .. et cetera.
Furthermore...
Please note. Regarding your overall reason for wanting to do this:
but having specific interactions happen based upon what or where each ray hit
Note that it could not be easier to do this using PhysX. (The completely built-in game physics in Unity.) Indeed, I have never, ever, looked at a collision without doing something "specific" depending on "where it hit!"
You can for example get the point where the contact was with http://docs.unity3d.com/ScriptReference/RaycastHit-point.html
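For instance, a minimal sketch (reusing the verts gathered above; the 100f range is an arbitrary assumption) of casting one ray per vertex direction and reacting to where each one lands:
// One ray per sphere-vertex direction; hit.point is the world-space contact.
foreach (Vector3 v in vv)
{
    RaycastHit hit;
    if (Physics.Raycast(transform.position, v.normalized, out hit, 100f))
        Debug.Log("hit " + hit.collider.name + " at " + hit.point);
}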
It's worth noting that it is absolutely inconceivable one could write something approaching the performance of PhysX in casual programming.
I hope this makes things easier!
1) slice the sphere into N circles
2) compute the perimeter of each circle
3) divide it by the same angle that created the slices; this gives you the number of vertices, and also the angle step inside the circle
4) cast rays
This is how I coded it in C++ + OpenGL:
// draw unit sphere points (r=1 center=(0,0,0)) ... your ray directions
int ia,na,ib,nb;
double x,y,z,r;
double a,b,da,db;
na=16;                                  // number of slices
da=M_PI/double(na-1);                   // latitude angle step
for (a=-0.5*M_PI,ia=0;ia<na;ia++,a+=da) // slice sphere to circles in xy planes
    {
    r=cos(a);                           // radius of actual circle in xy plane
    z=sin(a);                           // height of actual circle in xy plane
    if ((ia==0)||(ia==na-1)) { nb=1; db=0.0; } // poles: a single vertex (avoids nb=0)
    else
        {
        nb=ceil(2.0*M_PI*r/da);         // number of vertices on this circle
        db=2.0*M_PI/double(nb);         // longitude angle step
        }
    for (b=0.0,ib=0;ib<nb;ib++,b+=db)   // cut circle into vertices
        {
        x=r*cos(b);                     // compute x,y of vertex
        y=r*sin(b);
        // this just draws the ray direction (x,y,z) as a line in OpenGL,
        // so you can ignore it and add your ray cast instead
        double w=1.2;
        glBegin(GL_LINES);
        glColor3f(1.0,1.0,1.0); glVertex3d(x,y,z);
        glColor3f(0.0,0.0,0.0); glVertex3d(w*x,w*y,w*z);
        glEnd();
        }
    }
This is how it looks:
The R,G,B lines are the sphere coordinate system axes X,Y,Z; the white-ish lines are your vertices (white) plus their directions (gray).
[Notes]
do not forget to include math.h
and replace the OpenGL stuff with yours
If you want 4, 6, 8, 12 or 20 vertices, then you can have exactly equidistant vertices by using the Platonic solids, all of which fit inside a sphere. The actual coordinates of these should be easy to get. For other numbers of vertices you can use other polyhedra and scale the vertices so they lie on a sphere. If you need lots of points then a geodesic dome might be a good base; the C60 bucky-ball could be a good base with 60 points. For most of these you should be able to find 3D models from which you can extract coordinates.
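For instance, a minimal sketch of the 6-vertex case, the octahedron, whose vertices lie exactly on the unit sphere with every pair of neighbors equidistant (Unity's Vector3 assumed):
// The 6 octahedron vertices: one on each axis, in both directions.
Vector3[] octahedron =
{
    new Vector3( 1, 0, 0), new Vector3(-1, 0, 0),
    new Vector3( 0, 1, 0), new Vector3( 0,-1, 0),
    new Vector3( 0, 0, 1), new Vector3( 0, 0,-1),
};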
I think the easiest way to control points on a sphere is by using spherical coordinates. Then you can control the position of points around the sphere by using two angles (rho and phi in the code below) and the radius.
Example code for filling points uniformly around a rotating sphere (for fun):
var time = 1;     // increment this variable every frame to see the rotation
var count = 1000; // number of points
var radius = 1.0; // sphere radius
for (int i = 0; i < count; i++)
{
    var rho = time + i;                // "spin" angle, offset per point
    var phi = 2 * Math.PI * i / count; // sweep angle
    var x = (float)(radius * Math.Sin(phi) * Math.Cos(rho));
    var z = (float)(radius * Math.Sin(phi) * Math.Sin(rho));
    var y = (float)(radius * Math.Cos(phi));
    Draw(x, y, z); // your drawing code for rendering the point
}
As some answers have already suggested, use an icosahedron-based solution. The source for this is quite easy to come by (and I have written my own several times), but I find the excellent Primitives Pro plugin extremely handy under many other circumstances, and always use their sphere instead of the built-in Unity one.
Link to Primitives Pro component
Primitives Pro options
I'm working on a first person 3D game. The levels are entirely cube based, walls/floors/etc are all just tiled cubes (1x1x1).
I'm currently creating a ray using the camera position, with the camera's rotation giving the direction. I then want to ray cast out to the first cube that is not empty (or stop when the ray falls off the grid). Quite often, these are direction vectors such as (0,0,1) or (1,0,0).
I'm not having much luck finding a Bresenham line-drawing algorithm that works with a direction vector rather than a start/end point, especially since the direction vector will not contain only integers.
So, for a specific question: am I even coming close to going about this the right way, and could someone explain in detail how it should be done regardless?
Bresenham won't help you here, I'm afraid... what you need are Ray/Line-Plane intersection algorithms:
Line-Plane intersection on Wikipedia
Line-Plane intersection on Wolfram
Ray-Plane intersection on SigGraph
In very rough mathy-pseudocode:
(caveat:It's been a long time since I've done 3d graphics)
// Ray == origin point plus some distance t in a direction
myRay = Porg + t*Dir;
// Plane == the points P of the cube face satisfying dot(Ncubeface, P) = d,
// where Ncubeface is the face's outward normal and d = dot(Ncubeface, Pcube)
// for some point Pcube on the face
myPlane: dot(Ncubeface, P) - d = 0;
// Straight shot: from ray origin to the point on the cube face
straightTo = (Pcube - Porg);
With these two equations, you can infer some things:
If the dot product of 'straightTo' and the plane normal is zero (call this "angA"), your origin point lies in the plane of the cube face.
If the dot product of the ray direction and the plane normal (call this "angB") is close to 0, the ray is running parallel to the face of the cube - that is, not intersecting (unless the origin lies in the face's plane, as above).
The intersection distance along the ray works out to t = angA / angB; if that is negative, your ray is pointing away from the cube face.
There's other stuff, too, but I'm already pressing the limits of my meager memory. :)
EDIT:
There might be a "shortcut", now that I think it through a bit... this is all assuming you're using a 3D array-like structure for your "map".
Ok, so bear with me here, thinking and typing on my phone - what if you used the standard old Bresenham delta-error algorithm, but "fixed" it into 2D?
So let's say:
We are at position (5, 5, 5) in a (10x10x10) "box"
We are pointing (1 0 0) (i.e., +X axis)
A ray cast from the upper-left corner of our view frustum is still just a line; the definitions for "x" and "y" change, is all
"X" in this case would be (mental visualization)...say along the axis parallel to the eye line, but level with the cast line; that is, if we were looking at a 2D image that was (640x480), "center" was (0,0) and the upper left corner was (-320,-240), this "X axis line" would be a line cast through the point (-320,0) into infinity.
"Y" would likewise be a projection of the normal "Y" axis, so...pretty much the same, unless we're tilting our heads.
Now, the math would get hairy as hell when trying to figure out what the next deltaX value would be, but once you'd figured out the formula, it'd be basically constant-time calculations (and now that I think about it, the "X axis" is just going to be the vector <1 0 0> projected through whatever your camera projection is, right?).
Sorry for the rambling, I'm on the train home. ;)
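For what it's worth, here is a minimal sketch of that idea in its standard form, an Amanatides & Woo style voxel traversal (XNA types; the bool[,,] grid and all names are assumptions, and the direction must be non-zero). It works directly from an origin and a non-integer direction vector, stepping exactly one cube per iteration:
// Walks the grid cell by cell from 'o' along 'd' until it finds a solid
// cube (returns true, with the cell in x,y,z) or leaves the grid.
static bool CastVoxelRay(Vector3 o, Vector3 d, bool[,,] solid,
                         out int x, out int y, out int z)
{
    x = (int)Math.Floor(o.X); y = (int)Math.Floor(o.Y); z = (int)Math.Floor(o.Z);
    int sx = Math.Sign(d.X), sy = Math.Sign(d.Y), sz = Math.Sign(d.Z);

    // t at which the ray crosses the next cell boundary on each axis...
    float tx = Bound(o.X, d.X), ty = Bound(o.Y, d.Y), tz = Bound(o.Z, d.Z);
    // ...and the t spacing between successive boundaries on each axis.
    float dx = sx != 0 ? Math.Abs(1f / d.X) : float.PositiveInfinity;
    float dy = sy != 0 ? Math.Abs(1f / d.Y) : float.PositiveInfinity;
    float dz = sz != 0 ? Math.Abs(1f / d.Z) : float.PositiveInfinity;

    while (x >= 0 && x < solid.GetLength(0) &&
           y >= 0 && y < solid.GetLength(1) &&
           z >= 0 && z < solid.GetLength(2))
    {
        if (solid[x, y, z]) return true;                 // hit a non-empty cube
        if (tx <= ty && tx <= tz) { x += sx; tx += dx; } // step the axis whose
        else if (ty <= tz)        { y += sy; ty += dy; } // boundary comes first
        else                      { z += sz; tz += dz; }
    }
    return false;                                        // ray left the grid
}

// Distance (in t) to the first integer boundary on one axis.
static float Bound(float s, float ds)
{
    if (ds == 0f) return float.PositiveInfinity;
    float f = s - (float)Math.Floor(s);
    return (ds > 0f ? 1f - f : f) / Math.Abs(ds);
}
Stepping the axis whose next boundary has the smallest t is exactly the delta-error bookkeeping Bresenham does, generalized to three axes and a fractional direction.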
I'm a real noob who just started learning 3D programming, and I'm having a really hard time learning about rotation in 3D space. My problem is that I can't seem to figure out how to rotate an object using its local coordinates.
I have a basic class for 3D objects and, for starters, I want to implement functions that will rotate the object around a certain axis by x degrees. So far I have the following:
public void RollDeg(float angle)
{
    this.rotation = Matrix4.Mult(rotation,
        Matrix4.CreateRotationX(MyMath.Conversions.DegToRad(angle)));
}

public void PitchDeg(float angle)
{
    this.rotation = Matrix4.Mult(rotation,
        Matrix4.CreateRotationY(MyMath.Conversions.DegToRad(angle)));
}

public void YawDeg(float angle)
{
    this.rotation = Matrix4.Mult(rotation,
        Matrix4.CreateRotationZ(MyMath.Conversions.DegToRad(angle)));
}
'rotation' is a 4x4 matrix which starts as the identity matrix. Each time I want to roll/pitch/yaw the object, I call one of the functions above.
For drawing, I use another function that pushes a matrix onto the ModelView stack, multiplies it with the translation, rotation and scale matrices of the object (in this order) and begins drawing the vertices. Finally, of course, I pop the matrix off the stack.
The problem is that the functions above rotate the object around the GLOBAL axes, not the LOCAL ones, even though, from my understanding, every rotation changes the local system's axes, and a new rotation applied on top of the others should use those local axes.
I read different tutorials about the math behind it and how to rotate objects, but I couldn't find one that could help me.
If anyone has the time, I would really appreciate help understanding HOW to rotate around local axes and, maybe even more importantly, what I did wrong in my current implementation.
If you want to perform your transformations in this order: translation -> rotation -> scale (which makes perfect sense; it's what's usually wanted), you have to multiply your matrices in the reverse order.
With column-vector conventions (the ones classic OpenGL uses), matrix multiplication is read from right to left. This is why:
ModelViewTransform = Transform * View * Model // <- you begin with the model, so it reads this way
Note that DirectX uses row-vector conventions, so the multiplication reads left to right there. That has its shortcomings, but it's more intuitive.
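A short sketch of what this means for the question, assuming the Matrix4 here is OpenTK's (row-vector convention, so the left-hand matrix is applied first, i.e. in the object's local frame):
// Rotates about the object's LOCAL x axis: new rotation on the left.
this.rotation = Matrix4.Mult(
    Matrix4.CreateRotationX(MyMath.Conversions.DegToRad(angle)), rotation);

// Rotates about the fixed GLOBAL x axis (the original code's behaviour).
this.rotation = Matrix4.Mult(
    rotation, Matrix4.CreateRotationX(MyMath.Conversions.DegToRad(angle)));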
I am currently experimenting with some physics toys in XNA using the Farseer Physics library; however, my question isn't specific to XNA or Farseer, but applies to any 2D physics library.
I would like to add "rocket"-like movement (I say rocket-like in the sense that it doesn't have to be a rocket; it could be a plane, or a boat on the water, or any number of similar situations) for certain objects in my 2D scene. I know how to implement this using a kinematic simulation, but I want to implement it using a dynamic simulation (i.e. applying forces over time), and I'm sort of lost on how to do that.
To simplify things, I don't need the dynamics to rotate the geometry, just to affect the velocity of the body. I'm using a circle geometry that is set to not rotate in Farseer, so I am only concerned with the velocity of the object.
I'm not even sure what the best abstraction should be. Conceptually, I have the direction the body is currently moving (unit vector), a direction I want it to go, and a value representing how fast I want it to change direction, while keeping speed relatively constant (small variations are acceptable).
I could use this abstraction directly, or use something like a "rudder" value which controls how fast the object changes directions (either clockwise or counter clockwise).
What kind of forces should I apply to the body to simulate the movement I'm looking for? Keep in mind that I would also like to be able to adjust the "thrust" of the rocket on the fly.
Edit:
The way I see it, and correct me if I'm wrong, you have two forces (ignoring the main thrust force for now):
1) You have a static "fin" that always points in the same direction as the body. If the body rotates so that the fin is not aligned with the direction of movement, air resistance applies forces along the length of the fin, proportional to the angle between the direction of movement and the fin.
2) You have a "rudder", which can rotate freely within a specified range and is attached some distance from the body's center of mass (in this case we have a circle). Again, when this plane is not parallel to the direction of movement, air resistance causes proportional forces along the length of the rudder.
My question is, differently stated, how do I calculate these proportional forces from air resistance against the fin and rudder?
Edit:
For reference, here is some code I wrote to test the accepted answer:
/// <summary>
/// The main entry point for the application.
/// </summary>
static void Main(string[] args)
{
    float dc = 0.001f; // drag coefficient
    float lc = 0.025f; // lift coefficient
    float angle = MathHelper.ToRadians(45);
    Vector2 vel = new Vector2(1, 0);
    Vector2 pos = new Vector2(0, 0);
    for (int i = 0; i < 200; i++)
    {
        Vector2 drag = vel * angle * dc;
        Vector2 sideForce = angle * lc * vel;
        //sideForce = new Vector2(sideForce.Y, -sideForce.X); // rotate 90 degrees CW
        sideForce = new Vector2(-sideForce.Y, sideForce.X);   // rotate 90 degrees CCW
        vel = vel + (-drag) + sideForce;
        pos = pos + vel;
        if (i % 10 == 0)
            System.Console.WriteLine("{0}\t{1}\t{2}", pos.X, pos.Y, vel.Length());
    }
}
When you graph the output of this program, you'll see a nice smooth circular curve, which is exactly what I was looking for!
If you already have code that integrates force and mass into acceleration and velocity, then you just need to calculate the individual contribution of each of the two elements you're talking about.
Keeping it simple, I'd forget about the fin for a moment and just say that any time the body of your rocket is at an angle to its velocity, it will generate a linearly increasing side force and drag. Just play around with the coefficients until it looks and feels how you want.
Drag = angle * drag_coefficient * velocity + base_drag
SideForce = angle * lift_coefficient * velocity
For the rudder, the effect generated is a moment, but unless your game absolutely needs to go into angular dynamics, the simpler thing is to let the rudder control apply a fixed amount of change to your rocket's body angle per time tick in your game.
I suddenly "get" it.
You want to simulate a rocket-powered missile flying in air, OK. That's a different problem from the one I have detailed below, and it imposes different limits. You need an aerospace geek. Or you could just punt.
To do it "right" (for space):
The simulated body should be given a moment of inertia around its center of mass, and must also have a pointing direction and an angular velocity. Then you compute the angular acceleration from the applied impulse and its distance from the CoM, and add that to the angular velocity. This lets you compute the current "pointing" of the craft (if you don't use gyros or paired attitude jets, you also get a typically very small linear acceleration).
To generate a turn, you point the craft off the current direction of movement and apply the main drive.
And if you are serious about this you also need to subtract the mass of burned fuel from the total mass and make the appropriate corrections to the moment of inertia at each time increment.
BTW--this may be more trouble than it is worth: maneuvering a rocket in free fall is tricky (you may recall that the Russians bungled a docking maneuver at the ISS a few years ago; well, that's not because they are stupid). Unless you tell us your use case, we can't really advise you on that.
A little pseudocode to hint at what you're getting into here:
rocket {
    float structuralMass;
    float fuelMass;
    point position;
    point velocity;
    float heading;
    float omega;       // angular velocity
    float structuralI; // moment of inertia from the craft
    float fuelI;       // moment of inertia from the fuel load
    float Mass(){ return structuralMass + fuelMass; }
    float I(){ return structuralI + fuelI; }
    float Thrust(float t);
    float AdjustAttitude(float a);
}
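Building on that pseudocode, a hedged sketch of the update for a single thruster impulse (2D, so the moment of inertia and angular quantities are scalars; Vector2 stands in for 'point', and dt, the thruster offset and the force are assumptions):
// One thruster impulse: the offset thruster's torque turns the craft,
// and the same force also accelerates it linearly.
void ApplyThruster(Vector2 offsetFromCoM, Vector2 force, float dt)
{
    float torque = offsetFromCoM.X * force.Y - offsetFromCoM.Y * force.X; // 2D "cross"
    float alpha = torque / I();        // angular acceleration
    omega += alpha * dt;               // integrate angular velocity
    heading += omega * dt;             // integrate pointing direction
    velocity += force * (dt / Mass()); // linear part of the same impulse
}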
The upshot is: maybe you want a "game physics" version.
For reasons I won't bother to go into here, the most efficient way to fly a "real" rocket is generally not to make gradual turns and slow accelerations, but to push hard whenever you want to change direction. In that case you get the angle to thrust along by subtracting the desired velocity vector (the full vector, not the unit) from the current one. Then you point in that direction and thrust all-out until the desired course is reached.
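That last paragraph as a hedged sketch, reusing the fields above (desiredVel, tolerance, maxThrust and dt are all hypothetical, and the turn is instantaneous here; really you'd go through AdjustAttitude):
// Get the angle to thrust along by subtracting velocity vectors (full
// vectors, not units), then point the craft and burn all-out.
Vector2 needed = desiredVel - velocity;   // course still to be made good
if (needed.Length() > tolerance)          // not yet on the desired course
{
    heading = (float)Math.Atan2(needed.Y, needed.X); // point that way
    velocity += new Vector2((float)Math.Cos(heading),
                            (float)Math.Sin(heading)) * maxThrust / Mass() * dt;
}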
Imagine you're floating in empty space... and you have a big rock in your hand... If you throw the rock, a small impulse is applied to you in the exact opposite direction you threw it. You can model your rocket as something that rapidly converts quanta of fuel into some amount of force (a vector quantity) that you can add to your direction vector.