I know this is probably a very simple question, but I can't seem to figure it out. First of all, I want to specify that I did look over Google and SO for half an hour or so without finding the answer to my question (yes, I am serious).
Basically, I want to rotate a Vector2 around a point (which, in my case, is always the (0, 0) vector). So I tried to make a function to do it, with the parameters being the point to rotate and the angle (in degrees) to rotate by.
Here's a quick drawing showing what I'm trying to achieve:
I want to take V1 (red vector) and rotate it by an angle A (blue) to obtain a new vector (V2, green). In this example I used one of the simplest cases: V1 on an axis and a 90 degree angle, but I want the function to handle more "complicated" cases too.
So here's my function:
public static Vector2 RotateVector2(Vector2 point, float degrees)
{
return Vector2.Transform(point,
Matrix.CreateRotationZ(MathHelper.ToRadians(degrees)));
}
So, what am I doing wrong? When I run the code and call this function with the (0, -1) vector and a 90 degree angle, I get the vector (1, 4.371139E-08) ...
Also, what if I want to accept a point to rotate around as a parameter too? So that the rotation doesn't always happen around (0, 0)...
Chris Schmich's answer regarding floating point precision and using radians is correct. I suggest an alternate implementation for RotateVector2 and answer the second part of your question.
Building a 4x4 rotation matrix to rotate a vector will cause a lot of unnecessary operations. The matrix transform is actually doing the following but using a lot of redundant math:
public static Vector2 RotateVector2(Vector2 point, float radians)
{
float cosRadians = (float)Math.Cos(radians);
float sinRadians = (float)Math.Sin(radians);
return new Vector2(
point.X * cosRadians - point.Y * sinRadians,
point.X * sinRadians + point.Y * cosRadians);
}
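For example, a quick sanity check using the question's input (this assumes XNA's MathHelper for the degree-to-radian conversion; any equivalent conversion works):

Vector2 v1 = new Vector2(0f, -1f);
// Rotate the question's (0, -1) vector by 90 degrees around the origin.
Vector2 v2 = RotateVector2(v1, MathHelper.ToRadians(90f));
// v2 comes out as roughly (1, 0); the Y component may be a tiny non-zero float,
// for the reasons discussed in the other answer.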
If you want to rotate around an arbitrary point, you first need to translate your space so that the point to be rotated around is the origin, do the rotation and then reverse the translation.
public static Vector2 RotateVector2(Vector2 point, float radians, Vector2 pivot)
{
float cosRadians = (float)Math.Cos(radians);
float sinRadians = (float)Math.Sin(radians);
Vector2 translatedPoint = new Vector2();
translatedPoint.X = point.X - pivot.X;
translatedPoint.Y = point.Y - pivot.Y;
Vector2 rotatedPoint = new Vector2();
rotatedPoint.X = translatedPoint.X * cosRadians - translatedPoint.Y * sinRadians + pivot.X;
rotatedPoint.Y = translatedPoint.X * sinRadians + translatedPoint.Y * cosRadians + pivot.Y;
return rotatedPoint;
}
Note that the vector arithmetic has been inlined for maximum speed.
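For instance, rotating a point a quarter turn around an arbitrary pivot (the values here are made up purely to illustrate the call):

Vector2 point = new Vector2(4f, 4f);
Vector2 pivot = new Vector2(3f, 4f);
// Rotate 90 degrees around the pivot; (4, 4) ends up at approximately (3, 5).
Vector2 rotated = RotateVector2(point, MathHelper.ToRadians(90f), pivot);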
So, what am I doing wrong? When I run the code and call this function with the (0, -1) vector and a 90 degree angle, I get the vector (1, 4.371139E-08) ...
Your code is correct; this is just a floating point representation issue. 4.371139E-08 is essentially zero (it's 0.00000004371139), but the transformation did not produce a value that was exactly zero. This is a common problem with floating point that you should be aware of. This SO answer has some additional good points about floating point.
Also, if possible, you should stick with radians instead of degrees. The extra conversion is likely introducing more error into your computations.
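If the tiny residual values bother you (for example when comparing results in tests), the usual approach is to compare against a small tolerance instead of expecting exact zeros. A minimal sketch; the epsilon value is an arbitrary choice:

// Treat two vectors as equal if their components differ by less than a small tolerance.
public static bool ApproximatelyEqual(Vector2 a, Vector2 b, float epsilon)
{
    return Math.Abs(a.X - b.X) < epsilon && Math.Abs(a.Y - b.Y) < epsilon;
}

// e.g. ApproximatelyEqual(result, new Vector2(1f, 0f), 1e-5f) returns true
// even though result.Y is 4.371139E-08 rather than exactly zero.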
Related
I want to make a bot that will walk to points given by two coordinates (X and Y). I have the coordinates of the character, his rotation angle (1 to 180 / -1 to -180), and the target point he needs to reach. How do I find the rotation angle needed for the character to look directly at the point?
I have been sitting with this for a long time and my head refuses to think at all. I tried to solve it by computing the angle between the radius vectors, but nothing came of it.
public static double GetRotation(Point Destination)
{
double cos = Destination.Y / Math.Sqrt(Destination.X * Destination.X + Destination.Y * Destination.Y);
double angle = Math.Acos(cos);
return angle;
}
With regard to this problem, I would generally suggest using tan rather than cos, because to get the angle between two points (X and Y) the differences between the points give you the opposite and adjacent sides of the triangle, rather than the adjacent side and the hypotenuse.
If the player is at position X,Y and the new position to walk to is X2,Y2, the relative angle between them with respect to the X axis is tan⁻¹((Y2-Y)/(X2-X)).
I think the implementation below should work for your example (though you may need to subtract the angle the player is currently facing to get the difference):
public static double GetRotation(Point source, Point Destination)
{
    // Cast to double so the division is not truncated to an integer.
    double tan = (double)(Destination.Y - source.Y) / (Destination.X - source.X);
    double angle = Math.Atan(tan) * (180 / Math.PI); // Converts radians to degrees
    return angle;
}
Apologies for the formatting, am currently on mobile.
double dx = Destination.X - Person.X;
double dy = Destination.Y - Person.Y;
double directionangle = Math.Atan2(dy, dx); // in radians!
double degrees = directionangle * 180 / Math.PI;
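If you also need how far the character has to turn (rather than the absolute direction), one way is to subtract the current facing and wrap the result back into the -180..180 range. A rough sketch, assuming a hypothetical Person.Angle that holds the current facing in degrees and uses the same zero direction as Atan2:

double dx = Destination.X - Person.X;
double dy = Destination.Y - Person.Y;
double targetDegrees = Math.Atan2(dy, dx) * 180 / Math.PI; // absolute direction to the target
double turn = targetDegrees - Person.Angle;                // Person.Angle is assumed here
// Wrap into the -180..180 range so the character turns the short way.
while (turn > 180) turn -= 360;
while (turn <= -180) turn += 360;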
May I ask you for a piece of advice on writing my raytracer in C#? Here's the problem:
I've got a problem detecting hits between the rays and the geometry in my raytracer. I've implemented several functions based on this article: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/ray-triangle-intersection-geometric-solution
Here's the code calculating the hitpoint:
public Vector3 GetHitPoint(Vector3 origin, Vector3 ray)
{
float D = Vector3.Dot(this.GetNormal(), vertices[0]);
float up = Vector3.Dot(this.GetNormal(), origin);
up = -1 * (up + D); //50
float down =Vector3.Dot(this.GetNormal(), ray); //0.999975
float t = up / -1* down; //50,00125003125078
Console.WriteLine(origin + Vector3.Multiply(t, ray));
return origin + Vector3.Multiply(t, ray);
}
Not very elegant, but it should work. I've got a problem with precision. I've prepared a test triangle that is perpendicular to the camera (center -> 0,0,50). The code calculates the point of intersection between the triangle and the ray.
Origin is the camera position, ray is the normalized vector from the camera to the geometry, vertices[0] is the position of a vertex, and the GetNormal() function gives the correct normal vector of the triangle. The problem in this case is the precision of the calculation. After doing all these calculations, my hitpoint has a z coordinate of 49.99975 instead of 50.0.
This is a problem, because I use another algorithm (based on barycentric coordinates) to check whether the hitpoint is inside the triangle.
public bool CheckHitPoint(Vector3 P)
{
Vector3 v0 = vertices[1] - vertices[0];
Vector3 v1 = vertices[2] - vertices[0];
Vector3 v2 = P - vertices[0];
float d00 = Vector3.Dot(v0, v0);
float d01 = Vector3.Dot(v0, v1);
float d11 = Vector3.Dot(v1, v1);
float d20 = Vector3.Dot(v2, v0);
float d21 = Vector3.Dot(v2, v1);
float denom = d00 * d11 - d01 * d01;
float v = (d11 * d20 - d01 * d21) / denom; // divide by denom, as in the referenced article, so that u + v + w == 1
float w = (d00 * d21 - d01 * d20) / denom;
float u = 1.0f - v - w;
if (u < 0 || v < 0 || w < 0)
{
return false;
}
else if (u > 1 || v > 1 || w > 1)
{
return false;
}
else return true;
}
The conditions in my algorithm and in the article are the same, but because of the inaccurate result of the previous function, the u, v, w coefficients are completely wrong (since the hitpoint is actually slightly in front of the triangle).
Can you help me fix the precision issue in the first algorithm, or introduce some kind of bias into the second one, so that I can get the precise hitpoint and successfully detect it inside the triangle?
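(For what it's worth, by "bias" I mean something like relaxing the 0..1 checks in CheckHitPoint by a small tolerance, along these lines; the value of the tolerance is just a guess:)

// Same barycentric test, but with a small tolerance so that points which are
// off by a tiny floating point error are still accepted.
const float bias = 1e-4f;
if (u < -bias || v < -bias || w < -bias)
{
    return false;
}
else if (u > 1 + bias || v > 1 + bias || w > 1 + bias)
{
    return false;
}
else return true;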
EDIT: Sorry, I thought the problem was pretty clear. Let me explain it in more depth. Consider the triangle:
Vector3[] vertices =
{
new Vector3(-5,-5,50),
new Vector3(5,-5,50),
new Vector3(0,5,50)
};
Vector3[] normals =
{
new Vector3(0,0,-1),
new Vector3(0,0,-1),
new Vector3(0,0,-1)
};
It is clear that the normal vector for this triangle is n(0,0,-1), and in combination with any point on its surface it describes the surface mathematically.
D is the distance between (0,0,0) and the surface that the triangle lies on. Since the surface may be described by a pair of parameters (the normal vector and any point on the surface), the dot product of these parameters gives D.
The next four lines implement the equation:
t = (N(A,B,C)⋅O + D) / (−N(A,B,C)⋅R)
where
N(A,B,C) - normal vector of triangle
O - camera Position
D - Distance from (0,0,0) to surface
R - Normalized Ray Vector
The equation calculates the distance from the camera position to the point of intersection.
Following these equations, with the triangle parameters I attached and the camera at (0,0,0) pointing at (0,0,50), the return value should be a point with coordinates (x,y,50), no matter which pixel I create the ray for.
The problem is that the vector methods in C# generally use floats for the computations, and this is why the z coordinate is close to, but not precisely, 50.
Mathematically this is 100% correct, but the precision is poor.
This causes a problem when I try to check whether the point lies inside the triangle using the transformation to barycentric coordinates.
The second method is mathematically fine too. Provided that the hitpoint is on the surface, if all coordinates are between 0 and 1, the point lies inside the triangle. Details here: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
If only I could compute the coordinates from the previous method precisely, that would work. The problem is that, because of the lack of precision, the hitpoint ends up slightly above or below the surface, and the barycentric coordinates take on crazy values.
The question is: how can I make the first method more precise, i.e. compute t (the distance between the camera and the hitpoint) precisely enough to get 50? What solution would be best: rounding, writing custom vector methods to replace the built-in float-based ones, or maybe some modification of the algorithm?
I would be grateful if someone with raytracer experience could give me some advice.
I'm trying to write a function to handle movement within a game I'm programming. What I have nearly works, but there are a couple of situations where it breaks down.
I've coded up a minimal demonstrative example, presented below. In this example, I'm trying to calculate the travel of an object, represented by a point and a movement vector. This object's movement path is checked against a collection of polygons, which are broken down into line segments for testing. When the object collides with a line segment, I want it to slide along that segment (rather than stop or bounce away).
To do this, I check along my intended path for collisions, and if I find an intersection, I do a new test from that intersection point along the path of the line segment I've collided with, with the magnitude of the remainder of movement.
The problem arises when we slide along a line segment into a "pocket". Often, the collision check will pass on both of the line segments that form the pocket, and the object will slip through. Because I'm travelling parallel to one of the line segments and intersecting both line segments at an endpoint, I believe this issue is caused by floating point error. Whether the object slips through, is caught, or is caught once and then slips through on the second check seems to be totally random.
I'm calculating intersection using a simple algorithm I found here: https://stackoverflow.com/a/20679579/4208739, but I've tried many other algorithms as well. All exhibit the same problems.
(Vector2 is a class provided by the Unity library; it just holds x and y coordinates as floats. The Vector2.Dot function just calculates the dot product.)
//returns the final destination of the intended movement, given the starting position, intended direction of movement, and provided collection of line segments
//slideMax provides a hard cap on number of slides allowed before we give up
Vector2 Move(Vector2 pos, Vector2[] lineStarts, Vector2[] lineEnds, Vector2 moveDir, int slideMax)
{
int slideCount = 0;
while (moveDir != Vector2.zero && slideCount < slideMax)
{
pos = DynamicMove(pos, lineStarts, lineEnds, moveDir, out moveDir);
slideCount++;
}
return pos;
}
//returns what portion of the intended movement can be performed before collision, and the vector of "slide" that the object should follow, if there is a collision
Vector2 DynamicMove(Vector2 pos, Vector2[] lineStarts, Vector2[] lineEnds, Vector2 moveDir, out Vector2 slideDir)
{
slideDir = Vector2.zero;
float moveRemainder = 1f;
for (int i = 0; i < lineStarts.Length; i++)
{
Vector2 tSlide;
float rem = LineProj(pos, moveDir, lineStarts[i], lineEnds[i], out tSlide);
if (rem < moveRemainder)
{
moveRemainder = rem;
slideDir = tSlide;
}
}
return pos + moveDir * moveRemainder;
}
//Calculates point of collision between the intended movement and the passed in line segment, also calculate vector of slide, if applicable
float LineProj(Vector2 pos, Vector2 moveDir, Vector2 lineStart, Vector2 lineEnd, out Vector2 slideDir)
{
slideDir = new Vector2(0, 0);
float start = (lineStart.x - pos.x) * moveDir.y - (lineStart.y - pos.y) * moveDir.x;
float end = (lineEnd.x - pos.x) * moveDir.y - (lineEnd.y - pos.y) * moveDir.x;
if (start < 0 || end > 0)
return 1;
//https://stackoverflow.com/a/20679579/4208739
//Uses Cramer's Rule
float L1A = -moveDir.y;
float L1B = moveDir.x;
float L1C = -(pos.x *(moveDir.y + pos.y) - (moveDir.x + pos.x)*pos.y);
float L2A = lineStart.y - lineEnd.y;
float L2B = lineEnd.x - lineStart.x;
float L2C = -(lineStart.x * lineEnd.y - lineEnd.x * lineStart.y);
float D = L1A * L2B - L1B * L2A;
float Dx = L1C * L2B - L1B * L2C;
float Dy = L1A * L2C - L1C * L2A;
if (D == 0)
return 1;
Vector2 inter = new Vector2(Dx / D, Dy / D);
if (Vector2.Dot(inter - pos, moveDir) < 0)
return 1;
float t = (inter - pos).magnitude / moveDir.magnitude;
if (t > 1)
return 1;
slideDir = (1 - t) * Vector2.Dot((lineEnd - lineStart).normalized, moveDir.normalized) * (lineEnd - lineStart).normalized;
return t;
}
Is there some way to calculate collision that isn't susceptible to this sort of problem? I imagine I can't totally eradicate floating point error, but is there a way to check that will at least guarantee I collide with ONE of the two line segments at the pocket? Or is there something more fundamentally wrong with going about things in this way?
If anything is unclear I can draw diagrams or write up examples.
EDIT: Having reflected on this issue more, and in response to Eric's answer, I'm wondering if converting my math from floating point to fixed point could solve the issue? In practice I'd really just be converting my values (which can fit comfortably in the range of -100 to 100) to ints, and then performing the math under those constraints? I haven't pieced all the issues together quite yet, but I might give that a try. If anyone has any information about anything like that, I'd be appreciative.
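(To make the fixed-point idea concrete, the rough plan would be something like the sketch below; the scale factor is arbitrary and none of this is tested:)

// Scale world coordinates (roughly -100..100) up into integers so that
// additions, subtractions and cross products in the intersection test are exact.
const long Scale = 1000; // three decimal digits of precision

long ToFixed(float v) => (long)Math.Round(v * Scale);
float ToFloat(long v) => v / (float)Scale;

// The 2D cross product used in the intersection test, done entirely in integers.
long Cross(long ax, long ay, long bx, long by) => ax * by - ay * bx;

(Divisions like Dx / D would still need care, e.g. comparing cross products directly instead of dividing.)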
You have a line that, ideally, is aimed exactly at a point, the endpoint of a segment. That means any error in calculation, no matter how small, could say the line misses the point. I see three potential solutions:
Analyze the arithmetic and design it to ensure it is done with no error, perhaps by using extended-precision techniques.
Analyze the arithmetic and design it to ensure it is done with a slight error in favor of collision, perhaps by adding a slight bias toward collision.
Extend the line segment slightly.
It seems like the third would be easiest—the two line segments forming a pocket could just be extended by a bit, so they cross. Then the sliding path would not be aimed at a point; it would be aimed at the interior of a segment, and there would be margin for error.
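A minimal sketch of the third option, using the same Unity Vector2 type as the question (the margin value is a guess and would need tuning):

// Extend both ends of a segment slightly along its own direction so that two
// segments meeting at a pocket overlap instead of merely touching at a point.
Vector2 ExtendStart(Vector2 start, Vector2 end, float margin)
{
    return start - (end - start).normalized * margin;
}

Vector2 ExtendEnd(Vector2 start, Vector2 end, float margin)
{
    return end + (end - start).normalized * margin;
}

// e.g. call LineProj(pos, moveDir, ExtendStart(lineStarts[i], lineEnds[i], 0.001f),
//                    ExtendEnd(lineStarts[i], lineEnds[i], 0.001f), out tSlide);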
I am trying to create a function that will take a point and a distance and give me a random location on the circle at that distance. Example:
SpawnPlanet(PlanetToOrbitAround, Distance 200 px) returns a point on the "circle" that is 200 pixels away from the planet.
I'm also looking for the actual rotation logic, so once I spawn the planet I have a method
UpdateRotation(PlanetToOrbitAround, OrbitingPlanet, 200 px distance, 5 degrees of speed)
I'm terrible at math. I've found some examples, and everything I find doesn't seem to work for me (probably because of my lack of understanding of the math involved). The rotation seems to work for a planet around a sun, but not for a moon around a planet.
public Vector2 RotateAboutOrigin(Vector2 point, Vector2 origin, float rotation)
{
return Vector2.Transform(point - origin, Matrix.CreateRotationZ(rotation)) + origin;
}
is the logic I'm using, with calls such as...
mPlanetLocation = RotateAboutOrigin(mPlanetLocation, new Vector2(GraphicsDevice.Viewport.Width / 2 - 25, GraphicsDevice.Viewport.Height / 2 - 25), .005f);
mMoonLocation = RotateAboutOrigin(mMoonLocation, mPlanetLocation, .005f);
The moon's orbit comes out strange and oblong. Any help would be great!
I've been through this exact thing in my game!
I have a base class that I use for planets and other space objects. These contain simple properties like Position, Origin, Distance and Angle.
The angle property uses a setter to change the position based on the desired angle, distance to sun/object and the position + origin of the center.
public float Angle
{
get { return angle; }
set
{
angle = value;
position = Rotate(MathHelper.ToRadians(angle), Distance, SunPosition + Origin);
}
}
public static Vector2 Rotate(float angle, float distance, Vector2 center)
{
return new Vector2((float)(distance * Math.Cos(angle)), (float)(distance * Math.Sin(angle))) + center;
}
Using this you could easily do something like SpawnPlanet(SunPosition, Distance) and update the angle by X amount each update. (planet.Angle += X)
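For example, the per-frame update might look roughly like this (degreesPerUpdate is just an illustrative name for whatever orbital speed you pick):

// Advance the orbit a few degrees each update; the Angle setter above
// recomputes the position from the angle, the distance and the center.
float degreesPerUpdate = 5f;
planet.Angle += degreesPerUpdate;
// For a moon, the same pattern should work with the rotation center set to the
// planet's current position instead of the sun's.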
Pretty much the same deal that you are doing, but see if this code matches up with your algorithm. For the strange moon shape, could you show some more code and show an example of the orbit?
I am developing an application in XNA which draws random paths. Unfortunately, I'm out of touch with graphing, so I'm a bit stuck. My application needs to do the following:
Pick a random angle from my origin (0,0), which is simple.
Draw a circle in relation to that origin, 16px away (or any distance I specify), at the angle found above.
(Excuse my horrible photoshopping; image: http://www.refuctored.com/coor.png)
The second circle at (16,16) would represent a 45 degree angle 16 pixels away from my origin.
I would like to have a method in which I pass in my distance and angle that returns a point to graph at. i.e.
private Point GetCoordinate(float angle, int distance)
{
// Do something.
return new Point(x,y);
}
I know this is simple, but again, I'm pretty out of touch with graphing. Any help?
Thanks,
George
If the angle is in degrees, first do:
angle *= Math.PI / 180;
Then:
return new Point((int)(distance * Math.Cos(angle)), (int)(distance * Math.Sin(angle)));
By the way, the point at (16, 16) is not 16 pixels away from the origin, but sqrt(16^2 + 16^2) = sqrt(512) =~ 22.63 pixels.
private Point GetCoordinate(float angle, int distance)
{
    float x = (float)Math.Cos(angle) * distance;
    float y = (float)Math.Sin(angle) * distance;
    return new Point((int)x, (int)y);
}
Note that Math.Cos and Math.Sin take radians. If your angle is in degrees, multiply it by Pi/180 (i.e. divide by 180/Pi) first.
in general:
x = d * cos(theta)
y = d * sin(theta)
Where d is the distance from the origin and theta is the angle.
Learn the Pythagorean Theorem. Then this thread should have more specific details for you.