How can I calculate an average terrain slope based on four points? - c#

I'm programming a simple physics system for vehicles in Unity and I'm trying to figure out how to calculate the average slope based on four points in order to apply gravitational acceleration. I can get the terrain height at the vehicle's wheel positions like this:
Terrain.activeTerrain.SampleHeight(wheel.position);
How can I calculate the average terrain slope based on those four points? I thought about calculating direction vectors between different points, but I'm not sure how to combine them into a single slope vector.

(Assuming that all 4 wheels are always coplanar) I think you could take any 3 wheels and sample the height at each:
var wheels = new Vector3[3]{ wheel1.transform.position, wheel2.transform.position, wheel3.transform.position };
for(var i = 0; i < wheels.Length; i++)
{
    var w = wheels[i];
    w.y = Terrain.activeTerrain.SampleHeight(w);
    wheels[i] = w;
}
and from the resulting 3 points generate a plane:
var plane = new Plane(wheels[0], wheels[1], wheels[2]);
Now you can take any of these points and offset it by a vector pointing straight up:
var upPoint = wheels[0] + Vector3.up;
Then you can map that point onto the plane using ClosestPointOnPlane:
var planePoint = plane.ClosestPointOnPlane(upPoint);
As a last step, normalize the vector from that mapped point back to your start wheel point; this gives you the direction of the slope (downhill), and the Y component tells you how steep it is:
var planeVector = wheels[0] - planePoint;
var direction = planeVector.normalized;
Debug.Log($"Slope direction is {direction} with a steepness factor of {direction.y} or also {-planeVector.magnitude}");
where direction.y should move between 0 (completely horizontal surface) and -1 (completely vertical surface); planeVector.magnitude itself goes from 0 to +1 (it equals the sine of the slope angle), hence the -planeVector.magnitude in the log. Note that on perfectly flat ground planeVector is the zero vector, so normalizing it yields Vector3.zero.
Something like that I guess ^^

Wouldn't that be the average slope between the midpoint of the back wheels and the midpoint of the front wheels? You get the tangent of the angle by dividing the difference between those heights by their horizontal distance (as projected). Guess that's too simple?
Assuming 4 points (x1,y1), (x2,y2), (x3,y3) and (x4,y4), where x is the projected horizontal position and y the sampled height: the midpoints would be xB = (x1+x2)/2, yB = (y1+y2)/2 and xF = (x3+x4)/2, yF = (y3+y4)/2. Then tan A = (yF - yB) / (xF - xB).
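A minimal Unity sketch of that midpoint approach (assuming backLeft, backRight, frontLeft and frontRight are the wheel Transforms; the names are illustrative):
// Midpoints of the back and front axles, with the terrain height sampled at each
Vector3 back = (backLeft.position + backRight.position) * 0.5f;
Vector3 front = (frontLeft.position + frontRight.position) * 0.5f;
back.y = Terrain.activeTerrain.SampleHeight(back);
front.y = Terrain.activeTerrain.SampleHeight(front);
// Horizontal (projected) distance between the midpoints on the XZ plane
float run = Vector2.Distance(new Vector2(back.x, back.z), new Vector2(front.x, front.z));
// tan A = rise / run; Atan recovers the average slope angle (assumes run > 0)
float slopeAngle = Mathf.Atan((front.y - back.y) / run);
Debug.Log($"Average slope: {slopeAngle * Mathf.Rad2Deg} degrees");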

Related

Determining the grid cells cut by a plane formed from 3 vertices

I found the equation of a plane from three vertices.
Now, if I have a bounding box (i.e. a large cube), how can I determine the grid positions (small cubes) where the plane cuts the large cube?
I am currently following this approach:
For each small cube center, say (Xp, Yp, Zp), calculate the perpendicular distance to the plane, i.e. |a*Xp + b*Yp + c*Zp + d| / sqrt(a^2 + b^2 + c^2). This should be less than or equal to (length of small cube * sqrt(3)) / 2.
If this criterion is satisfied, then I assume my plane cuts the large cube at this small cube position.
a, b, c, d are the coefficients of the plane, of the form ax + by + cz + d = 0.
I would be really glad if someone could let me know if I am doing something wrong, or suggest any other simple approach.
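In code, that criterion is just a point-to-plane distance check (a sketch; the small cube is given by its center and edge length):
using System;

static bool PlaneNearCubeCenter(double a, double b, double c, double d,
                                double xp, double yp, double zp, double cubeLength)
{
    // Perpendicular distance from the cube center to the plane ax + by + cz + d = 0
    double dist = Math.Abs(a * xp + b * yp + c * zp + d)
                / Math.Sqrt(a * a + b * b + c * c);
    // Compare against half the cube's space diagonal
    return dist <= cubeLength * Math.Sqrt(3.0) / 2.0;
}
Note that this effectively tests the cube's circumscribing sphere, so it can flag cubes the plane only passes near without actually cutting; the exact AABB-plane test in the answer below avoids those false positives.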
Seems you want to get a list of small cubes (grid voxels) intersected by a given plane.
The simplest approach:
Find the intersection of the plane with any cube edge. For example, the intersection with a vertical edge of the AABB (X0, Z0 are constant) can be calculated by solving this equation for the unknown Y:
a*X0 + b*Y + c*Z0 + d = 0
and checking that Y is in the cube range. Get the small cube coordinates (0, ky = Floor(Y / VoxelSize), 0) and then check the neighbor voxels in order (account for the plane coefficients to check only real candidates); a sketch of this seed step follows the candidate list.
candidates:
0,ky,0
1,ky,0
0,ky-1,0
0,ky+1,0
0,ky,1
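A minimal C# sketch of that seed step (the plane coefficients, the edge position (X0, Z0), the cube's Y range and VoxelSize are assumed inputs; grid origin at Y = 0, per the formula above):
using System;

static bool TryFindSeedVoxel(double a, double b, double c, double d,
                             double x0, double z0, double yMin, double yMax,
                             double voxelSize, out int ky)
{
    ky = 0;
    if (b == 0) return false;               // plane is parallel to this vertical edge
    // Solve a*X0 + b*Y + c*Z0 + d = 0 for Y
    double y = -(a * x0 + c * z0 + d) / b;
    if (y < yMin || y > yMax) return false; // intersection lies outside the cube
    ky = (int)Math.Floor(y / voxelSize);    // voxel index along Y
    return true;
}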
There are more advanced methods to generate the voxel sequence for the ray case (both 2D and 3D), like the Amanatides/Woo algorithm. Perhaps something similar also exists for plane voxelization.
Here is AABB-plane intersection test code from this page (contains some explanations)
// Test if AABB b intersects plane p
int TestAABBPlane(AABB b, Plane p) {
    // Convert AABB to center-extents representation
    Point c = (b.max + b.min) * 0.5f; // Compute AABB center
    Point e = b.max - c; // Compute positive extents
    // Compute the projection interval radius of b onto L(t) = b.c + t * p.n
    float r = e[0]*Abs(p.n[0]) + e[1]*Abs(p.n[1]) + e[2]*Abs(p.n[2]);
    // Compute distance of box center from plane
    float s = Dot(p.n, c) - p.d;
    // Intersection occurs when distance s falls within [-r,+r] interval
    return Abs(s) <= r;
}
Note that e and r remain the same for all cubes, so calculate them once and use later.
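For reference, a C# port of the same test could look like this (a sketch using System.Numerics; the box is given by its min/max corners and the plane by a unit normal n and distance d, with Dot(n, p) = d for points p on the plane):
using System;
using System.Numerics;

static bool TestAabbPlane(Vector3 bMin, Vector3 bMax, Vector3 n, float d)
{
    // Convert AABB to center-extents representation
    Vector3 c = (bMax + bMin) * 0.5f;  // AABB center
    Vector3 e = bMax - c;              // positive extents
    // Projection interval radius of the box onto the plane normal
    float r = e.X * Math.Abs(n.X) + e.Y * Math.Abs(n.Y) + e.Z * Math.Abs(n.Z);
    // Signed distance of the box center from the plane
    float s = Vector3.Dot(n, c) - d;
    // Intersection occurs when |s| falls within [-r, +r]
    return Math.Abs(s) <= r;
}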

Kinect 2 Inconsistent FloorClipPlane reading

I have successfully implemented the floor clip plane to measure the distance of the left foot to the floor, which is fairly accurate. The problem I have is that as I move away from the camera (i.e. the left foot's Z value increases), the foot's distance to the floor changes (increases).
Note: The floor itself is not tilted, nor is the Kinect stand.
I tested it with Kinect 1 and had the same result. The subject's head height (Y axis) also changes value as I move away from or get closer to the camera. It does not matter whether the camera is tilted or level. The D value in the FloorClipPlane equation shows a constant number during the test.
A = bodyFrame.FloorClipPlane.X;
B = bodyFrame.FloorClipPlane.Y;
C = bodyFrame.FloorClipPlane.Z;
D = bodyFrame.FloorClipPlane.W;
distanceLeftFoot = A * leftFootPosX + B * leftFootPosY + C * leftFootPosZ + D;
Just to let you know, I have coordinate mapping between depth and colour. Not sure if that has anything to do with the issue.
The FloorClipPlane is expressed in Hessian normal form, as explained in the docs. Specifically, your A, B, and C values comprise the unit vector from the camera origin (the center of the Kinect) to the floor plane, such that it produces a perpendicular intersection with the floor plane. D is the magnitude of that vector (the distance from the camera origin to the floor plane).
Even if you think the floor is flat and the Kinect is parallel to the ground, you have a perspective warping problem, which means the body location (measured in depth space) is going to change as you come closer and move further away.
To fix this you need to provide as input both your 3D coordinate values and the floor plane, which will then give you back what you want: the measured distance from the floor plane to the joint.
// j is your joint - left foot or any other joint
float x = j.Position.X;
float y = j.Position.Y;
float z = j.Position.Z;
float distance = Math.Abs(x * floorPlane.X + y * floorPlane.Y + z * floorPlane.Z + floorPlane.W)
               / (float)Math.Sqrt(floorPlane.X * floorPlane.X
                                + floorPlane.Y * floorPlane.Y
                                + floorPlane.Z * floorPlane.Z);
Note that since (A, B, C) is already a unit vector, the denominator should be very close to 1; dividing by it just guards against a non-normalized plane. I hope this helps you. I can't elaborate further on what influence your mapping from depth to colour might have here without seeing what you are specifically doing.

Constrain camera to rectangle while tracking multiple objects

I'm making a 2D platformer that features a dynamic camera. The camera must track 4 players at once so that they're all on the screen. In addition, the camera must not move beyond a predefined rectangular boundary. I've tried implementing it, but I just can't seem to get right the process of zooming the camera so that it's always as close as possible to the four objects.
The general algorithm I have so far is
1. Define the viewing space by calculating a 2D axis-aligned bounding box from the 4 object positions being tracked, and use its center as the camera position (or average the positions).
2. Calculate an orthographic size by using the largest x OR y value of the vectors from the camera's position to each object being tracked.
3. If the camera is beyond the camera boundary, calculate the excess amount and displace the camera in the opposite direction.
This seems simple enough on paper but I can't seem to get a correct working implementation.
Why don't you just take the average of the 4 players' positions and use it as the camera position, then check if the players are out of the boundary and, when they are, zoom out?
float x = 0;
float y = 0;
GameObject[] players = new GameObject[4]; // the 4 tracked players
foreach(GameObject _ply in players)
{
    x += _ply.transform.position.x;
    y += _ply.transform.position.y;
}
x = x / players.Length;
y = y / players.Length;
foreach(GameObject _ply in players)
{
    if(_ply.transform.position.x > (x + (Screen.width / 2)))
    {
        // zoom out
    }
    if(_ply.transform.position.y > (y + (Screen.height / 2)))
    {
        // zoom out
    }
}
But you'd still have to handle zooming back in.
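For the bounding-box approach described in the question, a minimal Unity sketch might look like this (cam, players, boundary and padding are illustrative fields; it assumes an orthographic camera and a boundary at least as large as the zoomed-out view):
using UnityEngine;

public class MultiTargetCamera : MonoBehaviour
{
    public Camera cam;          // orthographic camera to drive
    public Transform[] players; // the 4 tracked players
    public Rect boundary;       // predefined world-space boundary
    public float padding = 1f;  // extra margin around the group

    void LateUpdate()
    {
        // 1. Axis-aligned bounding box of the tracked positions
        Vector2 min = new Vector2(float.MaxValue, float.MaxValue);
        Vector2 max = new Vector2(float.MinValue, float.MinValue);
        foreach (Transform p in players)
        {
            min = Vector2.Min(min, p.position);
            max = Vector2.Max(max, p.position);
        }
        Vector2 center = (min + max) * 0.5f;
        Vector2 size = max - min;

        // 2. Orthographic size is half the vertical extent; account for aspect ratio
        float halfHeight = Mathf.Max(size.y * 0.5f, size.x * 0.5f / cam.aspect) + padding;
        cam.orthographicSize = halfHeight;

        // 3. Clamp the camera center so the view stays inside the boundary
        float halfWidth = halfHeight * cam.aspect;
        center.x = Mathf.Clamp(center.x, boundary.xMin + halfWidth, boundary.xMax - halfWidth);
        center.y = Mathf.Clamp(center.y, boundary.yMin + halfHeight, boundary.yMax - halfHeight);
        cam.transform.position = new Vector3(center.x, center.y, cam.transform.position.z);
    }
}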

Trying to accurately measure 3D distance from a 2D image

I am trying to extract the 3D distance in mm between two known points in a 2D image. I am using square AR markers in order to get the camera coordinates relative to the markers in the scene. The points are the corners of these markers.
An example is shown below:
The code is written in C# and I am using XNA. I am using AForge.net for the CoPlanar POSIT
The steps I take in order to work out the distance:
1. Mark corners on screen. Corners are represented in 2D vector form, Image centre is (0,0). Up is positive in the Y direction, right is positive in the X direction.
2. Use AForge.net Co-Planar POSIT algorithm to get pose of each marker:
float focalLength = 640; //Needed for POSIT
float halfCornerSize = 50; //Represents 1/2 an edge i.e. 50mm
AVector3[] modelPoints = new AVector3[]
{
    new AVector3( -halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, -halfCornerSize ),
    new AVector3( -halfCornerSize, 0, -halfCornerSize ),
};
CoplanarPosit coPosit = new CoplanarPosit(modelPoints, focalLength);
coPosit.EstimatePose(cornersToEstimate, out marker1Rot, out marker1Trans);
3. Convert to XNA rotation/translation matrix (AForge uses OpenGL matrix form):
float yaw, pitch, roll;
marker1Rot.ExtractYawPitchRoll(out yaw, out pitch, out roll);
Matrix xnaRot = Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix xnaTranslation = Matrix.CreateTranslation(marker1Trans.X, marker1Trans.Y, -marker1Trans.Z);
Matrix transform = xnaRot * xnaTranslation;
4. Find 3D coordinates of the corners:
//Model corner points
cornerModel = new Vector3[]
{
    new Vector3(halfCornerSize, 0, -halfCornerSize),
    new Vector3(-halfCornerSize, 0, -halfCornerSize),
    new Vector3(halfCornerSize, 0, halfCornerSize),
    new Vector3(-halfCornerSize, 0, halfCornerSize)
};
Matrix markerTransform = Matrix.CreateTranslation(cornerModel[i].X, cornerModel[i].Y, cornerModel[i].Z);
cornerPositions3d1[i] = (markerTransform * transform).Translation;
//DEBUG: project corner onto screen - represented by brown dots
Vector3 t3 = viewPort.Project(markerTransform.Translation, projectionMatrix, viewMatrix, transform);
cornersProjected1[i].X = t3.X; cornersProjected1[i].Y = t3.Y;
5. Look at the 3D distance between two corners on a marker, this represents 100mm. Find the scaling factor needed to convert this 3D distance to 100mm. (I actually get the average scaling factor):
for (int i = 0; i < 4; i++)
{
    //Distance scale;
    distanceScale1 += (halfCornerSize * 2) / Vector3.Distance(cornerPositions3d1[i], cornerPositions3d1[(i + 1) % 4]);
}
distanceScale1 /= 4;
6. Finally I find the 3D distance between related corners and multiply by the scaling factor to get distance in mm:
for (int i = 0; i < 4; i++)
{
    distance[i] = Vector3.Distance(cornerPositions3d1[i], cornerPositions3d2[i]) * scalingFactor;
}
The distances acquired are never truly correct. I used the cutting board as it allowed me easy calculation of what the distances should be. The above image calculated a distance of 147mm (expected 150mm) for corner 1 (red to purple). The image below shows 188mm (expected 200mm).
What is also worrying is the fact that when measuring the distance between marker corners sharing an edge on the same marker, the 3D distances obtained are never the same. Another thing I noticed is that the brown dots never seem to exactly match up with the colored dots. The colored dots are the coordinates used as input to the CoPlanar posit. The brown dots are the calculated positions from the center of the marker calculated via POSIT.
Does anyone have any idea what might be wrong here? I am pulling out my hair trying to figure it out. The code should be quite simple, I don't think I have made any obvious mistakes with the code. I am not great at maths so please point out where my basic maths might be wrong as well...
You are using way too many black boxes in your question. What is the focal length in the second step? Why go through yaw/pitch/roll in step 3? How do you calibrate? I recommend starting over from scratch without using libraries that you do not understand.
Step 1: Create a camera model. Understand the errors, build a projection. If needed, apply a 2D filter for lens distortion. This might be hard.
Step 2: Find your markers in 2D, after removing lens distortion. Make sure you know the error and that you get the center. Maybe over multiple frames.
Step 3: Un-project to 3D. After 1 and 2 this should be easy (see the sketch after these steps).
Step 4: ???
Step 5: Profit! (Measure distance in 3d and know your error)
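For step 3, the pinhole model behind that un-projection is simple: with focal length f (in pixels) and image coordinates (u, v) measured from the image center, a point at depth Z sits at X = u * Z / f and Y = v * Z / f. A hedged sketch (the depth z has to come from somewhere, e.g. the marker's known physical size):
using System.Numerics;

// Un-project image coordinates (u, v), in pixels from the image center,
// to a 3D camera-space point, given the depth z and focal length f in pixels.
static Vector3 Unproject(float u, float v, float z, float f)
{
    return new Vector3(u * z / f, v * z / f, z);
}
// Projecting back is the inverse: u = f * X / Z, v = f * Y / Z.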
I think you need a stereo pair (two photos taken a known distance apart) so you can get the depth from the parallax between the images.

C# code snippet calculating the surface and vertex normals

I need a C# code snippet calculating the surface and vertex normals. The surface in question is a triangulated, closed 3D mesh. The code snippet must work from a vertex set and triangle indices, which are ready to use at the moment. The surface of the 3D mesh object is not smooth, so it needs to be smoothed.
Could you help me?
It sounds like you're trying to display your 3D mesh and apply a smooth shading appearance by interpolating surface normals, such as in Phong shading, and you need to calculate the normals first. This is different from smoothing the surface of the mesh itself, since that implies altering the positions of its vertices.
Surface normals can be calculated by getting the vector cross product of two edges of a triangle.
As far as code, I'm unaware of any C# examples, but here is one in C++ that should be easy to port. It is taken from the popular NeHe tutorials for OpenGL:
void calcNormal(float v[3][3], float out[3]) // Calculates Normal For A Quad Using 3 Points
{
    float v1[3], v2[3]; // Vector 1 (x,y,z) & Vector 2 (x,y,z)
    static const int x = 0; // Define X Coord
    static const int y = 1; // Define Y Coord
    static const int z = 2; // Define Z Coord
    // Finds The Vector Between 2 Points By Subtracting
    // The x,y,z Coordinates From One Point To Another.
    // Calculate The Vector From Point 1 To Point 0
    v1[x] = v[0][x] - v[1][x]; // Vector 1.x = Vertex[0].x - Vertex[1].x
    v1[y] = v[0][y] - v[1][y]; // Vector 1.y = Vertex[0].y - Vertex[1].y
    v1[z] = v[0][z] - v[1][z]; // Vector 1.z = Vertex[0].z - Vertex[1].z
    // Calculate The Vector From Point 2 To Point 1
    v2[x] = v[1][x] - v[2][x]; // Vector 2.x = Vertex[1].x - Vertex[2].x
    v2[y] = v[1][y] - v[2][y]; // Vector 2.y = Vertex[1].y - Vertex[2].y
    v2[z] = v[1][z] - v[2][z]; // Vector 2.z = Vertex[1].z - Vertex[2].z
    // Compute The Cross Product To Give Us A Surface Normal
    out[x] = v1[y]*v2[z] - v1[z]*v2[y]; // Cross Product For Y - Z
    out[y] = v1[z]*v2[x] - v1[x]*v2[z]; // Cross Product For X - Z
    out[z] = v1[x]*v2[y] - v1[y]*v2[x]; // Cross Product For X - Y
    ReduceToUnit(out); // Normalize The Vectors
}
The normalization function ReduceToUnit() can be found there as well.
Note that this calculates a surface normal for a single triangle. Since you give no information about how your vertices and indices are stored, I will leave it up to you to derive the set of triangles you need to pass to this function.
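Since the question asks for C#, here is a hedged port of the same idea that also accumulates smooth vertex normals (a sketch assuming vertices as a Vector3[] and triangleIndices as a flat int[] of index triples, matching the question's description):
using System.Numerics;

static void ComputeNormals(Vector3[] vertices, int[] triangleIndices,
                           out Vector3[] faceNormals, out Vector3[] vertexNormals)
{
    int triCount = triangleIndices.Length / 3;
    faceNormals = new Vector3[triCount];
    vertexNormals = new Vector3[vertices.Length];

    for (int t = 0; t < triCount; t++)
    {
        int i0 = triangleIndices[3 * t];
        int i1 = triangleIndices[3 * t + 1];
        int i2 = triangleIndices[3 * t + 2];
        // Surface normal: cross product of two triangle edges (winding direction matters)
        Vector3 n = Vector3.Normalize(Vector3.Cross(vertices[i1] - vertices[i0],
                                                    vertices[i2] - vertices[i0]));
        faceNormals[t] = n;
        // Accumulate the face normal on each corner for smooth vertex normals
        vertexNormals[i0] += n;
        vertexNormals[i1] += n;
        vertexNormals[i2] += n;
    }
    // Average by renormalizing the accumulated sums
    for (int i = 0; i < vertexNormals.Length; i++)
        vertexNormals[i] = Vector3.Normalize(vertexNormals[i]);
}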
EDIT: As an additional note, I think the "winding direction" of your triangles is significant. Winding in the wrong direction will cause the normal to point in the opposite direction as well.
