I am writing a custom script for exporting Blender models and animation into a game that runs using OpenGL.
I am aware that Blender uses a Z-up system whereas OpenGL in my game uses a Y-up system. I can easily fix the model mesh and bone positions with a rotation around the X axis of -π/2, which renders the model correctly.
However, in Blender a bone on my character's leg that is parallel with the floor has an Euler X rotation of 0, and a bone that is perpendicular to the floor has an Euler X rotation of -π/2 (or π/2).
In-game, a character's leg that is parallel to the floor has an Euler X rotation of -π/2 (or π/2), and a bone that is perpendicular to the floor has a rotation of 0.
Here, the model's left leg in-game is parallel with the floor even though it has the same XYZ Euler rotation as the left leg in Blender (which is perpendicular to the floor):
The Blender export script for bone rotations at each frame is:
for f in range(scene.frame_start, scene.frame_end + 1):
    bpy.context.scene.frame_set(f)
    bpy.context.scene.update()
    for i in range(len(arm.pose.bones)):
        bone = arm.pose.bones[i]
        out.write(str(boneIDs[bone.name]) + ",")  # ID of the bone
        out.write(VecToStringComma(bone.matrix.to_euler()))  # bone rotation
    out.write("\n")
When loading in C#, the bone rotation matrix is recomposed with the following code:
for (int i = 0; i <= frameCount; i++)
{
    Frame frame = new Frame();
    var parts = r[read].Split(',');
    read++;

    for (int p = 0; p < parts.Length - 1; p += 4)
    {
        FrameDeform fd = new FrameDeform();
        fd.DeformerID = int.Parse(parts[p]);

        Vector3 rot = new Vector3(float.Parse(parts[p + 1]), float.Parse(parts[p + 2]), float.Parse(parts[p + 3]));
        fd.Rotation = Matrix4.CreateRotationX(rot.X) * Matrix4.CreateRotationY(rot.Y) * Matrix4.CreateRotationZ(rot.Z);

        frame.FrameDeforms.Add(fd);
    }

    anim.Frames.Add(frame);
}
Frame 0 in Blender looks like this:
Frame 0 in-game looks like this:
I am aware that at the moment I am not applying any alterations to the rotation values. However, I can see that the legs follow the right movement path in-game, just around the wrong axis.
Rotating the rotation values by -π/2 around the X axis does not help, as the X values remain unchanged.
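For illustration, the kind of conversion I have been experimenting with on the loading side looks roughly like this (only a sketch, assuming the Matrix4 type is OpenTK's; zUpToYUp is the same -π/2 X rotation used for the mesh, conjugating the recomposed bone matrix is just a guess, and the multiplication order may need flipping depending on the row/column-vector convention):

// Basis-change matrices between Blender's Z-up space and the game's Y-up space.
Matrix4 zUpToYUp = Matrix4.CreateRotationX((float)(-Math.PI / 2.0));
Matrix4 yUpToZUp = Matrix4.CreateRotationX((float)(Math.PI / 2.0));

// boneRotation is the matrix recomposed from the exported Euler angles, as in the loader above.
Matrix4 boneRotation = Matrix4.CreateRotationX(rot.X)
                     * Matrix4.CreateRotationY(rot.Y)
                     * Matrix4.CreateRotationZ(rot.Z);

// Conjugation re-expresses the whole rotation in the Y-up basis instead of
// rotating the Euler values themselves.
Matrix4 converted = zUpToYUp * boneRotation * yUpToZUp;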
Any help would be incredibly appreciated as I have been struggling with this issue for quite a while.
Update
When loading the model I apply no rotations around the X axis. Instead, at run-time I apply a rotation of -π/2 around the X axis, which happens in the shader:
vec3 pos = (zMat * bones[id] * vec4(aPosition, 1.0)).xyz;
However this gives me the following result (when only animating the torso):
Example Image here
I am trying to find a way to calculate points on my cylinder's circular top surface. My situation looks like this: I have a vector defining my cylinder's direction in 3D space. I have already calculated a perpendicular vector with
Vector3.Cross(vector1, vector2)
Now I use the diameter/2 (the radius) to calculate the point lying on the edge of the circular top surface of my cylinder. I want to rotate my vector by 90 degrees repeatedly in order to get 4 points on the edge of that surface. All 4 vectors defining them should be perpendicular to the cylinder's direction. Can you help me rotate the first perpendicular vector to achieve this?
I already tried:
Matrix4x4.CreateFromAxisAngle(vectorcylinderdirection, radiant)
Then I calculated the cross product again, but it doesn't work the way I want.
Edit:
public static void calculatePontsOnCylinder()
{
    // Calculate orthogonal vector to direction
    Vector3 tCylinderDirection = new Vector3(1, 0, 0);
    Vector3 tOrthogonal = Vector3.Cross(tCylinderDirection, new Vector3(-tCylinderDirection.Z, tCylinderDirection.X, tCylinderDirection.Y));
    Vector3 tNormOrthogonal = Vector3.Normalize(tOrthogonal);

    // Calculate point on the circular top surface of the cylinder
    // 10mm radius
    int tRadius = 10;
    Vector3 tPointFinder = tNormOrthogonal * tRadius;
    // tPointFinder: add the cylinder start point
    // not yet implemented

    // Now I need to rotate the vector by 90 degrees each time to find the 3 other points
    // on the circular top surface of the cylinder.
    // Don't know how to do this.
    // I thought this should do it:
    Matrix4x4.CreateFromAxisAngle(tCylinderDirection, (float)DegreeToRadian(90));
}

private static double DegreeToRadian(double angle)
{
    return Math.PI * angle / 180.0;
}
In the picture you can see an example: vector1 is what I need, always rotated by 90 degrees, and vector2 would be my cylinder direction vector.
I possibly have found the correct formula:
Vector3 tFinal = Vector3.Multiply((float)Math.Cos(DegreeToRadian(90)), tPointFinder) + Vector3.Multiply((float)Math.Sin(DegreeToRadian(90)), Vector3.Cross(tCylinderDirection, tPointFinder));
Vector3 tFinal180 = Vector3.Multiply((float)Math.Cos(DegreeToRadian(180)), tPointFinder) + Vector3.Multiply((float)Math.Sin(DegreeToRadian(180)), Vector3.Cross(tCylinderDirection, tPointFinder));
Vector3 tFinal270= Vector3.Multiply((float)Math.Cos(DegreeToRadian(270)), tPointFinder) + Vector3.Multiply((float)Math.Sin(DegreeToRadian(270)), Vector3.Cross(tCylinderDirection, tPointFinder));
Interestingly, if I try it with (1,1,0) as the cylinder direction it gives me correct directions, but the length is different for 90 degrees and 270 degrees.
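For reference, the formula above is the axis-angle (Rodrigues) rotation with the k(k·v)(1−cos θ) term dropped, which is fine because tPointFinder is perpendicular to the axis. A likely cause of the different lengths for (1,1,0) is that the axis is not normalized, so the sin term is scaled by the axis length. A sketch with the axis normalized first (RotateAroundAxis is just a made-up helper name, using the same System.Numerics types as above):

// Rotate a radius vector around the (normalized) cylinder axis.
// Because 'radial' is perpendicular to the axis, the k*(k·v)*(1-cos) term
// of Rodrigues' formula is zero and can be dropped.
static Vector3 RotateAroundAxis(Vector3 radial, Vector3 axis, double angleDeg)
{
    Vector3 k = Vector3.Normalize(axis);   // a unit axis is essential here
    double a = Math.PI * angleDeg / 180.0;
    return Vector3.Multiply((float)Math.Cos(a), radial)
         + Vector3.Multiply((float)Math.Sin(a), Vector3.Cross(k, radial));
}

With a unit axis, the 90-degree and 270-degree results keep the same length as tPointFinder.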
Here is the code that should solve your problem assuming that the input requirements are satisfied.
float zCutPlaneLocation = 20; // should not get bigger than the cylinder length
float cylinderRadius = 100;
Vector3 cylinderCenter = new Vector3(0, 0, 0); // or whatever you got as the cylinder center point, given as Vector3 since a Point type is not defined

// This will return 360 points on the cylinder edge, corresponding to this z section (cut plane);
// another z section will give another 360 points and so on.
List<Vector3> cylinderRotatedPointsIn3D = new List<Vector3>();
for (int angleToRotate = 0; angleToRotate < 360; angleToRotate++)
{
    // angleToRotate is in degrees; GetRotatedPoint expects radians, so convert here
    cylinderRotatedPointsIn3D.Add(GetRotatedPoint(zCutPlaneLocation, angleToRotate * Math.PI / 180.0, cylinderRadius, cylinderCenter));
}
....
private static Vector3 GetRotatedPoint(
    float zLocation, double rotationAngleInRadian, float cylinderRadius, Vector3 cylinderCenter)
{
    Vector2 cylinderCenterInSection = new Vector2(cylinderCenter.X, cylinderCenter.Y);

    float xOfRotatedPoint = cylinderRadius * (float)Math.Cos(rotationAngleInRadian);
    float yOfRotatedPoint = cylinderRadius * (float)Math.Sin(rotationAngleInRadian);

    Vector2 rotatedVector = new Vector2(xOfRotatedPoint, yOfRotatedPoint);
    Vector2 rotatedSectionPointOnCylinder = rotatedVector + cylinderCenterInSection;

    Vector3 rotatedPointOnCylinderIn3D = new Vector3(
        rotatedSectionPointOnCylinder.X,
        rotatedSectionPointOnCylinder.Y,
        zLocation + cylinderCenter.Z);
    return rotatedPointOnCylinderIn3D;
}
I just created a console app for this; the first part of the code should be added to the Main method.
Working with those matrices does not seem that easy, and I am not sure your solution works correctly for every angle.
The idea here is that the rotated points are calculated in a 2D cross-section of the cylinder, and the result is then moved into 3D by adding the z value at which the section was made. I assume that the world axis and the cylinder axis point in the same direction. Also, if your cylinder extends along the X axis instead of the Z axis as in the example, just switch Z with X in the code.
I also attached a picture for more details. This should work if you have the cylinder center, radius, and rotation angle, and you know the length of the cylinder so that you can create valid Z sections on it. This could get tricky for clockwise/counter-clockwise cases, but let's see how it works for you.
If you want to handle this with matrices or anything else, I think you will end up with this kind of result. So I don't think you can have "all" the rotated points of the entire cylinder surface in a single list; they would depend on something like the rotated points of a Z section of the cylinder.
So... I'll try to be as clear as possible; if I leave anything unclear, please let me know.
I have a vector that goes from the origin to a point in space, and I have an object whose transform.up (or Y vector) I want to be collinear with this vector, but the Y rotation of this object is driven by another factor, and I don't want to change it.
So far, what I'm trying to do is project this vector onto the local XY and local ZY planes, measure the angles, and apply the rotation:
float xInclination = Mathf.Atan(Vector3.ProjectOnPlane(orbitMomemtumVector, transform.right).z / Vector3.ProjectOnPlane(orbitMomemtumVector, transform.right).y) * Mathf.Rad2Deg;
float yInclination = Mathf.Atan(initialPos.z / initialPos.x) * Mathf.Rad2Deg;
float zInclination = -Mathf.Atan(Vector3.ProjectOnPlane(orbitMomemtumVector, transform.forward).x / Vector3.ProjectOnPlane(orbitMomemtumVector, transform.forward).y) * Mathf.Rad2Deg;

if (initialPos.x < 0 && initialPos.z > 0)
{
    yInclination = 180f - Mathf.Abs(yInclination);
    argumentPeriapsis = argumentPeriapsis - yInclination;
}
else if (initialPos.x < 0 && initialPos.z < 0)
{
    yInclination = 180f + Mathf.Abs(yInclination);
    argumentPeriapsis = argumentPeriapsis - yInclination;
}
else
{
    argumentPeriapsis = argumentPeriapsis - yInclination;
}

transform.rotation = Quaternion.Euler(xInclination, (float)argumentPeriapsis, zInclination);
This image shows the problem; I need the Y arrow to be collinear with the blue line:
Let me be clear on this: don't use Euler angles in 3D space. In fact, avoid them in 2D games as well. Unless your object truly rotates on a single axis, and you never have to get the angle between two rotations or lerp your rotations, don't use them.
What you want is Quaternion.LookRotation(A, B).
A being a vector to which Z will be collinear, X being orthogonal to the plane defined by A and B, and Y belonging to that plane.
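For example, a minimal usage sketch with the vector from the question as A and world up as B:

// Z becomes collinear with the orbit vector; Y stays as close to world up as the constraint allows.
transform.rotation = Quaternion.LookRotation(orbitMomemtumVector, Vector3.up);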
Followup:
To match another axis to A, there are multiple solutions. The first would be to simply apply the LookRotation to a parent object, while the child object is rotated inside it to match whatever rotation you want. You could also edit your entire mesh to do so.
The other way is to simply apply another rotation, so that both combined get your desired result, like so:
Quaternion zMatched = Quaternion.LookRotation(zAxisTarget, direction);
Quaternion yMatched = zMatched * Quaternion.AngleAxis(90f, Vector3.right);
transform.rotation = yMatched;
This will rotate the object so that the y axis becomes collinear to the previous z axis.
This is however not perfect. If you reach this point, you should consider building your own clearer solution based on combining AngleAxis results. But it works well enough.
I have a player position, a pointer indicating the player's view direction, a distance, and a horizontal and a vertical angle. I want to calculate a target position:
that is the given distance away from the player's position
that, from the player's view direction, is the horizontal angle to the right and the vertical angle up
It's about positioning a HoloLens application UI on a sphere around the player. The UI should, for example, be 40 degrees to the left and 20 degrees up from the player's view direction.
Edit: Added an image to clarify. Given are the player position (pX|pY|pZ), the radius (= length of the bold black line), and both angles in degrees.
I'm looking for how to calculate the UI center position (x?|y?|z?).
You can use Quaternion.Euler to create a rotation based on angles in world space and then get the desired result by multiplying it with a known position.
So by using your example you could find the position like this:
float radius, x_rot, y_rot;
Vector3 forwardDirection, playerPos;

// Rotate the forward offset, then add it to the player position so the
// rotation happens around the player rather than the world origin.
Vector3 forwardOffset = forwardDirection * radius;
Vector3 targetPosition = playerPos + Quaternion.Euler(x_rot, y_rot, 0) * forwardOffset;
Try checking out the docs on Quaternion and Quaternion.AngleAxis for more handy rotation utilities.
Answer by a mathematician:
To calculate the spherical position with the given information (distance between objects, x angle, y angle) you use trigonometry:
float x = distance * Mathf.Cos(yAngle) * Mathf.Sin(xAngle);
float z = distance * Mathf.Cos(yAngle) * Mathf.Cos(xAngle);
float y = distance * Mathf.Sin(yAngle);
ui.transform.position = player.transform.position + new Vector3(x,y,z);
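Note that Mathf.Cos and Mathf.Sin expect radians, so if the angles are given in degrees as in the 40°/20° example, they would need to be converted first, for example:

// Hypothetical degree inputs from the example, converted before using the formula above.
float xAngle = 40f * Mathf.Deg2Rad; // horizontal angle
float yAngle = 20f * Mathf.Deg2Rad; // vertical angle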
// Set UI in front of player with the same orientation as the player
ui.transform.position = player.transform.position + player.transform.forward * desiredDistance;
ui.transform.rotation = player.transform.rotation;
// Turn it to the left on the player's up vector, around the player
ui.transform.RotateAround(player.transform.position, player.transform.up, -40);
// Turn it up on the UI's right vector around the player
ui.transform.RotateAround(player.transform.position, ui.transform.right, 20);
This assumes you also want the UI to face the player; otherwise you have to set another rotation after this.
No need to calculate it yourself, the Unity API already does it for you (see RotateAround).
If I am understanding you correctly, you want to create a UI that hovers above a point. I recently did a similar thing in my game, and this is how I did it.
// Use the raycast to get a Vector3 of the location for your UI.
// You could also do this manually or have the computer do it; the main thing is to
// get the location in the world where you want your UI to be, and
// WorldToScreenPoint() will do the rest.
if (Input.GetMouseButtonDown(0))
{
    RaycastHit hit;
    Vector3 pos;
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    if (Physics.Raycast(ray, out hit))
    {
        pos = hit.point;
        pos.y += yOffset; // use the offset if you want to have it hover above the point
        ui.transform.position = cam.WorldToScreenPoint(pos); // use your main camera here

        // Then either make your UI visible or instantiate it here, and make sure that if
        // you instantiate it, you make it a child of your canvas.
    }
}
I hope this solves your problem. If I am not understanding what you are trying to do, let me know and I will try to help.
Note: if you want to make the UI look farther away when you move away from the point, scale the UI down as you move farther away, and scale it up when you get closer.
The diagram in the question is somewhat confusing:
The axes are in the orientation of a right-handed coordinate system, but Unity uses a left-handed coordinate system.
In terms of Euler angles, the part of the image labeled "x Angle" is actually the Y angle (rotation around Y axis), and the part of the image labeled "y Angle" is actually the X angle (around X axis).
The two angles listed use a different sign convention. The Y angle (labeled "x Angle") is following the right-hand rule, while the other angle is not.
Jonas Zimmer has a great answer that follows the conventions in the image, but I'll try to do something a bit less confusing and follows more standard math conventions.
Here is some code for Unity written in C#, in YX rotation order, treating zero angle as forward (+Z), and follows Unity's conventions of a left-handed, Y-is-up, Z-is-forward coordinate system. Increasing Y angle rotates to the right, and increasing X angle rotates down.
public static Vector3 Vector3FromAngleYX(float y, float x)
{
    float cosx = Mathf.Cos(x);
    return new Vector3(cosx * Mathf.Sin(y), -Mathf.Sin(x), cosx * Mathf.Cos(y));
}
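A possible usage for the HoloLens example above (my sketch; radius is the distance from the player, and it assumes the player is looking along world +Z, since the function treats zero as forward):

// 40 degrees to the left (negative Y angle), 20 degrees up (negative X angle).
Vector3 offset = Vector3FromAngleYX(-40f * Mathf.Deg2Rad, -20f * Mathf.Deg2Rad) * radius;
ui.transform.position = player.transform.position + offset;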
Also, I found this question looking to implement a Godot version, so here is a version for Godot Engine written in GDScript, in YX rotation order, treating zero angle as forward (-Z), and follows Godot's conventions of a right-handed, Y-is-up, Z-is-back coordinate system. Increasing Y angle rotates to the left, and increasing X angle rotates up.
func vector3_from_angle_yx(y, x):
    var neg_cosx = -cos(x)
    return Vector3(neg_cosx * sin(y), sin(x), neg_cosx * cos(y))
I am working on a project which has the following goals:
Load rigged 3D mesh (e.g. a human skeleton) with Assimp.NET
Manipulate bones of mesh so it fits your own body (with Microsoft Kinect v2)
Perform vertex skinning
Loading the rigged mesh and extracting bone information works (hopefully) without any problems (based on this tutorial: http://www.richardssoftware.net/2013/10/skinned-models-in-directx-11-with.html). Each bone (class "ModelBone") consists of the following information:
Assimp.Matrix4x4 LocalTransform
Assimp.Matrix4x4 GlobalTransform
Assimp.Matrix4x4 Offset
LocalTransform is directly extracted from assimp node (node.Transform).
GlobalTransform includes the bone's own LocalTransform and all of the parents' LocalTransforms (see the code snippet calculateGlobalTransformation()).
Offset is directly extracted from assimp bone (bone.OffsetMatrix).
At the moment I don't have GPU vertex skinning implemented; instead I iterate over each vertex and manipulate its position and normal vector.
foreach (Vertex vertex in this.Vertices)
{
    Vector3D newPosition = new Vector3D();
    Vector3D newNormal = new Vector3D();

    for (int i = 0; i < vertex.boneIndices.Length; i++)
    {
        int boneIndex = vertex.boneIndices[i];
        float boneWeight = vertex.boneWeights[i];
        ModelBone bone = this.BoneHierarchy.Bones[boneIndex];
        Matrix4x4 finalTransform = bone.GlobalTransform * bone.Offset;

        // Calculate new vertex position and normal
        newPosition += boneWeight * (finalTransform * vertex.originalPosition);
        newNormal += boneWeight * (finalTransform * vertex.originalNormal);
    }

    // Apply new vertex position and normal
    vertex.position = newPosition;
    vertex.normal = newNormal;
}
As I already said, I want to manipulate bones with a Kinect v2 sensor, so I won't have to use animations (e.g. interpolating keyframes). But to begin with, I want to be able to manipulate bones manually (e.g. rotate the mesh's torso by 90 degrees). Therefore I create a 4x4 rotation matrix (90 degrees around the X axis) by calling Assimp.Matrix4x4.FromRotationX(1.5708f). Then I replace the bone's LocalTransform with this rotation matrix:
Assimp.Matrix4x4 rotation = Assimp.Matrix4x4.FromRotationX(1.5708f);
bone.LocalTransform = rotation;
UpdateTransformations(bone);
After the bone manipulation I use the following code to calculate the new GlobalTransform of the bone and its child bones:
public void UpdateTransformations(ModelBone bone)
{
    this.calculateGlobalTransformation(bone);
    foreach (var child in bone.Children)
    {
        UpdateTransformations(child);
    }
}

private void calculateGlobalTransformation(ModelBone bone)
{
    // Global transformation includes own local transformation ...
    bone.GlobalTransform = bone.LocalTransform;

    ModelBone parent = bone.Parent;
    while (parent != null)
    {
        // ... and all local transformations of the parent bones (recursively)
        bone.GlobalTransform = parent.LocalTransform * bone.GlobalTransform;
        parent = parent.Parent;
    }
}
This approach results in this image. The transformation seems to be applied correctly to all child bones, but the manipulated bone rotates around the world-space origin and not around its own local origin :( I already tried to include the GlobalTransform translation (the last row of GlobalTransform) in the rotation matrix before setting it as the LocalTransform, but without success...
I hope somebody can help me with this problem!
Thanks in advance!
Finally I found the solution :) All calculations were correct except:
Matrix4x4 finalTransform = bone.GlobalTransform * bone.Offset;
The correct calculation for me is:
Matrix4x4 finalTransform = bone.GlobalTransform * bone.Offset;
finalTransform.Transpose();
So it seems to be a row-major / column-major problem. My final CPU vertex skinning code is:
public void PerformSmoothVertexSkinning()
{
    // Precompute final transformation matrix for each bone
    List<Matrix4x4> FinalTransforms = new List<Matrix4x4>();
    foreach (ModelBone bone in this.BoneHierarchy.Bones)
    {
        // Multiplying a vector (e.g. vertex position/normal) by finalTransform will (from right to left):
        // 1. transform the vector from mesh space to bone space (by bone.Offset)
        // 2. transform the vector from bone space to world space (by bone.GlobalTransform)
        Matrix4x4 finalTransform = bone.GlobalTransform * bone.Offset;
        finalTransform.Transpose();
        FinalTransforms.Add(finalTransform);
    }

    foreach (Submesh submesh in this.Submeshes)
    {
        foreach (Vertex vertex in submesh.Vertices)
        {
            Vector3D newPosition = new Vector3D();
            Vector3D newNormal = new Vector3D();

            for (int i = 0; i < vertex.BoneIndices.Length; i++)
            {
                int boneIndex = vertex.BoneIndices[i];
                float boneWeight = vertex.BoneWeights[i];

                // Get final transformation matrix to transform each vertex position
                Matrix4x4 finalVertexTransform = FinalTransforms[boneIndex];

                // Get final transformation matrix to transform each vertex normal (has to be inverted and transposed!)
                Matrix4x4 finalNormalTransform = FinalTransforms[boneIndex];
                finalNormalTransform.Inverse();
                finalNormalTransform.Transpose();

                // Calculate new vertex position and normal (average of influencing bones)
                // Formula: newPosition += boneWeight * (finalVertexTransform * vertex.OriginalPosition);
                //                      += boneWeight * (bone.GlobalTransform * bone.Offset * vertex.OriginalPosition);
                // From right to left:
                // 1. Transform vertex position from mesh space to bone space (by bone.Offset)
                // 2. Transform vertex position from bone space to world space (by bone.GlobalTransform)
                // 3. Apply bone weight
                newPosition += boneWeight * (finalVertexTransform * vertex.OriginalPosition);
                newNormal += boneWeight * (finalNormalTransform * vertex.OriginalNormal);
            }

            // Apply new vertex position and normal
            vertex.Position = newPosition;
            vertex.Normal = newNormal;
        }
    }
}
Hopefully this thread is helpful for other people. Thanks for your help Sergey!
To transform a bone you should use its offset matrix:
http://assimp.sourceforge.net/lib_html/structai_bone.html#a9ae5293b5c937436e4b338e20221cc2e
The offset matrix transforms from global space to bone space. If you want to rotate a bone around its origin you should:
transform to bone space
apply rotation
transform to global space
So the bone's global transform may be calculated like this:
bonesGlobalTransform = parentGlobalTransform *
                       bone.offset.inverse() *
                       boneLocalTransform *
                       bone.offset;
So:
transform to bone space with offset matrix
apply local transform
transform to global space with offset.inverse() matrix
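Translated into the Assimp.NET types used in the question, that could look roughly like this (a sketch; parentGlobalTransform stands for the already-accumulated parent transform, and Inverse() mutates the matrix in place, as in the skinning code above):

// Copy the offset first so the in-place Inverse() does not destroy bone.Offset.
Assimp.Matrix4x4 offsetInverse = bone.Offset;
offsetInverse.Inverse();

// global = parent * offset^-1 * local * offset  (the rotation is applied in bone space)
bone.GlobalTransform = parentGlobalTransform
                     * offsetInverse
                     * bone.LocalTransform
                     * bone.Offset;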
I'm making a 2D platformer that features a dynamic camera. The camera must track 4 players at once so that they're all on the screen. In addition, the camera must not move beyond a predefined rectangular boundary. I've tried implementing it, but I just can't get the zooming right so that the camera is always as close as possible to the four objects.
The general algorithm I have so far is:
1. Define the viewing space by calculating a 2D axis-aligned bounding box from the 4 tracked object positions and use its center as the camera position (or average the positions)
2. Calculate an orthographic size using the largest X or Y component of the vectors from the camera's position to each tracked object
3. If the camera is beyond its boundary, calculate the excess amount and displace it in the opposite direction
This seems simple enough on paper, but I can't seem to get a correct working implementation; a rough sketch of steps 1 and 2 as I picture them is below.
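(Not my actual code, just a sketch assuming a Unity orthographic camera; targets, cam and padding are placeholder names.)

// Step 1: axis-aligned bounds around all tracked players; the centre becomes the camera position.
Bounds bounds = new Bounds(targets[0].position, Vector3.zero);
for (int i = 1; i < targets.Length; i++)
{
    bounds.Encapsulate(targets[i].position);
}

Vector3 camPos = bounds.center;
camPos.z = cam.transform.position.z; // keep the camera's own depth in a 2D game
cam.transform.position = camPos;

// Step 2: the orthographic size is half the vertical extent needed to contain everything,
// with the horizontal extent divided by the aspect ratio so the width fits too.
float halfHeight = bounds.extents.y;
float halfWidth = bounds.extents.x / cam.aspect;
cam.orthographicSize = Mathf.Max(halfHeight, halfWidth) + padding;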
Why don't you just take the average of the 4 players' positions and use it as the camera position? Also check whether the players are out of the boundary, and when they are, zoom out.
float x = 0;
float y = 0;
GameObject[] players = new GameObject[4];

foreach (GameObject _ply in players)
{
    x += _ply.transform.position.x;
    y += _ply.transform.position.y;
}

x = x / players.Length;
y = y / players.Length;

foreach (GameObject _ply in players)
{
    if (_ply.transform.position.x > (x + (Screen.width / 2)))
    {
        // zoom out
    }
    if (_ply.transform.position.y > (y + (Screen.height / 2)))
    {
        // zoom out
    }
}
But you still have to implement the zoom-in.