A few days into 3D models and I can't figure out how to move an axis so I can start my rotations. For example, I want to tilt X at 30 degrees, and then I will be lined up to do each axis rotation. If I rotate X to 30 degrees, it throws off the Y and Z rotations. Are there any examples of how to do this in code-behind? The closest example I found was the Manipulator demo, but it doesn't show the code for how the manipulator works.
private Transform3DGroup GetTransforms(Model3D model)
{
    var transforms = new Transform3DGroup();

    // Rotation around X
    transforms.Children.Add(new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0)));
    // Rotation around Y
    transforms.Children.Add(new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0)));
    // Rotation around Z
    transforms.Children.Add(new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 0, 1), 0)));
    // Translate transform (if required)
    transforms.Children.Add(new TranslateTransform3D());

    model.Transform = transforms;
    return transforms;
}
private void SetRotation(double amountX, double amountY, double amountZ, Model3D model, Point3D center)
{
    // Suppose we have a function that gives us all the transforms
    // applied to this object
    var transforms = GetTransforms(model);
    var translation = transforms.Children[3];

    // 'center' is the center point of the object in Point3D format;
    // run it through the translation so the pivot follows the object
    var translatedCenter = translation.Transform(center);

    if (!(transforms.Children[0] is RotateTransform3D rotX)) throw new InvalidOperationException("Expected a RotateTransform3D at index 0.");
    if (!(transforms.Children[1] is RotateTransform3D rotY)) throw new InvalidOperationException("Expected a RotateTransform3D at index 1.");
    if (!(transforms.Children[2] is RotateTransform3D rotZ)) throw new InvalidOperationException("Expected a RotateTransform3D at index 2.");

    // Set the center point of each rotation to the translated center of the object
    rotX.CenterX = rotY.CenterX = rotZ.CenterX = translatedCenter.X;
    rotX.CenterY = rotY.CenterY = rotZ.CenterY = translatedCenter.Y;
    rotX.CenterZ = rotY.CenterZ = rotZ.CenterZ = translatedCenter.Z;

    // Apply the angle amounts
    ((AxisAngleRotation3D)rotX.Rotation).Angle = amountX;
    ((AxisAngleRotation3D)rotY.Rotation).Angle = amountY;
    ((AxisAngleRotation3D)rotZ.Rotation).Angle = amountZ;
}
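A minimal usage sketch (GetCenter here is a hypothetical helper standing in for however you obtain the model's center point):
// Tilt around X by 30 degrees first; because every rotation pivots on the
// object's own (translated) center, the Y and Z rotations stay lined up.
Point3D center = GetCenter(model); // hypothetical helper - substitute your own
SetRotation(30, 0, 0, model, center);

// Later: rotate around Y while keeping the 30-degree X tilt
SetRotation(30, 45, 0, model, center);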
I have been working on a 2D physics engine using polygons, and I am having trouble implementing the actual physics part.
For a bit of background, I am not experienced at all when it comes to physics, so even if I found how to do the entire physics thing online, I would not be able to implement it into my project.
My goals are:
To have polygons fall with gravity.
To have weight, drag, etc.
Collision between multiple polygons.
What I have already made:
A way of displaying and creating multiple polygons.
Moving and rotating a specified object (polygon).
Coefficients for drag, gravity and weight.
Hit boxes and visual boxes. (Visual boxes are what gets displayed; hit boxes are for physics.)
A center point for every object. (So far used for rotation.)
A tick for when everything gets calculated. (Game tick/tick rate or whatever you want to call it.)
What I was not able to add / am looking for:
Actual gravity.
Collision detection.
Velocity for each object.
Collision between objects.
(A rough starting sketch for gravity and velocity is at the end of this post.)
Code snippets / how stuff works so far:
Be aware that my code is janky and could be made better or more efficient.
Efficiency is not what I'm looking for!
Function for creating an object:
public Object CreateNew(PointF[] hb, PointF[] vb, float rt, Color cl, bool gr, PointF ps)
{
    Object obj = new Object
    {
        pos = ps,
        rotation = rt,
        offsets = vb,
        hitBox = hb,
        visBox = vb,
        gravity = gr,
        clr = cl,
    };

    #region center
    // Compute the centroid of the offsets and store it as the object's center
    List<Vector2> v2Points = new List<Vector2>();
    foreach (PointF p in obj.offsets)
    {
        v2Points.Add(new Vector2(p.X, p.Y));
    }
    obj.center = ToPoint(Centroid(v2Points));
    #endregion

    return obj;
}
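Centroid and ToPoint are helpers from the pastebin script linked below; a rough sketch of what they might look like (an assumption - the real version may use the area-weighted polygon centroid):
// Hypothetical stand-ins for the helpers used above.
// This centroid is a plain vertex average, which only matches the true
// polygon centroid for fairly regular shapes.
public Vector2 Centroid(List<Vector2> points)
{
    Vector2 sum = Vector2.Zero;
    foreach (Vector2 p in points) sum += p;
    return sum / points.Count;
}

public PointF ToPoint(Vector2 v)
{
    return new PointF(v.X, v.Y);
}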
Function for changing the position of an object:
public Object ChangePosition(PointF pos, double rot, Object obj)
{
    // Rebuild the visual box from the stored offsets at the new position
    int i = 0;
    foreach (PointF p in obj.visBox)
    {
        obj.visBox[i] = new PointF(obj.offsets[i].X + pos.X, obj.offsets[i].Y + pos.Y);
        i++;
    }

    // Rebuild the hit box the same way
    i = 0;
    foreach (PointF p in obj.hitBox)
    {
        obj.hitBox[i] = new PointF(obj.offsets[i].X + pos.X, obj.offsets[i].Y + pos.Y);
        i++;
    }
    obj.pos = pos;

    // Recompute the centers of the offsets and the hit box
    List<Vector2> v2Points = new List<Vector2>();
    foreach (PointF p in obj.offsets)
    {
        v2Points.Add(new Vector2(p.X, p.Y));
    }
    obj.center = ToPoint(Centroid(v2Points));

    List<Vector2> v2Points2 = new List<Vector2>();
    foreach (PointF p in obj.hitBox)
    {
        v2Points2.Add(new Vector2(p.X, p.Y));
    }
    obj.centerHitBox = ToPoint(Centroid(v2Points2));

    // Undo the previous rotation, then apply the new one
    obj.hitBox = RotatePolygon(obj.hitBox, obj.center, rotation * -1);
    obj.visBox = RotatePolygon(obj.visBox, obj.center, rotation * -1);
    obj.offsets = RotatePolygon(obj.offsets, obj.center, rotation * -1);

    obj.hitBox = RotatePolygon(obj.hitBox, obj.center, rot);
    obj.visBox = RotatePolygon(obj.visBox, obj.center, rot);
    obj.offsets = RotatePolygon(obj.offsets, obj.center, rot);
    rotation = rot;

    return obj;
}
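RotatePolygon rotates every vertex of a polygon around a center point. A sketch of such a helper, assuming angles in degrees (the real one is in the pastebin below):
// Hypothetical sketch of RotatePolygon: rotates each vertex around
// 'center' by the standard 2D rotation
//   x' = cos*dx - sin*dy,  y' = sin*dx + cos*dy
public PointF[] RotatePolygon(PointF[] polygon, PointF center, double angleDegrees)
{
    double rad = angleDegrees * Math.PI / 180.0;
    double cos = Math.Cos(rad), sin = Math.Sin(rad);
    PointF[] result = new PointF[polygon.Length];
    for (int i = 0; i < polygon.Length; i++)
    {
        float dx = polygon[i].X - center.X;
        float dy = polygon[i].Y - center.Y;
        result[i] = new PointF(
            (float)(center.X + cos * dx - sin * dy),
            (float)(center.Y + sin * dx + cos * dy));
    }
    return result;
}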
Pastebin link to object script:
https://pastebin.com/9SnG4vyj
I will provide more information or scripts if anybody needs it!
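As a rough starting point for the missing gravity and velocity (not collision), here is a minimal per-tick Euler integration sketch. It assumes a PointF velocity field added to Object, and the coefficients are placeholders rather than tuned values from the project above:
// Hypothetical per-tick update: accelerate, damp with drag, then move.
public void TickObject(Object obj, float deltaTime)
{
    const float gravityAccel = 9.81f; // placeholder acceleration; the scale is up to you
    const float drag = 0.1f;          // placeholder linear drag coefficient

    if (obj.gravity)
    {
        // Accelerate downward, then damp both components with drag
        obj.velocity = new PointF(
            obj.velocity.X * (1 - drag * deltaTime),
            (obj.velocity.Y + gravityAccel * deltaTime) * (1 - drag * deltaTime));
    }

    // Move the object by its velocity; ChangePosition rebuilds the boxes
    PointF newPos = new PointF(
        obj.pos.X + obj.velocity.X * deltaTime,
        obj.pos.Y + obj.velocity.Y * deltaTime);
    ChangePosition(newPos, obj.rotation, obj);
}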
I'm new to 3D programming and am having a terrible time getting my texture to fill my meshes properly. I've got it sizing correctly on the walls, but the texture on the roof runs at an angle and is stretched out too far.
I have several methods to create the mesh, but they all eventually call the AddTriangle method, where the TextureCoordinates are set.
public static void AddTriangle(this MeshGeometry3D mesh, Point3D[] pts)
{
    // Create the points.
    int index = mesh.Positions.Count;
    foreach (Point3D pt in pts)
    {
        mesh.Positions.Add(pt);
        mesh.TriangleIndices.Add(index++);
        mesh.TextureCoordinates.Add(new Point(pt.X + pt.Z, 0 - pt.Y));
    }
}
Here is how my material is set up.
imageBrush.ImageSource = new BitmapImage(new Uri("pack://application:,,,/Textures/shingles1.jpg"));
imageBrush.TileMode = TileMode.Tile;
imageBrush.ViewportUnits = BrushMappingMode.Absolute;
imageBrush.Viewport = new Rect(0, 0, 25, 25);
SidingColor = new DiffuseMaterial(imageBrush);
SidingColor.Color = RGB(89, 94, 100);
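With ViewportUnits set to Absolute, the 25 x 25 Viewport makes the image repeat every 25 units of texture-coordinate space, so the values that AddTriangle writes into TextureCoordinates directly control the tiling scale.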
My texture looks like this:
And here are the results I'm getting.
That's as close as I could get after hours of fooling around and googling.
Whew, that was a little more difficult than I anticipated.
Here are a few resources that helped me find a solution.
How to convert a 3D point on a plane to UV coordinates?
From the link below I realized the above formula was correct, but for a right-handed coordinate system. I converted it, and that was the final step.
http://www.math.tau.ac.il/~dcor/Graphics/cg-slides/geom3d.pdf
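In short, the mapping projects each vertex onto two orthonormal vectors spanning the triangle's plane: given the unit normal n, pick u perpendicular to n, let v = n × u, and take u_coord = p · u and v_coord = p · v for each point p.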
Here is the code that works in case someone else has this question.
public static void AddTriangle(this MeshGeometry3D mesh, Point3D[] pts)
{
    if (pts.Length != 3) return;

    // Use the three points of the triangle to calculate the normal (angle of the surface)
    Vector3D normal = CalculateNormal(pts[0], pts[1], pts[2]);
    normal.Normalize();

    // Calculate the u and v basis vectors of the plane
    Vector3D u;
    if (normal.X == 0 && normal.Z == 0) u = new Vector3D(normal.Y, -normal.X, 0);
    else u = new Vector3D(normal.X, -normal.Z, 0);
    u.Normalize();
    Vector3D n = new Vector3D(normal.Z, normal.X, normal.Y);
    Vector3D v = Vector3D.CrossProduct(n, u);

    int index = mesh.Positions.Count;
    foreach (Point3D pt in pts)
    {
        // Add the points to create the triangle
        mesh.Positions.Add(pt);
        mesh.TriangleIndices.Add(index++);

        // Apply the uv texture positions
        double u_coor = Vector3D.DotProduct(u, new Vector3D(pt.Z, pt.X, pt.Y));
        double v_coor = Vector3D.DotProduct(v, new Vector3D(pt.Z, pt.X, pt.Y));
        mesh.TextureCoordinates.Add(new Point(u_coor, v_coor));
    }
}
private static Vector3D CalculateNormal(Point3D firstPoint, Point3D secondPoint, Point3D thirdPoint)
{
    // The cross product of two edge vectors gives the surface normal
    var u = new Point3D(firstPoint.X - secondPoint.X,
                        firstPoint.Y - secondPoint.Y,
                        firstPoint.Z - secondPoint.Z);
    var v = new Point3D(secondPoint.X - thirdPoint.X,
                        secondPoint.Y - thirdPoint.Y,
                        secondPoint.Z - thirdPoint.Z);
    return new Vector3D(u.Y * v.Z - u.Z * v.Y, u.Z * v.X - u.X * v.Z, u.X * v.Y - u.Y * v.X);
}
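Note that the direction of the computed normal depends on the triangle's winding order; if a face comes out mirrored, reversing the vertex order flips the normal and with it the u/v basis.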
I am working on a 3D application in WPF and having trouble with the camera. It should be possible to rotate the camera around its own axis (in other words, look around) using the mouse, but I cannot get it to work properly. I create the camera with the following code:
PerspectiveCamera perspectiveCamera = new PerspectiveCamera(new Point3D(0, 30, 0), new Vector3D(0, -1, 0), new Vector3D(0, 0, 1), 90);
perspectiveCamera.NearPlaneDistance = 0.001;
perspectiveCamera.FarPlaneDistance = 1000;
center = new TranslateTransform3D(0, 30, 0);
rot_x = new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0);
rot_y = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0);
rot_z = new AxisAngleRotation3D(new Vector3D(0, 0, 1), 0);
zoom = new ScaleTransform3D(1, 1, 1);
Transform3DGroup t = new Transform3DGroup();
t.Children.Add(zoom);
t.Children.Add(new RotateTransform3D(rot_x));
t.Children.Add(new RotateTransform3D(rot_y));
t.Children.Add(new RotateTransform3D(rot_z));
t.Children.Add(center);
perspectiveCamera.Transform = t;
myViewport3D.Camera = perspectiveCamera;
And then I try to rotate it using the following code:
private void OnViewportMouseMove(object sender, System.Windows.Input.MouseEventArgs e)
{
    if (mouseLeftIsDown)
    {
        Point position = e.GetPosition(this);
        double rotZAngle = (rot_z.Angle % 360) + oldLeftPosition.X - position.X;
        double rotXAngle = (rot_x.Angle % 360) + position.Y - oldLeftPosition.Y;
        if (rotZAngle < 0) // rotZAngle is negative, make it positive
        {
            rotZAngle = 360 + rotZAngle;
        }
        if (rotXAngle < 0) // rotXAngle is negative, make it positive
        {
            rotXAngle = 360 + rotXAngle;
        }
        rot_z.Angle = rotZAngle;
        rot_x.Angle = rotXAngle;
        oldLeftPosition = position;
    }
}
However, it seems that the rotation is not happening around the camera but somewhere else. The model that I load at position (0,0,0) becomes visible after I rotate 180 degrees around the z axis, which should not be the case.
What am I missing?
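One thing worth checking (an assumption based on the code above, not a confirmed diagnosis): the camera's Position of (0, 30, 0) is itself pushed through the rotations before the center translation, so the pivot ends up away from the camera. A sketch that keeps the Position at the origin so the rotations pivot on the camera itself:
// Sketch: camera at the origin, so rot_x/rot_y/rot_z pivot on the camera;
// the trailing 'center' translation then moves the rotated camera to (0, 30, 0).
PerspectiveCamera perspectiveCamera = new PerspectiveCamera(
    new Point3D(0, 0, 0),   // origin: rotations now pivot here
    new Vector3D(0, -1, 0), // look direction
    new Vector3D(0, 0, 1),  // up direction
    90);

Transform3DGroup t = new Transform3DGroup();
t.Children.Add(zoom);
t.Children.Add(new RotateTransform3D(rot_x));
t.Children.Add(new RotateTransform3D(rot_y));
t.Children.Add(new RotateTransform3D(rot_z));
t.Children.Add(center); // TranslateTransform3D(0, 30, 0), applied after the rotations
perspectiveCamera.Transform = t;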
I'm currently playing around with the ILNumerics API and started to plot a few points in a cube.
Then I calculated a regression plane right through those points.
Now I'd like to plot the plane in the same scene, but only at the same size as the point cloud.
I got the parameters of the plane (a, b, c): f(x,y) = a*x + b*y + c.
I know that only z is interesting for plotting a plane, but I've got no clue how to pass the right coordinates to the scene so that the plane is about the same size as the minimum and maximum extent of the points.
Could you give me a simple example of plotting a plane, and a little suggestion on how to set the bounds of that plane right?
Here is what I got so far:
private void ilPanel1_Load(object sender, EventArgs e)
{
    // get the X and Y bounds and calculate Z with parameters
    // plot it!
    var scene = new ILScene {
        new ILPlotCube(twoDMode: false) {
            new ILSurface( ??? ) {
            }
        }
    };
    // view angle etc
    scene.First<ILPlotCube>().Rotation = Matrix4.Rotation(
        new Vector3(1f, 0.23f, 1), 0.7f);

    ilPanel1.Scene = scene;
}
I hope that someone can help me.
Thanks in advance!
You could take the Limits of the plotcube.Plots group and derive the coordinates of the plane from its bounding box. This gives you the min and max x and y coordinates for the plane. Use them to get the corresponding z values by evaluating your plane equation.
Once you have the x, y and z coordinates of the plane, use them with ILSurface to plot the plane.
If you need more help, I can try to add an example.
Edit: the following example plots a plane through 3 arbitrary points. The plane's orientation and position are computed with the help of a plane function zEval; its coefficients a, b, c are derived here from the 3 (concrete) points. You will have to compute your own equation coefficients.
The plane is realized with a surface. One might as well take the 4 coordinates computed in 'P' and use an ILTriangleFan and an ILLineStrip to create the plane and its border. But the surface already comes with a Fill and a Wireframe, so we take it as a quick solution.
private void ilPanel1_Load(object sender, EventArgs e) {
    // 3 arbitrary points
    float[,] A = new float[3, 3] {
        { 1.0f,  2.0f, 3.0f },
        { 2.0f,  2.0f, 4.0f },
        { 2.0f, -2.0f, 2.0f }
    };
    // construct a new plotcube and plot the points
    var scene = new ILScene {
        new ILPlotCube(twoDMode: false) {
            new ILPoints {
                Positions = A,
                Size = 4,
            }
        }
    };
    // Plane equation: this is derived from the concrete example points. In your
    // real world app you will have to adapt the weights a, b and c to your points.
    Func<float, float, float> zEval = (x, y) => {
        float a = 1, b = 0.5f, c = 1;
        return a * x + b * y + c;
    };
    // find the bounding box of the plot contents
    scene.Configure();
    var limits = scene.First<ILPlotCube>().Plots.Limits;

    // Construct the surface / plane to draw.
    // The 'plane' will be a surface constructed from a 2x2 mesh only.
    // The x/y coordinates of the corners / grid points of the surface are taken from
    // the limits of the plots / points. The corresponding Z coordinates are computed
    // by the zEval function. So we give the ILSurface constructor not only Z coordinates
    // as a 2x2 matrix - but a Z,X,Y array of size 2x2x3.
    ILArray<float> P = ILMath.zeros<float>(2, 2, 3);
    Vector3 min = limits.Min, max = limits.Max;
    P[":;:;1"] = new float[,] { { min.X, min.X }, { max.X, max.X } };
    P[":;:;2"] = new float[,] { { max.Y, min.Y }, { max.Y, min.Y } };
    P[":;:;0"] = new float[,] {
        { zEval(min.X, max.Y), zEval(min.X, min.Y) },
        { zEval(max.X, max.Y), zEval(max.X, min.Y) },
    };
    // create the surface, make it semitransparent and modify the colormap
    scene.First<ILPlotCube>().Add(new ILSurface(P) {
        Alpha = 0.6f,
        Colormap = Colormaps.Prism
    });
    // give the scene to the panel
    ilPanel1.Scene = scene;
}
This would create an image similar to this one:
Edit 2: you asked how to disable the automatic scaling of the plot cube while adding the surface:
// before adding the surface:
var plotCube = scene.First<ILPlotCube>();
plotCube.AutoScaleOnAdd = false;
Alternatively, you can set the limits of the cube manually:
plotCube.Limits.Set(min,max);
You will probably want to disable some mouse interactions as well, since they would let the user rescale the cube in a similar (unwanted?) way:
plotCube.AllowZoom = false; // disables the mouse wheel zoom
plotCube.MouseDoubleClick += (_, arg) => {
    arg.Cancel = true; // disables the double-click reset of the plot cube
};
I need to rotate a single coordinate in WPF / C#.
The x, y, z values are stored in GeometryModel3D[] points.
For example, coordinate (x, y, z) is rotated about a specific axis.
[UPDATE] Rotation transformation using a quaternion. The problem is that I don't get the new vector value, and when I view the point cloud, it seems dragged away in MeshLab.
Matrix3D m = Matrix3D.Identity;
Quaternion q = new Quaternion(new Vector3D(320 / 2, y, maxDepth - minDepth), 90);
m.Rotate(q);
Vector3D myVectorToRotate = new Vector3D(
    ((TranslateTransform3D)points[i].Transform).OffsetX,
    ((TranslateTransform3D)points[i].Transform).OffsetY,
    ((TranslateTransform3D)points[i].Transform).OffsetZ);
// Transform returns the rotated vector; it must be assigned back
myVectorToRotate = m.Transform(myVectorToRotate);
pointcloud.Add(new Point3D(myVectorToRotate.X, myVectorToRotate.Y, myVectorToRotate.Z));
I still can't get the correct transformed values.
I want to apply a rotation transformation to the 2nd point cloud scanned from the Kinect. Since the 1st scan data doesn't involve rotation, the code for capturing the data and its usage is as below:
for (int y = 0; y < 240; y += resolution)
{
    for (int x = 0; x < 320; x += resolution)
    {
        if (((TranslateTransform3D)points[i].Transform).OffsetZ >= minDepth
            && ((TranslateTransform3D)points[i].Transform).OffsetZ <= maxDepth)
        {
            pointcloud.Add(new Point3D(
                ((TranslateTransform3D)points[i].Transform).OffsetX,
                ((TranslateTransform3D)points[i].Transform).OffsetY,
                ((TranslateTransform3D)points[i].Transform).OffsetZ));
        }
        i++;
    }
}
Create any kind of matrix, for example a rotation matrix, and then use the static method Vector3D.Multiply(...).
See also this post and the MSDN general transformation overview.
Examples for Vector3D:
3D transformation WPF
Rotate a vector by quaternion
Vector3D v = new Vector3D(1.0, -1.0, 2.0);
...
AxisAngleRotation3D axisAngle = new AxisAngleRotation3D(axis, angle);
RotateTransform3D myRotateTransform = new RotateTransform3D(axisAngle, centerPoint); // centerPoint: the Point3D center of rotation
v = Vector3D.Multiply(v, myRotateTransform.Value); // Multiply is static and returns the transformed vector
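Putting that together with the quaternion approach from the question, a minimal self-contained sketch (the axis, angle and vector are made-up values for illustration):
using System.Windows.Media.Media3D;

class RotateExample
{
    static void Main()
    {
        // The quaternion axis must be a direction, not a position in space.
        Matrix3D m = Matrix3D.Identity;
        Quaternion q = new Quaternion(new Vector3D(0, 1, 0), 90); // 90 degrees around Y
        m.Rotate(q);

        Vector3D v = new Vector3D(1.0, -1.0, 2.0);
        // Transform returns the rotated vector - it must be assigned back
        v = m.Transform(v);
        // v is now approximately (2, -1, -1)
    }
}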