I am new to 3D graphics and also WPF, and I need to combine the two in my current project. I add points and normals to a MeshGeometry3D, add the MeshGeometry3D to a GeometryModel3D, add the GeometryModel3D to a ModelVisual3D, and finally add the ModelVisual3D to the Viewport3D. Now if I need to rotate, I perform the required Transform on either the GeometryModel3D or the ModelVisual3D and add it to the Viewport3D again. I'm running into a problem:
objViewPort3D.Remove(objModelVisual3D);
objGeometryModel3D.Transform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), angle += 15));
objModelVisual3D.Content = objGeometryModel3D;
objViewPort3D.Children.Add(objModelVisual3D);
To rotate it every time by 15 degrees, why must I do angle += 15 and not just 15? It seems that the stored model is not changed by the Transform operation; the transformation is only applied when the Viewport3D displays it. I want the transformation to actually change the coordinates in the stored MeshGeometry3D object, so that the next transform acts on the previously transformed model and not on the original. How do I obtain this behaviour?
I think you can use an animation.
Some pseudo-code:
angle = 0
function onClick:
new_angle = angle + 30
Animate(angle, new_angle)
angle = new_angle
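In WPF specifically, that pseudo-code maps onto animating the Angle of an AxisAngleRotation3D. A minimal sketch, assuming rotation is the AxisAngleRotation3D instance already assigned to the model's RotateTransform3D:

```csharp
// Sketch only: 'rotation' and 'angle' are assumed to exist elsewhere.
using System;
using System.Windows.Media.Animation;
using System.Windows.Media.Media3D;

double newAngle = angle + 30;
var anim = new DoubleAnimation
{
    From = angle,
    To = newAngle,
    Duration = TimeSpan.FromMilliseconds(300)
};
// AxisAngleRotation3D derives from Animatable, so it can be animated directly.
rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, anim);
angle = newAngle;
```

The renderer interpolates the angle for you each frame, so there is no need to touch the mesh data at all.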
You have to do angle += 15 because you're applying a new RotateTransform3D each time.
This might help:
public RotateTransform3D MyRotationTransform { get; set; }
...
//constructor
public MyClass()
{
MyRotationTransform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0));
}
//in your method
((AxisAngleRotation3D)MyRotationTransform.Rotation).Angle += 15;
objGeometryModel3D.Transform = MyRotationTransform;
Correct: the vertices of the mesh are not modified by the "Transform" operation. Instead, the Transform property defines the world transform of the mesh during rendering.
In 3d graphics the world transform transforms the points of the mesh from object space to world space during the render of the object.
(Image from World, View and Projection Matrix Unveiled)
It's much faster to set the world transform and let the renderer draw the mesh with a single transform than to transform each vertex of the mesh yourself, as you are proposing.
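That said, if you really want the stored coordinates to change (so each rotation starts from the already-rotated mesh), you can bake the transform into the mesh yourself. A sketch, assuming objMeshGeometry3D is the MeshGeometry3D the model was built from:

```csharp
// Permanently applies a 15-degree Y rotation to the stored vertices.
var rotation = new RotateTransform3D(
    new AxisAngleRotation3D(new Vector3D(0, 1, 0), 15));

var positions = objMeshGeometry3D.Positions;
for (int i = 0; i < positions.Count; i++)
{
    positions[i] = rotation.Transform(positions[i]); // Point3D overload
}

// Normals must be rotated too (the Vector3D overload ignores translation).
var normals = objMeshGeometry3D.Normals;
for (int i = 0; i < normals.Count; i++)
{
    normals[i] = rotation.Transform(normals[i]);
}
```

As noted above, this is slower than letting the renderer apply a single world transform, so only do it if you genuinely need the mesh data itself updated.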
I am trying to make realistic-looking procedurally generated buildings. So far I have made a texture generation program for them. I would like to apply that texture to a simple cube in Unity; however, I don't want to apply it to all of the faces. Currently, when I apply the texture to the cube's material, it applies the same texture to every face, and on some faces the texture is upside down. Do you recommend that I make plane objects and apply the textures to each of those (and form a cube that way)? I know this would work, but is it efficient at a large scale? Or is there a way to apply different textures to individual faces of a cube in C#?
Considering that you are trying to create buildings you should procedurally generate your own 'cube'/building mesh data.
For example:
using UnityEngine;
using System.Collections;
public class ExampleClass : MonoBehaviour {
public Vector3[] newVertices;
public Vector2[] newUV;
public int[] newTriangles;
void Start() {
Mesh mesh = new Mesh();
mesh.vertices = newVertices;
mesh.uv = newUV;
mesh.triangles = newTriangles;
GetComponent<MeshFilter>().mesh = mesh;
}
}
Then you populate the vertices, UVs, and triangles with data.
For example, you could create a small cube centred on the origin with:
int size = 1;
newVertices = new Vector3[]{
new Vector3(-size, -size, -size),
new Vector3(-size, size, -size),
new Vector3( size, size, -size),
new Vector3( size, -size, -size),
new Vector3( size, -size, size),
new Vector3( size, size, size),
new Vector3(-size, size, size),
new Vector3(-size, -size, size)
};
Then, because you only want the texture rendered on one of the mesh's faces (note that Unity requires one UV per vertex, so the array length must match newVertices):
newUV = new Vector2[]{
new Vector2(0, 0),
new Vector2(0, 1),
new Vector2(1, 1),
new Vector2(1, 0),
new Vector2(0, 0),
new Vector2(0, 0),
new Vector2(0, 0),
new Vector2(0, 0)
};
Note that UV coordinates are only meaningfully allocated for some of the vertices. This effectively places your desired texture on one of the faces; depending on how you allocate the triangles, the remaining UVs can be altered as you see fit so the other faces appear untextured.
Please note that I wrote this without an IDE, so there may be some syntax errors. Also, this isn't the most straightforward process, but I promise it far exceeds using a series of quads as a building, because with this you can make a whole range of shapes 'easily'.
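To make the single textured face concrete, here is a hedged sketch of the triangle indices for the face built from the four z = -size vertices above (indices 0 to 3). The winding order is an assumption; if the face comes out invisible, flip it:

```csharp
// Two triangles covering one face of the cube.
// Unity treats clockwise winding (as seen by the camera) as front-facing.
newTriangles = new int[]
{
    0, 1, 2,   // first half of the face
    0, 2, 3    // second half of the face
};
```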
Resources:
http://docs.unity3d.com/ScriptReference/Mesh.html
https://msdn.microsoft.com/en-us/library/windows/desktop/bb205592%28v=vs.85%29.aspx
http://docs.unity3d.com/Manual/GeneratingMeshGeometryProcedurally.html
AFAIK Unity does not natively support this.
There are, however, some easy ways to work around it. Your idea of using a plane is a good thought, but a quad would make more sense. Or, even easier, create a multi-faced cube in a modelling tool and export it as, for example, an .fbx.
I have an object in my game that has a few meshes, and when I try to rotate any of the meshes, it only rotates around the world axes, not its local axes. I have rotation = Matrix.Identity in a class constructor, and every mesh has this class attached to it. The class also contains these methods:
...
public Matrix Transform{ get; set; }
public void Rotate(Vector3 newRot)
{
rotation = Matrix.Identity;
rotation *= Matrix.CreateFromAxisAngle(rotation.Up, MathHelper.ToRadians(newRot.X));
rotation *= Matrix.CreateFromAxisAngle(rotation.Right, MathHelper.ToRadians(newRot.Y));
rotation *= Matrix.CreateFromAxisAngle(rotation.Forward, MathHelper.ToRadians(newRot.Z));
CreateMatrix();
}
private void CreateMatrix()
{
Transform = Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(Position);
}
...
And now the Draw() method:
foreach (MeshProperties mesh in model.meshes)
{
foreach (BasicEffect effect in mesh.Mesh.Effects)//Where Mesh is a ModelMesh that this class contains information about
{
effect.View = cam.view;
effect.Projection = cam.projection;
effect.World = mesh.Transform;
effect.EnableDefaultLighting();
}
mesh.Mesh.Draw();
}
EDIT:
I am afraid either I screwed up somewhere or your technique does not work. This is what I did: whenever I move the whole object (the parent), I set its Vector3 Position to the new value. I also set every MeshProperties Vector3 Position to that value. Then, inside CreateMatrix() of MeshProperties, I did this:
...
Transform = RotationMatrix * Matrix.CreateScale(x, y, z) * RotationMatrix * Matrix.CreateTranslation(Position) * Matrix.CreateTranslation(Parent.Position);
...
Where:
public void Rotate(Vector3 newRot)
{
Rotation = newRot;
RotationMatrix = Matrix.CreateFromAxisAngle(Transform.Up, MathHelper.ToRadians(Rotation.X)) *
Matrix.CreateFromAxisAngle(Transform.Forward, MathHelper.ToRadians(Rotation.Z)) *
Matrix.CreateFromAxisAngle(Transform.Right, MathHelper.ToRadians(Rotation.Y));
}
And Rotation is Vector3.
RotationMatrix and Transform are both set to Matrix.Identity in the constructor.
The problem is that if I try to rotate around, for example, the Y axis, it should rotate in a circle while "standing still". But it moves around while rotating.
I'm not entirely certain this is what you want. I'm assuming here you have an object, with some meshes and positions offset from the position and orientation of the main object position and you want to rotate the child object around its local axis relative to the parent.
Matrix.CreateTranslation(-Parent.Position) * //Move mesh back...
Matrix.CreateTranslation(-Mesh.PositionOffset) * //...to object space
Matrix.CreateFromAxisAngle(Mesh.LocalAxis, AngleToRotateBy) * //Now rotate around your axis
Matrix.CreateTranslation(Mesh.PositionOffset) * //Move the mesh...
Matrix.CreateTranslation(Parent.Position); //...back to world space
Of course you usually store a transform matrix which transforms a mesh from object space to world space in one step, and you'd also store the inverse. You also store the mesh in object coordinates all the time and only move it into world coordinate for rendering. This would simplify things a little:
Matrix.CreateFromAxisAngle(Mesh.LocalAxis, AngleToRotateBy) * //We're already in object space, so just rotate
ObjectToWorldTransform *
Matrix.CreateTranslation(Parent.Position);
I think you could simply set Mesh.Transform in your example to this and be all set.
I hope this is what you were looking for!
The problem was that when I exported the model as .FBX, the pivot point wasn't at the model's centre, which made the model move while rotating.
I'm working on an RPG game with a top-down view. I want to load a picture as the background the character walks on, but so far I haven't figured out how to redraw the background correctly so that it "scrolls". Most of the examples I find are auto-scrolling.
I want the camera to remain centred on the character until the background image reaches its boundary; after that, the character should move without the image being redrawn in another position.
Your question is a bit unclear, but I think I get the gist of it. Let's look at your requirements.
You have an overhead camera that's looking directly down onto a two-dimensional plane. We can represent this as a simple {x, y} coordinate pair, corresponding to the point on the plane at which the camera is looking.
The camera can track the movement of some object, probably the player, but more generally anything within the game world.
The camera must remain within the finite bounds of the game world.
Which is simple enough to implement. In broad terms, somewhere inside your Update() method you need to carry out steps to fulfill each of those requirements:
if (cameraTarget != null)
{
camera.Position = cameraTarget.Position;
ClampCameraToWorldBounds();
}
In other words: if we have a target object, lock our position to its position; but make sure that we don't go out of bounds.
ClampCameraToWorldBounds() is also simple to implement. Assuming you have some object, world, with a Bounds property that represents the world's extent in pixels:
private void ClampCameraToWorldBounds()
{
var screenWidth = graphicsDevice.PresentationParameters.BackBufferWidth;
var screenHeight = graphicsDevice.PresentationParameters.BackBufferHeight;
var minimumX = (screenWidth / 2);
var minimumY = (screenHeight / 2);
var minimumPos = new Vector2(minimumX, minimumY);
var maximumX = world.Bounds.Width - (screenWidth / 2);
var maximumY = world.Bounds.Height - (screenHeight / 2);
var maximumPos = new Vector2(maximumX, maximumY);
camera.Position = Vector2.Clamp(camera.Position, minimumPos, maximumPos);
}
This makes sure that the camera is never closer than half of a screen to the edge of the world. Why half a screen? Because we've defined the camera's {x, y} as the point that the camera is looking at, which means that it should always be centered on the screen.
This should give you a camera with the behavior that you specified in your question. From here, it's just a matter of implementing your terrain renderer such that your background is drawn relative to the {x, y} coordinate specified by the camera object.
Given an object's position in game-world coordinates, we can translate that position into camera space:
var worldPosition = new Vector2(x, y);
var cameraSpace = camera.Position - worldPosition;
And then from camera space into screen space:
var screenSpaceX = (screenWidth / 2) - cameraSpace.X;
var screenSpaceY = (screenHeight / 2) - cameraSpace.Y;
You can then use an object's screen space coordinates to render it.
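Putting the two steps together with made-up numbers (the values here are purely illustrative):

```csharp
// Assumed example values.
var screenWidth = 800f;
var screenHeight = 600f;
var cameraPos = new Vector2(500f, 400f);     // point the camera looks at
var worldPosition = new Vector2(450f, 420f); // object in world coordinates

// Camera space: the object's offset from the camera target.
var cameraSpace = cameraPos - worldPosition; // (50, -20)

// Screen space: the camera target sits at the centre of the screen.
var screenSpaceX = (screenWidth / 2) - cameraSpace.X;  // 350: 50px left of centre
var screenSpaceY = (screenHeight / 2) - cameraSpace.Y; // 320: 20px below centre
```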
You can represent the position as a simple Vector2 and move it towards any entity.
public Vector2 cameraPosition;
When you load your level, you will need to set the camera position to your player (or whatever object it should follow).
You will need a matrix and a few other pieces, as seen in the code below; it is explained in the comments. Doing it this way saves you from having to add cameraPosition to everything you draw.
//This will move our camera
ScrollCamera(spriteBatch.GraphicsDevice.Viewport);
//We now must get the center of the screen
Vector2 Origin = new Vector2(spriteBatch.GraphicsDevice.Viewport.Width / 2.0f, spriteBatch.GraphicsDevice.Viewport.Height / 2.0f);
//Now the matrix, It will hold the position, and Rotation/Zoom for advanced features
Matrix cameraTransform = Matrix.CreateTranslation(new Vector3(-cameraPosition, 0.0f)) *
Matrix.CreateTranslation(new Vector3(-Origin, 0.0f)) *
Matrix.CreateRotationZ(rot) * //Add Rotation
Matrix.CreateScale(zoom, zoom, 1) * //Add Zoom
Matrix.CreateTranslation(new Vector3(Origin, 0.0f)); //Add Origin
//Now we can start to draw with our camera, using the Matrix overload
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default,
RasterizerState.CullCounterClockwise, null, cameraTransform);
DrawTiles(spriteBatch); //Or whatever method you have for drawing tiles
spriteBatch.End(); //End the camera spritebatch
// After this you can make another spritebatch without a camera to draw UI and things that will not move
I added the zoom and rotation in case you want anything fancy; just replace the variables.
That should get you started on it.
However, you will want to make sure the camera stays in bounds and make it follow the player.
I'll show you how to add smooth scrolling; if you want simple scrolling, see this sample.
private void ScrollCamera(Viewport viewport)
{
//Add to the camera position, so we can see the origin
cameraPosition.X = cameraPosition.X + (viewport.Width / 2);
cameraPosition.Y = cameraPosition.Y + (viewport.Height / 2);
//Smoothly move the camera towards the player
cameraPosition.X = MathHelper.Lerp(cameraPosition.X , Player.Position.X, 0.1f);
cameraPosition.Y = MathHelper.Lerp(cameraPosition.Y, Player.Position.Y, 0.1f);
//Undo the origin because it will be calculated with the Matrix (I know this isn't the best way, but it's what I had real quick)
cameraPosition.X = cameraPosition.X -( viewport.Width / 2);
cameraPosition.Y = cameraPosition.Y - (viewport.Height / 2);
//Shake the camera, use the mouse to scroll, or anything like that; add it here (e.g. earthquakes)
//Round it, so it doesn't try to draw in between 2 pixels
cameraPosition.Y= (float)Math.Round(cameraPosition.Y);
cameraPosition.X = (float)Math.Round(cameraPosition.X);
//Clamp it off, So it stops scrolling near the edges
cameraPosition.X = MathHelper.Clamp(cameraPosition.X, 1f, Width * Tile.Width);
cameraPosition.Y = MathHelper.Clamp(cameraPosition.Y, 1f, Height * Tile.Height);
}
Hope this helps!
Imagine a segmented creature such as a centipede. The head segment is under control, and each body segment is attached to the previous segment at a point.
As the head moves (in the 8 cardinal/intercardinal directions for now), that point moves in relation to the head's rotation.
public static Vector2 RotatePoint(Vector2 pointToRotate, Vector2 centerOfRotation, float angleOfRotation)
{
Matrix rotationMatrix = Matrix.CreateRotationZ(angleOfRotation);
return Vector2.Transform(pointToRotate - centerOfRotation, rotationMatrix) + centerOfRotation; // translate back after rotating
}
(I was going to post a diagram here, but the ASCII art did not survive; it labelled the head and first body segment as center(1)/point(1) and center(2)/point(2), with the points rotating about their centres.)
I have thought of using a rectangle property/field for the base sprite,
private Rectangle bounds = new Rectangle(-16, 16, 32, 32);
and checking that a predefined point within the body segment remains within the head sprite's bounds.
Though I am currently doing:
private static void handleInput(GameTime gameTime)
{
Vector2 moveAngle = Vector2.Zero;
moveAngle += handleKeyboardMovement(Keyboard.GetState()); // basic movement, combined to produce 8 angles
// of movement
if (moveAngle != Vector2.Zero)
{
moveAngle.Normalize();
baseAngle = moveAngle;
}
BaseSprite.RotateTo(baseAngle);
BaseSprite.LeftAnchor = RotatePoint(BaseSprite.LeftAnchor,
BaseSprite.RelativeCenter, BaseSprite.Rotation); // call RotatePoint method
BaseSprite.LeftRect = new Rectangle((int)BaseSprite.LeftAnchor.X - 1,
(int)BaseSprite.LeftAnchor.Y - 1, 2, 2);
// All segments use a field/property that is a point which is suppose to rotate around the center
// point of the sprite (left point is (-16,0) right is (16,0) initially
// I then create a rectangle derived from that point to make use of the .Intersets method of the
// Rectangle class
BodySegmentOne.RightRect = BaseSprite.LeftRect; // make sure segments are connected?
BaseSprite.Velocity = moveAngle * wormSpeed;
//BodySegmentOne.RightAnchor = BaseSprite.LeftAnchor;
if (BodySegmentOne.RightRect.Intersects(BaseSprite.LeftRect)) // as long as there two rects occupy the
{ // same space move segment with head
BodySegmentOne.Velocity = BaseSprite.Velocity;
}
}
As it stands now, the segment moves with the head but in a parallel fashion. I would like a more nuanced movement of the segment as it is dragged along by the head.
I understand that the coding of such movement will be much more involved than what I have here. Some hints or directions as to how I should look at this problem would be greatly appreciated.
I will describe what you need to do using a physics engine like Farseer, but the same holds if you want to write your own physics engine.
Create a Body for each articulated point of the centipede body.
Create a Shape that encapsulates the outer shell that will be attached to each point.
Attach the Body and Shape using a Fixture. This creates one link in your centipede.
Attach multiple links using a SliderJoint.
For example, assuming that the outer shell of each link is a circle, here's how to create two links and join them together:
Fixture fix1 = FixtureFactory.CreateCircle(_world, 0.5f, 1, new Vector2(-5, 5));
fix1.Body.BodyType = BodyType.Dynamic;
Fixture fix2 = FixtureFactory.CreateCircle(_world, 0.5f, 1, new Vector2(5, 5));
fix2.Body.BodyType = BodyType.Dynamic;
JointFactory.CreateSliderJoint(_world, fix1.Body, fix2.Body, Vector2.Zero, Vector2.Zero, 10, 15);
Now applying a force on any of the bodies or collisions on the shapes will drag the second joint around - just like you want.
This is all just stick physics - so you could implement your own if you REALLY wanted to. ;)
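For reference, the "stick physics" can be sketched without an engine at all: after moving the head, pull each following point back to a fixed distance from its predecessor. This is a common position-based (Verlet-style) distance constraint; the method name here is made up:

```csharp
// Keeps each segment exactly 'segmentLength' behind the one before it.
// points[0] is the head; call this once per frame after moving the head.
static void SolveChain(Vector2[] points, float segmentLength)
{
    for (int i = 1; i < points.Length; i++)
    {
        Vector2 delta = points[i] - points[i - 1];
        float dist = delta.Length();
        if (dist > 0.0001f)
        {
            // Slide the trailing point along the link so its length is fixed.
            points[i] = points[i - 1] + delta * (segmentLength / dist);
        }
    }
}
```

Because each trailing point keeps its direction relative to its predecessor and only its distance is corrected, the segments swing naturally behind the head instead of translating in parallel.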
Am I doing the following right?
Well, obviously not, because otherwise I wouldn't be posting a question here, but I'm trying to do a quaternion rotation of one model around another.
Lets say I have a box model that has a vector3 position and a float rotation angle.
I also have a frustum-shaped model pointing towards the box model, positioned, let's say, 50 units from it. The frustum also has a Vector3 position and a Quaternion rotation.
In scenario 1, the box and frustum are "unrotated". This is all fine and well.
In scenario 2, I rotate the box only, and I want the frustum to rotate with it (kind of like a chase camera), with the frustum always pointing directly at the box and at the same distance as in the unrotated case. Obviously, if I just rotate both the box and the frustum using Matrix.CreateRotationY(), the frustum ends up slightly offset to the side.
So I thought a Quaternion rotation of the frustum around the box would be best?
To this end I have tried the following, with no luck. It draws my models on the screen, but it also draws what looks like a giant box, and no matter how far away I move the camera, the box is always in the way.
For the purpose of testing, I have 3 boxes and their 3 associated frustums
In my Game1 class I initialize the boxes with positions and rotations:
boxObject[0].Position = new Vector3(10, 10, 10);
boxObject[1].Position = new Vector3(10, 10, 10);
boxObject[2].Position = new Vector3(10, 10, 10);
boxObject[0].Rotation = 0.0f;
boxObject[1].Rotation = 45.0f;
boxObject[2].Rotation = -45.0f;
So all 3 boxes are drawn at the same position but at different angles.
Then, to do the frustums, I initialize their positions:
float f = 50.0f;
frustumObject[0].Position = new Vector3(boxObject[0].Position.X,
boxObject[0].Position.Y, boxObject[0].Position.Z + f);
frustumObject[1].Position = new Vector3(boxObject[1].Position.X,
boxObject[1].Position.Y, boxObject[1].Position.Z + f);
frustumObject[2].Position = new Vector3(boxObject[2].Position.X,
boxObject[2].Position.Y, boxObject[2].Position.Z + f);
And then try to rotate each around its associated box model:
frustumObject[0].ModelRotation = new Quaternion(boxObject[0].Position.X,
boxObject[0].Position.Y, boxObject[0].Position.Z + f, 0);
frustumObject[1].ModelRotation = new Quaternion(boxObject[1].Position.X,
boxObject[1].Position.Y, boxObject[1].Position.Z + f, 45);
frustumObject[2].ModelRotation = new Quaternion(boxObject[2].Position.X,
boxObject[2].Position.Y, boxObject[2].Position.Z + f, -45);
And finally, to draw the models, I call Draw() in my GameModel class, which also has:
public Model CameraModel { get; set; }
public Vector3 Position { get; set; }
public float Rotation { get; set; }
public Quaternion ModelRotation { get; set; }
public void Draw(Matrix view, Matrix projection)
{
transforms = new Matrix[CameraModel.Bones.Count];
CameraModel.CopyAbsoluteBoneTransformsTo(transforms);
// Draw the model
foreach (ModelMesh myMesh in CameraModel.Meshes)
{
foreach (BasicEffect myEffect in myMesh.Effects)
{
// IS THIS CORRECT?????
myEffect.World = transforms[myMesh.ParentBone.Index] *
Matrix.CreateRotationY(Rotation) * Matrix.CreateFromQuaternion(ModelRotation) * Matrix.CreateTranslation(Position);
myEffect.View = view;
myEffect.Projection = projection;
myEffect.EnableDefaultLighting();
myEffect.SpecularColor = new Vector3(0.25f);
myEffect.SpecularPower = 16;
}
myMesh.Draw();
}
}
Can anyone spot where I am going wrong? Is it because I am doing 2 types of rotations in the Draw()?
myEffect.World = transforms[myMesh.ParentBone.Index] *
Matrix.CreateRotationY(Rotation) * Matrix.CreateFromQuaternion(ModelRotation) * Matrix.CreateTranslation(Position);
From a quick glance, it would be best to create your quaternions using a static factory method such as Quaternion.CreateFromAxisAngle(Vector3.UnitY, rotation). The X, Y, Z and W values of a quaternion do not relate to position in any way; the handy static methods take care of the tricky math.
In your situation it appears as though you want to keep the frustum pointing at the same side of the box as the box rotates, therefore rotating the frustum about the box. This requires a slightly different approach to the translation done in your draw method.
In order to rotate an object about another, you first need to translate the object so that the centre of the desired rotation is at the origin. Then rotate the object and translate it back by the same amount as the first step.
So in your situation, something like this should do it (untested example code to follow):
// Construct the objects
boxObject.Position = new Vector3(10, 10, 10);
boxObject.Rotation = 45.0f;
frustumObject.Position = new Vector3(0, 0, 50f); // Note: this will be relative to the box (makes the math a bit simpler)
frustumObject.TargetPosition = boxObject.Position;
frustumObject.ModelRotation = Quaternion.CreateFromAxisAngle(Vector3.UnitY, MathHelper.ToRadians(boxObject.Rotation)); // Note: CreateFromAxisAngle expects radians.
// Box Draw()
// Draw the box at its position, rotated about its centre.
myEffect.World = transforms[myMesh.ParentBone.Index] * Matrix.CreateRotationY(Rotation) * Matrix.CreateTranslation(Position);
// Frustum Draw()
// Draw the frustum facing the box and rotated about the box's centre.
myEffect.World = transforms[myMesh.ParentBone.Index] * Matrix.CreateTranslation(Position) * Matrix.CreateFromQuaternion(ModelRotation) * Matrix.CreateTranslation(TargetPosition);
Assumptions:
The box rotates about its own centre
The frustum stays facing the box and rotates around the box's centre
Hope this helps.