Expression Blend lets you import 3D models, and I want to animate a 3D object from code. I just can't figure out which property values I have to modify to make an object rotate. Let me show you what I mean:
If I want to rotate this object I could use the camera orbit tool, and if I use it I end up with something like:
I know I can create a storyboard and build the animation by modifying the object, but I need to rotate the object along the X axis with a slider. If I modify just one value it rotates in a weird way; I actually have to change several properties to do it properly. For example, when I rotate the object along the X axis with the camera orbit tool I can see that all of these properties change. I need to figure out what algorithm is being used to rotate the object.
The math to move the camera position around so that you appear to be rotating around the X axis is just the parametric equation of a circle (traced in the Y-Z plane):

y = r * cos(t), z = r * sin(t)

where t is the angle from zero to 2*pi and r is the camera's distance from the object.
Imagine you are standing on the street looking at a house. The camera's coordinates have to follow a circle around the house, and its latitude and longitude keep changing so that it stays the same distance from the house. So there is no single value you can change to make it rotate.
Once you know the camera position, the direction is just the difference between the origin and the camera position.
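For illustration, here is a minimal C# sketch of that idea using WPF's PerspectiveCamera; the radius and angle values are placeholders, not taken from your scene:

using System;
using System.Windows.Media.Media3D;

// Hedged sketch: orbit the camera around the X axis of a target at the origin.
static void OrbitCameraAroundX(PerspectiveCamera camera, double radius, double angle)
{
    // Parametric circle in the Y-Z plane (rotation about the X axis).
    double y = radius * Math.Cos(angle);
    double z = radius * Math.Sin(angle);

    camera.Position = new Point3D(camera.Position.X, y, z);

    // The look direction is the difference between the origin and the camera position.
    camera.LookDirection = new Point3D(0, 0, 0) - camera.Position;
}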
All this is not hard to calculate, but there is an easier way: keep the camera fixed and rotate the object instead. That makes animations much easier. Here is an MSDN article that contains examples of this approach, including animations:
3-D Transformations Overview
That article is written for WPF and Visual Studio, but you can easily adapt the same ideas to Expression Blend.
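As a rough sketch of that approach in code (the method name and the model variable here are just placeholders; hook the model up however your scene is structured), you can attach a RotateTransform3D to the model and animate its AxisAngleRotation3D:

using System;
using System.Windows.Media.Animation;
using System.Windows.Media.Media3D;

// Hedged sketch: spin a model about its own X axis while the camera stays fixed.
static void SpinAroundX(GeometryModel3D model)
{
    var rotation = new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0);
    model.Transform = new RotateTransform3D(rotation);

    var spin = new DoubleAnimation(0, 360, TimeSpan.FromSeconds(5))
    {
        RepeatBehavior = RepeatBehavior.Forever
    };

    // Animating the Angle property rotates the model while the camera stays put.
    rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, spin);
}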
I have a problem that I have not been able to find a thread about.
I have a camera feed of a circular object that is being rotated, and I want to detect that rotation, i.e. whether the object is rotating clockwise or counterclockwise. In the end I want to, for example, draw a rectangle on the object and have that rectangle rotate in the same direction as the object in the camera feed.
The camera feed (shown in the GIF below) is processed with Otsu's algorithm, and the object is always somewhat deformed (i.e. it's not 100% round).
I have looked into various motion detection algorithms; by comparing two frames you can detect motion when the object moves across the frame, but those methods won't work for determining rotation.
If someone could be so kind as to help me or point me in the right direction, I would be very grateful. And as before, if I'm being unclear I will of course try to explain further. Thank you!
I want to move little spheres, each with the ManipulationHandler script attached, on a big sphere in my scene.
The movement of the little spheres needs to be restricted to the big sphere's "shell".
I have accomplished this behaviour (the link provides a GIF) without using the ManipulationHandler, by updating the X and Y of the little sphere in the Update function.
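For reference, a minimal sketch of the kind of per-frame constraint I mean (the field names are just placeholders):

using UnityEngine;

// Hedged sketch: keep a small sphere on the big sphere's shell by projecting
// its position back onto the surface every frame.
public class KeepOnShell : MonoBehaviour
{
    public Transform bigSphere;    // the sphere whose surface we stick to
    public float shellRadius = 1f; // radius of the big sphere

    void LateUpdate()
    {
        // Direction from the big sphere's centre to the small sphere.
        Vector3 dir = (transform.position - bigSphere.position).normalized;

        // Snap the small sphere back onto the shell.
        transform.position = bigSphere.position + dir * shellRadius;
    }
}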
Is there a way to achieve the same behaviour with the ManipulationHandler without rewriting it?
According to your description, the Solver system in MRTK should be able to implement this idea without you even writing any code. If you are not limited to using only ManipulationHandler for other reasons, I highly recommend that you use RadialView. You can refer to the following steps to implement this feature with a Solver:
Add SolverHandler and RadialView components to the small sphere.
In the SolverHandler component, select Custom Override in the Tracked Target Type field.
Set the Transform Override field to the large sphere.
In the RadialView component, set Max View Degrees to 360, and set Min Distance and Max Distance to the radius of the large sphere.
Disable Smoothing.
Now, the small sphere can rotate around the large sphere and keep a fixed distance from it.
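If you would rather set the same thing up from code, a hedged sketch might look like the following; the property names mirror the MRTK 2.x inspector fields mentioned above, so double-check them against the MRTK version you are actually using:

using UnityEngine;
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;

// Hedged sketch (MRTK 2.x assumed): configure SolverHandler + RadialView on the small sphere.
public class RadialViewSetup : MonoBehaviour
{
    public Transform bigSphere;          // the large sphere to orbit
    public float bigSphereRadius = 0.5f;

    void Start()
    {
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.CustomOverride;
        handler.TransformOverride = bigSphere;

        var view = gameObject.AddComponent<RadialView>();
        view.MaxViewDegrees = 360f;
        view.MinDistance = bigSphereRadius;
        view.MaxDistance = bigSphereRadius;
        view.Smoothing = false;
    }
}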
I am using the Farseer Physics library with MonoGame.
In my game I use compound polygon bodies, created with BodyFactory.CreateCompoundPolygon(...), but they have a problem.
Their origin is in the top-left corner rather than at the centroid, as with most Box2D objects. Since I need to rotate the bodies around a different pivot point than the top left, I found I can change the center of mass of a body (Body.LocalCenter). This works fine and dandy, and I can rotate the body using Body.ApplyAngularImpulse(...) or by changing Body.AngularVelocity, but here comes the problem:
Changing the rotation of the body using the methods I mentioned before works fine, and the pivot point used is the center of mass, but if I try to rotate the body by directly changing its rotation (Body.Rotation), it rotates around the top-left corner rather than the center of mass.
So in effect, Body.Rotation += 1; rotates around a different pivot point than with Body.AngularVelocity = 1;
You might be wondering why this is a problem, and why I don't just use the methods I mentioned before to rotate the body. The problem is that I need to be able to check the current rotation of the body, and I can't figure out a way to do this. I can't use Body.Rotation since it returns the rotation around the wrong point.
TL;DR: Body.Rotation doesn't return the rotation around the center of mass; how can I work around this?
You can have the pivot and center of mass correctly set if you create your compound polygon from a list of points relative to the center.
For a square with 20-unit sides:
instead of: [0,0] [20,0] [20,20] [0,20]
use: [-10,-10] [10,-10] [10,10] [-10,10]
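For illustration, a minimal sketch of that idea (assuming Farseer Physics 3.x; the exact CreateCompoundPolygon overloads vary between versions):

using System.Collections.Generic;
using FarseerPhysics.Common;
using FarseerPhysics.Dynamics;
using FarseerPhysics.Factories;
using Microsoft.Xna.Framework;

// Hedged sketch: build the compound body from vertices already centred on the origin,
// so the body origin, the rotation pivot and the centre of mass coincide.
static Body CreateCentredSquare(World world)
{
    var square = new Vertices(new[]
    {
        new Vector2(-10, -10),
        new Vector2( 10, -10),
        new Vector2( 10,  10),
        new Vector2(-10,  10),
    });

    var body = BodyFactory.CreateCompoundPolygon(world, new List<Vertices> { square }, 1f);
    body.BodyType = BodyType.Dynamic;

    // Body.Rotation, ApplyAngularImpulse and AngularVelocity now all pivot around the same point.
    return body;
}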
I'm asking a question that has been asked a million times before, but I still haven't found a good answer after going through these and also resorting to other sites:
How to rotate a graphic over global axes, and not to local axes?
rotating objects in opengl
How to rotate vertices exactly like with glRotatef() in OpenGL?
Rotate object about 3 axes in OpenGL
Rotate an object on its own axes in OpenGL
Is it possible to rotate an object around its own axis and not around the base coordinate's axis?
OpenGL Rotation - Local vs Global Axes
The task is relatively simple: I am making a sensor module with an accelerometer and gyro on board, and I need to display its rotation.
I have a 3D model of the object in STL format, I've made an STL parser and I can visualise the object perfectly well.
When I try to rotate it I get all kinds of strange results.
I am using SharpGL (the Scene component).
I tried to write an "effect" that (for now) is controlled with 3 separate trackbars, though eventually this will be driven by the data from the sensors.
I wanted to do it as an effect because (in theory) I could have more objects in the scene and I only need to rotate a specific one (and also for educational purposes as I am relatively new to OpenGL).
I tried to rotate the object using quaternions, multiplying them and generating a rotation matrix, then applying gl.MultMatrix(transformMatrix.AsColumnMajorArray).
I also tried gl.Rotate(angleX, 1, 0, 0) (and similar for Y and Z axes).
Depending on the order of quaternion multiplication or gl.Rotate(...) statements, I would get the object to rotate about the local axis (first), something weird (second), and the "world" third. For example, if I did it with quaternions qX*qY*qZ and got the rotation matrix from that, I would get (I think) the rotations about a local X, "arbitrary" Y, and the world Z axes.
The part I cannot understand is that when I apply gl.Rotate(dx, 1, 0, 0) (and others for Y and Z rotations) inside the trackbar's Scroll event, the entire scene rotates about the X, Y, and Z axes and in that order, and exactly as I expected the rotation to happen.
My only issue with this approach is that the entire scene rotates. So if I (hypothetically) wanted to draw a room around the object, the whole room would rotate too, which is not what I want.
I will post any code requested, but I feel there is too much and the aforementioned links have similar code to what I have.
At the end of the day, I want my object (Polygon in SharpGL terms) to rotate about its own axes (or about the "world" axes, but be consistent).
EDIT:
I've placed the project in my dropbox: https://goo.gl/1LzNhn
If someone wants to look at it, any help will be greatly appreciated.
In the MainForm, the TbRotXYZScroll function has trackbar code and the Rotation class is where the problem resides (or so I think).
Rotate X, Y, and Z by, say, 45 degrees each, in that order. All seems to work great. Now rotate X, Y, and Z further by another 30 degrees or so, and try to visualise which axis each rotation is happening about as you go...
X seems to always be the world axis and Z always the local axis, while Y is something in between (changing the order of rotation changes this).
Going into the TbRotXYZScroll function and changing method to 2 does what I want (except that it rotates the entire scene). I tried quaternions and they produce the exact same result, so I must be doing something wrong...
Changing it to 1 produces a similar result, but it applies the rotation to the polygons alone (not to everything being rendered, which includes the local coordinate axes too).
At the end of the day, I want my object (Polygon in SharpGL terms) to rotate about its own axes (or about the "world" axes, but be consistent).
I think this part of your question already sums up the situation. In order to perform a rotation around the object's own axis:
1. Translate/rotate your object so that the axis you want to rotate around overlaps one of the base axes (x, y, or z).
2. Perform your rotation.
3. Revert your object back to its original position (simply reverse the translations/rotations from step 1).
Note: while doing this, keep in mind that OpenGL applies transformations in reverse order.
Example :
glRotatef(90,1,0,0);
glTranslatef(-5,-10,5);
drawMyObject();
In this case, OpenGL will first translate your object and then rotate it, so keep that in mind while writing your transformations. Here is my answer that may give you an idea.
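In SharpGL terms, a hedged sketch of the same idea applied to just your sensor model (cx/cy/cz, the angle parameters, and DrawMyObject are placeholders for your own values and drawing code) could be:

using SharpGL;

// Hedged sketch: rotate only this object about its own centre, leaving the rest
// of the scene untouched, by wrapping the transform in PushMatrix/PopMatrix.
void DrawSensor(OpenGL gl, float cx, float cy, float cz,
                float angleX, float angleY, float angleZ)
{
    gl.PushMatrix();                 // isolate this object's transform

    gl.Translate(cx, cy, cz);        // move the pivot back to the object's position
    gl.Rotate(angleZ, 0f, 0f, 1f);   // remember: the transform written last
    gl.Rotate(angleY, 0f, 1f, 0f);   //   is applied to the vertices first
    gl.Rotate(angleX, 1f, 0f, 0f);
    gl.Translate(-cx, -cy, -cz);     // bring the object's centre to the origin first

    DrawMyObject(gl);                // draw only the sensor model here

    gl.PopMatrix();                  // the room / rest of the scene is unaffected
}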
How do I make it so that when I move the camera (with touch) it doesn't go beyond the scene borders?
How do I move the camera with touch so that it moves strictly between scene parts, like slides (one swipe moves to the first slide, another swipe to the next), without going beyond the borders of the scene?
The game I'm making has a camera like in the Disco Zoo game for Android (I'm a newbie).
Technically, the scene doesn't really have a border. You'll need to define the border somehow within your game, then constrain the camera's position with something like Mathf.Clamp(value, min, max) in an Update() function on the camera (see the sketch after the list below).
How can you define the border? It's up to you. Some ideas:
Hard-code the values in the script that clamps the camera. Probably the quickest option, but not flexible
Make public parameters on the camera script that let you set min and max positions in the X and Y directions
If you have a background image: use the extents of that to define your camera's extents
Create empty objects in your scene that define the minimum and maximum extents of the scene. Put your "min" object at the bottom-left corner and the "max" object at the top-right corner. Connect them to the camera script, then use those positions to check whether you've gone too far in any given direction. The main reason to do this is that it's visual.
(Slower, but dynamic) If everything in your scene uses physics, you could search the entire scene for every Collider component, then find the furthest extents in each direction. However, this is probably going to be pretty slow (so you'll only want to do it once), it'll take a while to code, and you'll probably want to tweak the boundaries by hand anyway.
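Whichever option you pick, the clamping itself is only a few lines. A minimal sketch (min and max here would be filled in from one of the options above; it runs in LateUpdate so it happens after the swipe movement for the frame):

using UnityEngine;

// Hedged sketch: keep the camera inside a rectangular border defined by min/max.
public class CameraBounds : MonoBehaviour
{
    public Vector2 min;   // e.g. bottom-left corner of the allowed area
    public Vector2 max;   // e.g. top-right corner of the allowed area

    void LateUpdate()
    {
        Vector3 p = transform.position;
        p.x = Mathf.Clamp(p.x, min.x, max.x);
        p.y = Mathf.Clamp(p.y, min.y, max.y);
        transform.position = p;   // the camera can never leave the border
    }
}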