I am trying to find a way to link the Kinect joints to the body parts of a character in Unity using Final IK.
Right now, all the joints are linked with the body parts correctly (I can move and the movement is duplicated to the character), but it seems like the scale between the joints (sent by the Kinect) is way smaller than the scale of the body parts. For example, when the Kinect detects my body, the character seems to collapse on itself. When I move, the movement is detected, but it is really small compared to the character's body.
Is there a way to sync those two scales?
Thank you very much!
I finally found a way of doing this and it works quite well. I'll try to keep this as simple as possible.
The main problem I had was to get a scale value from the Kinect joints that would be applicable to the size of my character.
In order to solve this, I added a calibration method that performs the following actions:
Asks the user to hold a T-pose.
Reads the length of the left arm of the user (Kinect).
Reads the length of the left arm of the character in game.
Compares both lengths and finds a relative scale (e.g. if the left arm length returned by the Kinect is 4 and the character has a left arm of 2, then the scale to apply is 4/2 = 2).
I had a small issue at first but ended up solving it. The Vector3 values returned by the Kinect in Unity are relative to the Kinect's origin (the camera). Therefore, I had to use a bit of linear algebra to solve this. Since both Vector3s (left hand and left shoulder) are relative to the Kinect's origin, I had to subtract them in order to find the left arm length.
From linear algebra, we know that if A + C = B, then B - A = C. In our case, LeftHand - LeftShoulder = LeftArm. By applying this principle to the Vector3s received by the Kinect, we now have the vector that represents the left arm. All we need to do is read its "magnitude" property and we have its length.
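For illustration, a minimal sketch of this calibration step in Unity C# might look like the following (the parameter names are just placeholders, and it assumes the Kinect joint positions arrive as Vector3s relative to the Kinect origin):

// Assumes UnityEngine (Vector3, Transform). All names are placeholders.
float GetCalibrationScale(Vector3 kinectLeftShoulder, Vector3 kinectLeftHand,
                          Transform characterLeftShoulder, Transform characterLeftHand)
{
    // B - A = C: hand minus shoulder gives the arm vector, its magnitude the length.
    float kinectArmLength = (kinectLeftHand - kinectLeftShoulder).magnitude;
    float characterArmLength =
        (characterLeftHand.position - characterLeftShoulder.position).magnitude;

    // Relative scale between the Kinect skeleton and the character (e.g. 4 / 2 = 2).
    return kinectArmLength / characterArmLength;
}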
That pretty much sums it up.
In Unity, I create a cube with scale 1,1,1. Position is 0,1,0.
Then I place it above a plane whose scale is 15,1,5000. Position is 0,0,0.
I check whether the cube is below 1 on the Y-axis; to me, that means the cube has fallen. I can control this cube by going left or right. If I go left, there's no issue. If I go right, my position becomes 0.9999998~. This makes my falling check become true even though the cube is still on the plane. Somehow, the cube seems not to be a perfect cube. I hope someone can enlighten me on why this is happening. Thanks!
This may not be the answer you want, but, put simply, computer arithmetic is finite (search for "floating-point arithmetic"). So the "perfect cube" that you're looking for does not exist in the finite representation a machine can work with.
Moreover, Unity has its own physics engine that (like all physics engines) approximates real-world calculations during each operation (translation, rotation, scaling).
The only way in which you can overcome the problem is by doing comparisons not with exact values (0, 1) but with ranges.
To maintain "order" in the coordinate system of your scene you could also, at fixed intervals, "adjust" your values, for example by manually setting a coordinate to 1 if it is between 0.95 and 1.05 (adjust the values to your world's coordinate system, of course).
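For example, a range-based check in a Unity script might look like this (just a sketch; the 0.05 tolerance is an arbitrary example value):

// Assumes UnityEngine. Treat the cube as fallen only when it is clearly below
// its rest height, instead of comparing against exactly 1.
bool HasFallen(Transform cube)
{
    const float tolerance = 0.05f;   // arbitrary example value
    return cube.position.y < 1f - tolerance;
}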
Related note: in your comment you say "But my point is that why it seems like the cube is not perfect 1x1x1. Somehow it's like 1x1x0.9999998". The fact is that an engine like Unity does not keep an object's size in memory, but its vertices' coordinates. It feels like the object's dimensions have changed due to the translation, but strictly speaking that is not true: what you see is just a finite approximation of the vertices' X, Y, Z values.
I have a camera that needs to orbit locally around an object. This object has an arbitrary rotation, described by a normal vector. Imagine a spherical planet, with a camera looking down at a certain triangle on that planet.
My current implementation is to use the classic vector crossing method to generate a rotation matrix from the triangle's normal, then use that matrix as the basis for the standard orbit camera. This works fine near the equator of the planet, but once it gets near the poles, it starts blowing up, with the camera behaving increasingly erratically the closer that it gets to the very center of the pole.
I've determined that this is due to the first vector cross, as the two vectors are close to one another in that case; I'm not sure what the technical name for the phenomenon is. If the first vector is (0, 1, 0), the craziness happens when the normal is close to (0, 1, 0) or (0, -1, 0).
I've found quite a few descriptions of this problem, but no working solutions. The closest I've come was here: http://xboxforums.create.msdn.com/forums/p/13278/13278.aspx It mentions that to handle the 'singularity', you should use a different vector when it is detected. I can easily determine when the camera is on a planet face that will cause this to happen (as my planet sphere is generated from 6 quadtrees projected to spherical coordinates), but there is a very noticeable snap when I switch to a new vector.
Here's the current code:
Vector3 triNormal;                 // the current normal of the target vertex
Vector3 origin = Vector3.Forward;

Matrix orientation = Matrix.Identity;
orientation.Forward = origin;
orientation.Up = triNormal;
orientation.Right = Vector3.Cross(orientation.Up, orientation.Forward);
orientation.Right.Normalize();
orientation.Forward = Vector3.Cross(orientation.Right, orientation.Up);
orientation.Forward.Normalize();
I've experimented with detecting when triNormal is on one of the pole faces, and setting 'origin' to something else such as Right. The camera then behaves properly once it is on the face, but is immediately snapped to a new rotation as it crosses over. This makes sense, as its reference vector has just changed, but needs to be eliminated for a smooth user experience. I tried figuring out how to offset the camera's yaw for the orbit camera to counteract the new coordinate system, but it doesn't seem to be a constant value, depending on where on the sphere the camera is currently aiming. I'm not sure how I could calculate what the difference is.
Also note that as it's in XNA and C#, I'm using a right-hand coordinate system.
I don't understand why you do this:
orientation.Forward = Vector3.Cross(orientation.Right, orientation.Up);
orientation.Forward.Normalize();
when you've already used the previous orientation.Forward to get orientation.Right.
(If you are "crossing" normalized vectors, I don't think you'll need to normalize them.)
Anyway, if triNormal is the current normal of the target vertex, and your camera is looking down at it, I think you should have:
orientation.Forward = -triNormal
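A minimal sketch of that idea, assuming an XNA-style right-handed basis; the fallback reference vector and the 0.99f threshold are assumptions added here only to avoid the degenerate cross product near the poles, not part of the original answer:

// Assumes Microsoft.Xna.Framework (Vector3, Matrix) and System (Math).
Matrix BuildOrientation(Vector3 triNormal)
{
    Vector3 forward = -triNormal;                    // camera looks down at the vertex
    Vector3 referenceUp = Vector3.Up;
    if (Math.Abs(Vector3.Dot(forward, referenceUp)) > 0.99f)
        referenceUp = Vector3.Right;                 // swap the reference near the poles

    Vector3 right = Vector3.Normalize(Vector3.Cross(forward, referenceUp));
    Vector3 up = Vector3.Cross(right, forward);      // unit length if triNormal is normalized

    Matrix orientation = Matrix.Identity;
    orientation.Forward = forward;
    orientation.Up = up;
    orientation.Right = right;
    return orientation;
}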
I've got two polygons defined as lists of Vectors, and I've managed to write routines to transform and intersect them (seen below, Frame 1). Using line intersection I can figure out whether they collide, and I have written a working Collide() function.
This is to be used in a variable-time-step game, so (as shown below), while the right polygon is not colliding in Frame 1, it's perfectly normal for the polygons to be right inside each other by Frame 2, with the right polygon having moved to the left.
My question is, what is the best way to figure out the moment of intersection? In the example, let's assume in Frame 1 the right polygon is at X = 300; in Frame 2 it moved -100 and is now at 200. That's all I know by the time Frame 2 comes about: it was at 300, now it's at 200. What I want to know is when it actually collided, and at what X value; here it was probably about 250.
I'm preferably looking for a C# source code solution to this problem.
Maybe there's a better way of approaching this for games?
I would use the separating axis theorem, as outlined here:
Metanet tutorial
Wikipedia
Then I would sweep test or use multisampling if needed.
GMan here on StackOverflow wrote a sample implementation over at gpwiki.org.
This may all be overkill for your use-case, but it handles polygons of any order. Of course, for simple bounding boxes it can be done much more efficiently through other means.
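For reference, here is a minimal, unoptimized sketch of the separating-axis overlap test itself for convex polygons; it is not the linked implementation, just the idea, and it assumes XNA-style Vector2 lists:

// Assumes Microsoft.Xna.Framework.Vector2, System.Collections.Generic, and
// vertices listed in order around each convex polygon.
static bool Overlap(List<Vector2> a, List<Vector2> b)
{
    // No separating axis among either polygon's edge normals means they overlap.
    return !HasSeparatingAxis(a, b) && !HasSeparatingAxis(b, a);
}

static bool HasSeparatingAxis(List<Vector2> a, List<Vector2> b)
{
    for (int i = 0; i < a.Count; i++)
    {
        Vector2 edge = a[(i + 1) % a.Count] - a[i];
        Vector2 axis = new Vector2(-edge.Y, edge.X);   // perpendicular to the edge

        float minA, maxA, minB, maxB;
        Project(a, axis, out minA, out maxA);
        Project(b, axis, out minB, out maxB);

        if (maxA < minB || maxB < minA)                // projections don't overlap
            return true;                               // separating axis found
    }
    return false;
}

static void Project(List<Vector2> poly, Vector2 axis, out float min, out float max)
{
    min = max = Vector2.Dot(poly[0], axis);
    foreach (Vector2 v in poly)
    {
        float d = Vector2.Dot(v, axis);
        if (d < min) min = d;
        if (d > max) max = d;
    }
}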
I'm no mathematician either, but one possible though crude solution would be to run a mini simulation.
Let us call the moving polygon M and the stationary polygon S (though there is no requirement for S to actually be stationary, the approach should work just the same regardless). Let us also call the two frames you have F1 for the earlier and F2 for the later, as per your diagram.
If you were to translate polygon M back towards its position in F1 in very small increments until such time that they are no longer intersecting, then you would have a location for M at which it 'just' intersects, i.e. the previous location before they stop intersecting in this simulation. The intersection in this 'just' intersecting location should be very small — small enough that you could treat it as a point. Let us call this polygon of intersection I.
To treat I as a point you could choose the vertex of it that is nearest the centre point of M in F1: that vertex has the best chance of being outside of S at time of collision. (There are lots of other possibilities for interpreting I as a point that you could experiment with too that may have better results.)
Obviously this approach has some drawbacks:
The simulation will be slower for greater speeds of M, as the distance between its locations in F1 and F2 will be greater and more simulation steps will need to be run. (You could address this by having a fixed number of simulation cycles irrespective of the speed of M, but that would mean the accuracy of the result would differ between faster- and slower-moving bodies.)
The 'step' size in the simulation will have to be sufficiently small to get the accuracy you require but smaller step sizes will obviously have a larger calculation cost.
Personally, without the necessary mathematical intuition, I would go with this simple approach first and try to find a mathematical solution as an optimization later.
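A minimal sketch of that simulation, assuming your existing Collide() test plus a hypothetical Translate() helper that returns a copy of a polygon offset by a vector; the names and the step count are illustrative only:

// Steps M back from its Frame 2 position towards its Frame 1 position until the
// polygons no longer intersect, and returns the last ('just' intersecting) position.
Vector2 FindJustIntersectingPosition(List<Vector2> m, List<Vector2> s,
                                     Vector2 frame1Pos, Vector2 frame2Pos, int steps)
{
    Vector2 step = (frame1Pos - frame2Pos) / steps;   // small increments back towards Frame 1
    Vector2 offset = Vector2.Zero;

    for (int i = 0; i < steps; i++)
    {
        Vector2 next = offset + step;
        if (!Collide(Translate(m, next), s))          // next step no longer intersects
            break;
        offset = next;                                // keep the last intersecting offset
    }
    return frame2Pos + offset;
}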
If you have the ability to determine whether the two polygons overlap, one idea might be to use a modified binary search to detect where the two hit. Start by subdividing the time interval in half and seeing if the two polygons intersected at the midpoint. If so, recursively search the first half of the range; if not, search the second half. If you specify some tolerance level at which you no longer care about small distances (for example, at the level of a pixel), then the runtime of this approach is O(log(D / K)), where D is the distance between the polygons and K is the cutoff threshold. If you know what point is going to ultimately enter the second polygon, you should be able to detect the collision very quickly this way.
Hope this helps!
For a rather generic solution, and assuming ...
no polygons are intersecting at time = 0
at least one polygon is intersecting another polygon at time = t
and you're happy to use a C# clipping library (eg Clipper)
then use a binary approach to deriving the time of intersection by...
double tInterval = t;
double tCurrent = 0;
int direction = +1;

while (tInterval > MinInterval)
{
    tInterval = tInterval / 2;
    tCurrent += (tInterval * direction);
    MovePolygons(tCurrent);
    if (PolygonsIntersect)
        direction = -1;   // already intersecting: step back in time
    else
        direction = +1;   // not intersecting yet: step forward in time
}
Well, you may see that it's always a point of one of the polygons that hits the side of the other first (or another point, but that's almost the same thing after all). A possible solution would be to calculate the distance of the points from the other polygon's edges along the direction of movement. But I think this would end up being rather slow.
I guess normally the distances between frames are so small that it's not important to know exactly where it hit first; some small intersections will not be visible, and after all the things will rebound or explode anyway, don't they? :)
I'm trying to understand and use OpenCV. I wanted to know if it is possible to find and measure the angle of rotation between two frames.
Let me explain: the camera is fixed, and the frames can only rotate around the center; they don't translate. For now I manage to rotate manually, and I would like to be able to compare frames and return the angle. For instance:
double getRotation(Image img1, Image img2) {
    // Compare the frames
    // Return the value
}
and then I rotate following that angle.
If you're able to detect static objects, e.g. the background, in the frames, then you may find points called good features to track (cvGoodFeaturesToTrack) on the background and track these points using optical flow (cvCalcOpticalFlowPyrLK).
If rotation is only in the 'xy' plane, you're able to detect it using cvGetAffineTransform.
Since only rotation is allowed (no translation and no scaling), it's not difficult to determine the angle of rotation from the transformation matrix obtained by cvGetAffineTransform. That matrix looks like (see Wikipedia):

R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

where \theta is the rotation angle.
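For instance, in C# (assuming you have copied the 2x3 affine matrix returned by cvGetAffineTransform into a double[,]), the angle can be recovered with atan2, which also handles the quadrant correctly:

// For a pure rotation, m[0,0] = cos(theta) and m[1,0] = sin(theta).
double GetRotationAngleDegrees(double[,] affine)
{
    double radians = Math.Atan2(affine[1, 0], affine[0, 0]);
    return radians * 180.0 / Math.PI;
}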
This might be very tricky; a simpler solution might be to find the Hough lines of the frames. Of course, you would need to determine where the best and most stable lines are, which you can track between the two frames; once that is available, you can then find the angle between the two frames. What Andrey has suggested for finding the angle should be usable as well.
I am working on a project with a robot that has to find its way to an object and avoid some obstacles when going to that object it has to pick up.
The problem is that the robot and the object it needs to pick up are both one pixel wide in the pathfinder, while in reality they are a lot bigger. The A* pathfinder often chooses to place the route along the edges of the obstacles, sometimes making the robot collide with them, which we do not want.
I have tried to add some more non-walkable fields around the obstacles, but it does not always work out very well. The robot still collides with the obstacles, and adding too many points where it is not allowed to walk results in there being no path it can run on.
Do you have any suggestions on what to do about this problem?
Edit:
So I did as Justin L suggested, adding a lot of cost around the obstacles, which results in the following:
Grid with no path http://sogaard.us/uploades/1_grid_no_path.png
Here you can see the cost around the obstacles. Initially the middle two obstacles should look just like the ones in the corners, but after running our pathfinder it seems like the costs are overridden:
Grid with path http://sogaard.us/uploades/1_map_grid.png
Picture that shows things found on the picture http://sogaard.us/uploades/2_complete_map.png
The picture above shows what was found in the image.
Path found http://sogaard.us/uploades/3_path.png
This is the path that was found; as with our problem before, it is hugging the obstacle.
The grid from before with the path on http://sogaard.us/uploades/4_mg_path.png
And another picture of the cost map with the path overlaid.
So what I find strange is why the A* pathfinder is overriding these field costs, which are VERY high.
Would it be when it evaluates the nodes inside the open list against the current field, to see whether the current field's path is shorter than the one inside the open list?
And here is the code I am using for the pathfinder:
Pathfinder.cs: http://pastebin.org/343774
Field.cs and Grid.cs: http://pastebin.org/343775
Have you considered adding a gradient cost to pixels near objects?
Perhaps one as simple as a linear gradient:
C = -mx + b
Where x is the distance to the nearest object, b is the cost right outside the boundary, and m is the rate at which the cost dies off. Of course, if C is negative, it should be set to 0.
Perhaps a simple hyperbolic decay
C = b/x
where b is the desired cost right outside the boundary, again. Have a cut-off to 0 once it reaches a certain low point.
Alternatively, you could use exponential decay
C = k e^(-hx)
Where k is a scaling constant, and h is the rate of decay. Again, having a cut-off is smart.
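A minimal sketch of those three cost functions, where x is the distance to the nearest obstacle; the constants and cut-off values are up to you:

// Linear: C = -m*x + b, clamped at 0.
static float LinearCost(float x, float m, float b)
{
    return Math.Max(-m * x + b, 0f);
}

// Hyperbolic: C = b / x, cut off once it falls below 'cutoff'.
static float HyperbolicCost(float x, float b, float cutoff)
{
    float c = b / x;
    return c < cutoff ? 0f : c;
}

// Exponential: C = k * e^(-h*x), cut off once it falls below 'cutoff'.
static float ExponentialCost(float x, float k, float h, float cutoff)
{
    float c = k * (float)Math.Exp(-h * x);
    return c < cutoff ? 0f : c;
}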
Second suggestion
I've never applied A* to a pixel-mapped map; nearly always, tiles.
You could try massively decreasing the "resolution" of your tiles? Maybe one tile per ten-by-ten or twenty-by-twenty set of pixels; the tile's cost being the highest cost of a pixel in the tile.
Also, you could try de-valuing the shortest-distance heuristic you are using for A*.
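As a rough illustration of the resolution-reduction idea (the array names and tile size here are assumptions, not your actual data structures):

// Groups the pixel grid into tileSize x tileSize tiles and gives each tile the
// highest pixel cost it contains.
float[,] BuildTileCosts(float[,] pixelCosts, int tileSize)
{
    int tilesX = pixelCosts.GetLength(0) / tileSize;
    int tilesY = pixelCosts.GetLength(1) / tileSize;
    float[,] tileCosts = new float[tilesX, tilesY];

    for (int tx = 0; tx < tilesX; tx++)
        for (int ty = 0; ty < tilesY; ty++)
            for (int px = 0; px < tileSize; px++)
                for (int py = 0; py < tileSize; py++)
                    tileCosts[tx, ty] = Math.Max(tileCosts[tx, ty],
                        pixelCosts[tx * tileSize + px, ty * tileSize + py]);

    return tileCosts;
}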
You might try to enlarge the obstacles, taking the size of the robot into account. You could round the corners of the obstacles to address the blocking problem. Any gaps that get filled in are too small for the robot to squeeze through anyway.
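A simple sketch of that enlargement on a boolean walkability grid, treating the robot's footprint as a square of the given radius in cells (the names here are illustrative only):

// Marks every cell within robotRadius of an obstacle as non-walkable, so the
// one-pixel-wide robot can be planned as a point.
bool[,] InflateObstacles(bool[,] walkable, int robotRadius)
{
    int w = walkable.GetLength(0), h = walkable.GetLength(1);
    bool[,] inflated = (bool[,])walkable.Clone();

    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            if (!walkable[x, y])                          // obstacle cell
                for (int dx = -robotRadius; dx <= robotRadius; dx++)
                    for (int dy = -robotRadius; dy <= robotRadius; dy++)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h)
                            inflated[nx, ny] = false;     // mark as non-walkable
                    }

    return inflated;
}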
I've built one such physical robot. My solution was to move one step backward whenever there is a left or right turn to make.
The red line is how I understand your problem. The black line is what I did to resolve the issue. The robot moves straight backward for a step and then turns right.