I'm using the Kinect SDK with C# to develop a game.
First, I want to take some measurements.
I need help finding code that can compute the angle of rotation of the right shoulder around the z-axis.
You can use the BoneOrientation class of the SDK and strip out the rotations around y and x, leaving only the rotation around z. Pick the right shoulder to get its orientation, then ignore the other entries of the matrix so that it only represents the z rotation. If you do not know how to do this, please reply. I assume that you have already gotten your feet wet with this.
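For example, here is a minimal sketch, assuming SDK 1.7, a tracked Skeleton named skeleton, and that the absolute rotation is decomposed in Z-Y-X Euler order (adjust if your convention differs):

```csharp
using System;
using Microsoft.Kinect;

// a minimal sketch, assuming SDK 1.7 and a tracked Skeleton named 'skeleton'
BoneOrientation shoulder = skeleton.BoneOrientations[JointType.ShoulderRight];
Matrix4 m = shoulder.AbsoluteRotation.Matrix;

// for R = Rz * Ry * Rx, the z angle can be recovered from the first column:
// M11 = cos(z)cos(y) and M21 = sin(z)cos(y), so atan2(M21, M11) = z
double angleZDegrees = Math.Atan2(m.M21, m.M11) * 180.0 / Math.PI;
```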
cheers.
I'm currently working on a game akin to Minecraft, i.e. a game with procedurally generated terrain. I'm doing this by using Perlin noise and determining which ground to place where based on some mapping. Now I want to place objects on top of said terrain, and my goal is to have more of these objects near the center of the map than away from it, so a lower distribution of objects the further we get from the center.
I'm not sure how to achieve this. I've found several mentions of Poisson sampling and of sampling a circle, but these only seem to work in a finite, predefined space, which wouldn't work in my case as the space and distance keep growing. In addition, I would want to be able to input a coordinate and simply receive yes or no as an answer, telling me whether an object is present at that coordinate (see the sketch below).
So I just wanted to ask if someone could nudge me in the right direction by providing some concepts I could look up?
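To make that concrete, here is a hypothetical sketch of the kind of deterministic check I have in mind; the hash function and falloff curve are just placeholders:

```csharp
using System;

// hypothetical sketch: deterministic, works on an unbounded grid, and the
// acceptance probability falls off with distance from the center (0, 0)
bool ObjectAt(int x, int z)
{
    double distance = Math.Sqrt((double)x * x + (double)z * z);
    double density = 1.0 / (1.0 + distance / 100.0); // placeholder falloff
    return Hash(x, z) < density;                     // same answer every call
}

// a simple coordinate-seeded hash returning a value in [0, 1);
// any decent integer hash would do here
double Hash(int x, int z)
{
    unchecked
    {
        uint h = (uint)x * 374761393u + (uint)z * 668265263u; // large primes
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h ^ (h >> 16)) / (double)uint.MaxValue;
    }
}
```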
I am trying to get the arm in the Spine animation to follow the mouse so it looks good when I shoot. Here is an example of what I want it to do, but using Spine: https://www.youtube.com/watch?v=eofB2Z4-00w. I have been looking through the code trying to find a specific method to build this on, but I have not yet figured out a way!
Spine (http://esotericsoftware.com/) is an animation tool that I am using in Unity for a game that my team and I are developing. I am currently working on the player controller and have been stuck on this part of the project for a week or so. I have already set up a fire point using the BoneFollower.cs script from the spine-unity runtime, so the shot comes from a specific position (bone) that I set it to. I just need a way to make the arm, from the shoulder, follow the exact position of the mouse so it works seamlessly. If there is anyone out there with a background in using Spine with Unity, it would be awesome to get some help from you! Documentation furthering my knowledge on how to do this would also be appreciated :) Thank you in advance if you help me!!! :D
You need to construct a vector that has its origin at the shoulder position and its end at the mouse position, and the same for the arm (origin at the shoulder, end at the hand). Determine the angle between the two vectors and perform the necessary rotation on the arm to close the gap between the two vectors.
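For instance, here is a minimal Unity-side sketch of that idea, assuming a 2D scene where shoulder, hand, and armRoot are hypothetical Transforms that track the corresponding Spine bones:

```csharp
using UnityEngine;

// a minimal sketch; 'shoulder', 'hand' and 'armRoot' are hypothetical
// Transforms that track the corresponding Spine bones
Vector2 shoulderPos = shoulder.position;
Vector2 mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);

Vector2 toMouse = mousePos - shoulderPos;               // shoulder -> mouse
Vector2 toHand = (Vector2)hand.position - shoulderPos;  // shoulder -> hand

// signed angle between the two vectors, then rotate the arm to close the gap
float delta = Vector2.SignedAngle(toHand, toMouse);     // Unity 2017.1+
armRoot.Rotate(0f, 0f, delta);
```

Running this every frame (e.g. in Update) keeps the arm locked onto the cursor.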
How can I tell whether a person is facing a Kinect or showing it his/her back?
I am using the Microsoft Kinect SDK v1.7.
The Microsoft Kinect SDK does not track the backs of users. It cannot track a full body rotation; it only tracks a more or less forward-facing user.
Now granted, it might get "confused" and still track your skeleton when you are facing it with your back, but even then the skeleton will be aligned as if you were facing forward. If it does track you, you could potentially apply a heuristic such as "are my wrists further from the sensor than my hips?" or "how is the shoulder-elbow-wrist angle oriented?", but it would all be inaccurate at best.
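A rough sketch of the first heuristic, assuming SDK 1.7 and that the relevant joints are tracked:

```csharp
using Microsoft.Kinect;

// the "wrists further than hips" heuristic; inaccurate at best, as noted
bool LikelyFacingAway(Skeleton skeleton)
{
    float wristZ = (skeleton.Joints[JointType.WristLeft].Position.Z +
                    skeleton.Joints[JointType.WristRight].Position.Z) / 2f;
    float hipZ = skeleton.Joints[JointType.HipCenter].Position.Z;

    // relaxed arms tend to hang slightly in front of the body, so wrists
    // clearly behind the hips (further from the sensor) hint at a turned back
    return wristZ > hipZ + 0.05f; // 5 cm margin, an illustrative threshold
}
```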
When the user stands still without any movement, it is impossible. But if the user is walking, the scenario can be handled like this:
According to the skeleton coordinates in SDK 1.7, the Z coordinate is the distance from the user to the Kinect. So when the user walks toward the camera, Z decreases, and when the user moves away from the camera, Z increases.
This scenario is only useful when your user is walking normally.
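A minimal sketch of that idea, comparing the hip-center Z between frames:

```csharp
using System;
using Microsoft.Kinect;

// decreasing Z suggests walking toward the sensor (likely facing it);
// increasing Z suggests walking away (likely showing their back)
private float? previousZ;

void OnSkeletonFrame(Skeleton skeleton)
{
    float z = skeleton.Joints[JointType.HipCenter].Position.Z;
    if (previousZ.HasValue)
    {
        if (z < previousZ.Value - 0.02f)      // 2 cm margin against jitter
            Console.WriteLine("Walking toward the Kinect");
        else if (z > previousZ.Value + 0.02f)
            Console.WriteLine("Walking away from the Kinect");
    }
    previousZ = z;
}
```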
I'm currently detecting the full skeleton in a WPF application, and I want to know how to detect the fingers so that they appear with the skeleton. I'm using the Microsoft Kinect for Windows SDK v1.5.
Many thanks
The Kinect unfortunately is not sensitive enough to recognize fingers, so the library will not provide that as part of the skeleton. Maybe the Kinect 2.0, rumored to come out with the Xbox 720, will be able to provide that level of detail.
Candescent NUI might be what you're looking for. As OpenUserX03 said, however, the Kinect isn't ideal for this task. Perhaps you should have a look at the upcoming Leap technology, which specializes in finger detection.
The cameras on the Kinect are not precise enough to do joint tracking of the hands at that level of detail. Tracking the individual fingers is possible but won't be very reliable. To represent a player's hand in the skeleton, you can instead check whether the hand is open or closed. One way to do that is to do pixel checks in an area surrounding the hand: with some tuning, you can calculate how much of that area belongs to the hand (using the depth and color streams) and how much does not. For example: if 40% of that area is at the same depth as the hand joint, the hand is closed in a fist; if 70% of that area is at the same depth as the hand joint, the hand is open. You could then use the angle of the elbow and wrist joints to render a closed or open hand at that angle on the skeleton.
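A rough sketch of that pixel-count idea, assuming a depth frame in millimetres with the hand joint already mapped to depth-pixel coordinates; the window size, tolerance, and thresholds are illustrative, not tuned values:

```csharp
using System;

// counts how much of a window around the hand joint sits at the hand's depth
bool IsHandOpen(short[] depthMm, int width, int height,
                int handX, int handY, int handDepthMm)
{
    const int half = 30;       // half-size of the window around the hand joint
    const int tolerance = 50;  // depth tolerance in mm for "same depth as hand"
    int total = 0, near = 0;

    for (int y = Math.Max(0, handY - half); y <= Math.Min(height - 1, handY + half); y++)
        for (int x = Math.Max(0, handX - half); x <= Math.Min(width - 1, handX + half); x++)
        {
            total++;
            if (Math.Abs(depthMm[y * width + x] - handDepthMm) < tolerance)
                near++;
        }

    // ~40% coverage suggests a fist, ~70% an open palm (per the tuning above)
    return total > 0 && (double)near / total > 0.70;
}
```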
I have 4 points on the same plane (a flat square object) detected in the camera and I am trying to work out the pose of this square relative to the camera.
I am using the latest version of EmguCV ( http://www.emgu.com/wiki/index.php/Main_Page ) which is a C# wrapper for OpenCV.
I have seen POSIT ( http://opencv.willowgarage.com/wiki/Posit ) but this will not work for coplanar points. I was wondering if there is anything that can solve coplanar pose estimation in OpenCV.
I have also seen solvePnP (http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html#cv-solvepnp), which I believe will do what I want, but I cannot seem to find this functionality in EmguCV.
Does anyone know how to solve this using EmguCV?
Although SolvePnP is not available in Emgu, you can still compute a homography once you have at least four point correspondences on a plane (which you have). Refer to the documentation for CameraCalibration.FindHomography in case you're unsure. Once you have the homography, you can decompose it into a rotation and translation, and hence the camera pose. Take a look at this article.
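A hedged sketch of that route, assuming Emgu CV 2.x's CameraCalibration.FindHomography and a standard pinhole intrinsic matrix with known fx, fy, cx, cy; the plane points treat the square's corners as lying at z = 0:

```csharp
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

// 'imagePts' holds the 4 detected corners in pixel coordinates
PointF[] planePts = { new PointF(0, 0), new PointF(1, 0),
                      new PointF(1, 1), new PointF(0, 1) };
HomographyMatrix H = CameraCalibration.FindHomography(
    planePts, imagePts, HOMOGRAPHY_METHOD.DEFAULT, 1.0);

// H ~ K * [r1 r2 t], so B = K^-1 * H gives the pose up to scale;
// for a pinhole K, K^-1 can be applied analytically
double[,] h = H.Data;
double[] b = new double[9];
for (int c = 0; c < 3; c++)
{
    b[c]     = (h[0, c] - cx * h[2, c]) / fx; // row 0 of K^-1 * H
    b[3 + c] = (h[1, c] - cy * h[2, c]) / fy; // row 1
    b[6 + c] = h[2, c];                       // row 2
}

// normalise so the first rotation column has unit length
double lambda = 1.0 / Math.Sqrt(b[0] * b[0] + b[3] * b[3] + b[6] * b[6]);
double[] r1 = { lambda * b[0], lambda * b[3], lambda * b[6] };
double[] r2 = { lambda * b[1], lambda * b[4], lambda * b[7] };
double[] t  = { lambda * b[2], lambda * b[5], lambda * b[8] };
// r3 = r1 x r2 completes the rotation; [r1 r2 r3 | t] is the camera pose
```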
Emgu.CV::CameraCalibration.SolvePnP(Emgu.CV.Structure.MCvPoint3D32f[], System.Drawing.PointF[], Emgu.CV.IntrinsicCameraParameters, Emgu.CV.CvEnum.SolvePnpMethod)
Upgrade and install the latest NuGet package.
Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of 3D object points and their correspondent 2D projections must be specified. This function also minimizes back-projection error.
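A minimal usage sketch based on that signature; the corner values are illustrative placeholders, and 'intrinsics' stands for your calibrated IntrinsicCameraParameters:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// the object points describe the flat square in its own plane (z = 0)
MCvPoint3D32f[] objectPts =
{
    new MCvPoint3D32f(0, 0, 0), new MCvPoint3D32f(1, 0, 0),
    new MCvPoint3D32f(1, 1, 0), new MCvPoint3D32f(0, 1, 0)
};

// illustrative placeholders for the 4 detected corners
PointF[] imagePts =
{
    new PointF(320, 200), new PointF(420, 205),
    new PointF(415, 305), new PointF(325, 300)
};

// returns the rotation and translation of the square relative to the camera
var pose = CameraCalibration.SolvePnP(
    objectPts, imagePts, intrinsics, SolvePnpMethod.Iterative);
```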