I'm currently detecting the full skeleton in a WPF application, and I want to know how to detect fingers so they appear with the skeleton. I'm using the Microsoft Kinect for Windows SDK v1.5.
Many thanks
Unfortunately, the Kinect is not sensitive enough to recognize fingers, so the library does not provide them as part of the skeleton. Maybe the Kinect 2.0 rumored to come out with the Xbox 720 will be able to provide that level of detail.
Candescent NUI might be what you're looking for. As OpenUserX03 said, however, the Kinect isn't ideal for this task. Perhaps you should have a look at the upcoming LEAP technology, which specializes in finger detection.
The cameras on the Kinect are not meant to do joint tracking of the hands at that level of detail. Tracking individual fingers is possible but won't be very reliable. To represent a player's hand in the skeleton, you can check whether the hand is open or closed. One way to do this is to do pixel checks in an area surrounding the hand. With some tuning, you can calculate how much of that area is hand (using the depth and color streams) and how much is not. For example: if 40% of that area is at the same depth as the hand joint, the hand is closed in a fist; if 70% of that area is at the same depth, the hand is open. You could then use the angles of the elbow and wrist joints to render a closed or open hand at that angle on the skeleton.
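Untested, but that pixel check could look something like this, assuming you already have the depth frame as a millimetre array and the hand joint mapped into depth-image coordinates (all names and thresholds here are illustrative and need tuning):

```csharp
using System;

static class HandState
{
    // Heuristic from above: count how much of a window around the hand joint
    // sits at (roughly) the same depth as the hand itself. Assumes depthMm is
    // the depth frame in millimetres and (handX, handY, handDepthMm) is the
    // hand joint already mapped into depth-image coordinates.
    public static bool IsHandOpen(ushort[] depthMm, int width, int height,
                                  int handX, int handY, int handDepthMm)
    {
        const int windowRadius = 40; // pixels scanned around the joint
        const int toleranceMm = 50;  // "same depth as the hand" band

        int handPixels = 0, totalPixels = 0;
        for (int y = Math.Max(0, handY - windowRadius); y < Math.Min(height, handY + windowRadius); y++)
            for (int x = Math.Max(0, handX - windowRadius); x < Math.Min(width, handX + windowRadius); x++)
            {
                totalPixels++;
                if (Math.Abs(depthMm[y * width + x] - handDepthMm) <= toleranceMm)
                    handPixels++;
            }

        // An open hand covers more of the window than a fist:
        // ~40% -> fist, ~70% -> open, per the example above.
        return (double)handPixels / totalPixels >= 0.7;
    }
}
```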
I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing, but so far I have been unable to implement its solutions in my project.
When moving my hands across the camera's full range, for example left to right or in any other direction, the motion translates to only a portion of the available screen space; in other words, a user can never reach the edges of the screen with their hands. From my understanding, the tracking info provided in millimetres by the camera is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article: "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more effect a small physical movement will have." This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalisation of coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in an updated position?
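For illustration, this is roughly the mapping I have in mind from my reading of the article (a minimal sketch assuming the Leap C# API generation that still exposes the InteractionBox; the component name and the scale value are mine, not from the official prefabs):

```csharp
using UnityEngine;
using Leap;

// Sketch: normalize the palm position with the InteractionBox, then scale it
// into Unity world units. The scale factor plays the role of the
// "pixels per millimetre" choice mentioned in the article.
public class PalmMapper : MonoBehaviour
{
    public Transform handAnchor;  // object to move along with the palm
    public float scale = 10f;     // bigger -> small hand movements cover more space
    private Controller controller = new Controller();

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) return;

        Hand hand = frame.Hands[0];
        // Map the palm into [0, 1]^3 relative to the interaction box...
        Vector n = frame.InteractionBox.NormalizePoint(hand.PalmPosition, true);
        // ...then re-centre and scale into Unity world units.
        handAnchor.position = new Vector3(n.x - 0.5f, n.y - 0.5f, n.z - 0.5f) * scale;
    }
}
```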
I have a programming project using the Kinect for Xbox One sensor. The project is mainly about turning any surface into an interactive touchable screen. I have collected all the hardware, including the projector. In addition, I have done my research and downloaded the related packages, such as Visual Studio, in order to start coding in C#.
So, my question here:
Is there any library I could use that may help me determine the angles/depth of the surface?
Also, I don't have a full vision of the steps that need to be done next, so I would really appreciate it if anyone could draw a small road map for this project.
If you have trouble getting started with the Kinect, go through this
Quick start series
You also might want to capture the depth of objects. For that, try using the Kinect's depth image streams; the SDK itself does not provide many helpful methods, so you will have to do some image processing on the grey-scale depth stream. Then you can find the edges of a single object at different depths.
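A rough sketch of grabbing one depth frame with the Kinect v2 SDK and thresholding it into a binary mask you can feed into edge detection might look like this (minDepth/maxDepth are illustrative values you would tune to where your surface sits; frame-arrival events and error handling are omitted):

```csharp
using System;
using Microsoft.Kinect;

class DepthThresholdSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        using (DepthFrameReader reader = sensor.DepthFrameSource.OpenReader())
        {
            DepthFrame frame = null;
            while (frame == null)                // crude poll until a frame arrives
                frame = reader.AcquireLatestFrame();

            using (frame)
            {
                var desc = frame.FrameDescription;
                ushort[] depthMm = new ushort[desc.Width * desc.Height];
                frame.CopyFrameDataToArray(depthMm);

                // Keep only pixels within the band where the surface sits.
                const ushort minDepth = 800, maxDepth = 1200; // millimetres, tune
                byte[] mask = new byte[depthMm.Length];
                for (int i = 0; i < depthMm.Length; i++)
                    mask[i] = (byte)(depthMm[i] >= minDepth && depthMm[i] <= maxDepth ? 255 : 0);
                // 'mask' is now a grey-scale image; run an edge detector
                // (e.g. Sobel) over it to find object outlines.
            }
        }
        sensor.Close();
    }
}
```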
I am trying to make an app in C# that will detect/classify three poses of the human body: standing, sitting and lying. I can correctly detect/classify two of them (sitting and standing) with skeleton tracking. When it comes to lying on the floor, the Kinect seems unable to track the skeleton of a human body.
Does anyone have any experience with skeleton tracking in a lying position? As soon as I lie down, I lose the joint positions. Is this task impossible? Thank you.
The Windows SDK for Kinect V1 is not great at recognizing people lying down.
V2 improves a lot on this; I would recommend you participate in the beta or wait for V2.
One possible solution with V1 is to place a second camera vertically on the floor and use seated mode, so it detects based on motion. If you find a person with that camera but not with the other, they are likely lying down.
I have not tested this solution, but in theory it can work - try to set it up yourself and see if it fits your scenario needs.
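Equally untested, but with the v1 SDK the pieces could look roughly like this (the helper names are mine, and feeding each sensor's skeleton frames into the check is left out):

```csharp
using System.Linq;
using Microsoft.Kinect;

// Sketch of the two-camera heuristic: the floor-facing sensor runs in seated
// mode, and "tracked there but not by the main sensor" is taken as a hint
// that the person is lying down.
static class LyingDownHeuristic
{
    public static void EnableSeatedMode(KinectSensor floorSensor)
    {
        floorSensor.SkeletonStream.Enable();
        floorSensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
    }

    public static bool AnyTracked(Skeleton[] skeletons)
    {
        return skeletons.Any(s => s.TrackingState == SkeletonTrackingState.Tracked);
    }

    public static bool LikelyLyingDown(Skeleton[] floorCamera, Skeleton[] mainCamera)
    {
        // Seen by the floor camera but lost by the main camera -> probably lying.
        return AnyTracked(floorCamera) && !AnyTracked(mainCamera);
    }
}
```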
I'm developing an application for the Kinect for my final year university project, and I have a requirement to develop a number of gesture recognition algorithms. I'd appreciate some advice on this.
My initial algorithm detects the user's hand moving closer towards the Kinect within a certain time frame. For now, I'll say this is an arbitrary 500 ms.
My idea is as follows (a rough sketch appears after the list):
Record the z-axis position every 100 ms and store it in a List.
Each time a new position is recorded, check the z-position against each of the previous 4 positions in the List.
If the z-position has varied by the required distance, against any of those individually or collectively, fire off a gesture-recognised event.
If the gesture is recognised, clear the List and start again.
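Here is a minimal sketch of that idea (the threshold and window size are just my arbitrary values):

```csharp
using System;
using System.Collections.Generic;

// Sample the hand's z position every 100 ms, keep the last few samples, and
// fire an event when the hand has moved close enough to the sensor within
// the window.
class PushGestureDetector
{
    public event Action GestureRecognised;

    private readonly List<float> samples = new List<float>();
    private const int Window = 5;                 // 5 samples at 100 ms = 500 ms
    private const float RequiredDistance = 0.15f; // metres towards the sensor, tune

    // Call this every 100 ms with the hand joint's z position (metres).
    public void AddSample(float z)
    {
        samples.Add(z);
        if (samples.Count > Window)
            samples.RemoveAt(0);

        // Compare the newest sample against each previous one:
        // a drop in z means the hand moved towards the sensor.
        for (int i = 0; i < samples.Count - 1; i++)
        {
            if (samples[i] - z >= RequiredDistance)
            {
                GestureRecognised?.Invoke();
                samples.Clear(); // start again, as in step 4
                return;
            }
        }
    }
}
```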
This is the first time I have tried anything like this, and I would like some advice on my initial naive implementation.
Thanks.
Are you going to use the official Kinect SDK or open-source drivers (libfreenect or OpenNI)?
If you're using the Kinect SDK you can start by having a look at something like:
Kinect SDK Dynamic Time Warping (DTW) Gesture Recognition
Candescent NUI
(Candescent NUI focuses more on finger detection though)
If you're planning to use open-source drivers, try OpenNI and NITE.
NITE comes with hand tracking and gestures (swipe, circle control, 2D sliders, etc.).
The idea is to at least have hand detection and carry on from there. Once you've got that, you could implement something like an adaptation of the Unistroke Gesture Recognizer, or look into other techniques like Motion Templates/MotionHistory, etc., adapting them to the new data you can now play with.
Good luck!
If you're just trying to recognise the user swinging her hand towards you, your approach should work (despite being very susceptible to misfiring on noisy data). What you're trying to do falls very nicely within the field of pattern recognition. For this and very similar tasks, people very often use hidden Markov models with great success; you might want to check the Wikipedia article. I'm not a C# person, but as far as I know, Microsoft has very nice statistical inference libraries for C#, and they will definitely include HMM implementations.
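To make the HMM idea concrete, here is a tiny self-contained sketch of the forward algorithm, which scores how well an observation sequence fits a model. The two states, two symbols, and all the numbers below are toy values for illustration; in practice you would quantise your joint movements into symbols and train the parameters on recorded gestures (e.g. with one of those libraries) rather than hard-coding them.

```csharp
using System;

class HmmForwardSketch
{
    // Forward algorithm: probability that the model generates the sequence.
    static double ForwardProbability(double[] initial, double[,] trans,
                                     double[,] emit, int[] observations)
    {
        int states = initial.Length;
        var alpha = new double[states];

        for (int s = 0; s < states; s++)              // initialisation
            alpha[s] = initial[s] * emit[s, observations[0]];

        for (int t = 1; t < observations.Length; t++) // induction
        {
            var next = new double[states];
            for (int j = 0; j < states; j++)
            {
                double sum = 0;
                for (int i = 0; i < states; i++)
                    sum += alpha[i] * trans[i, j];
                next[j] = sum * emit[j, observations[t]];
            }
            alpha = next;
        }

        double total = 0;                             // termination
        foreach (double a in alpha) total += a;
        return total;
    }

    static void Main()
    {
        // Two hidden states ("idle", "pushing"), two symbols (0 = hand still,
        // 1 = hand moving towards the sensor). All numbers are made up.
        double[] initial = { 0.8, 0.2 };
        double[,] trans = { { 0.7, 0.3 }, { 0.4, 0.6 } };
        double[,] emit  = { { 0.9, 0.1 }, { 0.2, 0.8 } };

        int[] observed = { 0, 1, 1, 1 }; // looks like a push gesture
        Console.WriteLine(ForwardProbability(initial, trans, emit, observed));
        // Classify by comparing this likelihood against other gesture models.
    }
}
```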
I'm currently trying to implement a marble maze game for a WM 5.0 device and have been struggling to develop a working prototype. The prototype needs the user to control the ball using the directional keys, with realistic acceleration and friction.
I was wondering if anyone has experience with this and can give me some advice or point me in the right direction of what is essential and the best way to go around doing such a thing.
Thanks in advance.
Frank.
When reading your question, I didn't get the feeling you were looking for a game framework, but rather: how can I easily model a ball with acceleration and friction?
For this you don't need a full-fledged physics framework, since it is relatively simple to do yourself:
First create a timer which fires 30 times a second, and in the timer callback do the following:
Draw the maze background
Draw a ball at ballX, ballY (both floating point variables)
Add ballSpdX to ballX and add ballSpdY to ballY (the speed)
Now check the keys...
if the directional key is left, then subtract a small amount from ballSpdX
if the directional key is top-left, then subtract a small amount from ballSpdX and ballSpdY
etc.
For collision do the following:
First move the ball horizontally, then check for collisions with the walls. If a collision is detected, move the ball back to its previous position and reverse the speed: ballSpdX = -ballSpdX
Then move the ball vertically and check for collisions with the walls again. If a collision is detected, move the ball back to its previous position and reverse the speed: ballSpdY = -ballSpdY
By handling the vertical and horizontal movement separately, the collision is much easier, since you know which way the ball needs to bounce.
Last but not least, friction: friction is just doing this every frame: ballSpdX *= friction; (and the same for ballSpdY).
Here friction is something like 0.99. This makes sure the speed of the ball gets smaller every frame due to friction.
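Putting the steps above together, a rough sketch (CollidesWithWall is a placeholder you would implement against your maze data, and Accel/Friction are values to tune):

```csharp
// Per-frame integration, key input, axis-separated wall collision, friction.
class Marble
{
    public float X, Y;            // ballX, ballY
    public float SpdX, SpdY;      // ballSpdX, ballSpdY

    const float Accel = 0.2f;     // added per frame while a key is held
    const float Friction = 0.99f; // < 1, applied every frame

    // Call ~30 times per second from your timer, after drawing.
    public void Update(bool left, bool right, bool up, bool down)
    {
        if (left)  SpdX -= Accel;  // a diagonal key just sets two flags
        if (right) SpdX += Accel;
        if (up)    SpdY -= Accel;
        if (down)  SpdY += Accel;

        // Horizontal move, then collision: bounce by undoing the move
        // and reversing the horizontal speed.
        X += SpdX;
        if (CollidesWithWall(X, Y)) { X -= SpdX; SpdX = -SpdX; }

        // Same for vertical; handling the axes separately tells us
        // which way the ball has to bounce.
        Y += SpdY;
        if (CollidesWithWall(X, Y)) { Y -= SpdY; SpdY = -SpdY; }

        SpdX *= Friction;          // friction shrinks the speed every frame
        SpdY *= Friction;
    }

    // Placeholder: test the ball's position against the maze walls.
    bool CollidesWithWall(float x, float y) { return false; }
}
```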
Hope this helped
I would recommend checking out XNA Game Studio 3: it has built-in support for PC, Xbox 360 and mobile devices, and it's an official and free spin-off of Visual Studio from Microsoft.
http://creators.xna.com/en-US/
http://blogs.msdn.com/xna/
If you search around, you can find tutorials people have written using physics (velocity in this one):
http://www.xnamachine.com/2007/12/fun-with-very-basic-physics.html
Try XFlib. It is in C++, but most cool things for mobile have to be in C++, unfortunately. The site has some very cool free games, and you can also see the source of most of them; many have the physics you want.
Unfortunately, XNA doesn't support the Windows Mobile platform. However, since your problem doesn't seem to be the technical issue of drawing on the WM device but rather the logic required to implement physics-based movement, it's not a bad idea to consider XNA for prototyping the physics and movement code.
Check out some of the educational topics at creators.xna.com, and also "gamedev.net"
If you are at a loss, there's no harm in trying a "lighter" tool for a prototype. I would try Torque Game Builder; it spits out XNA, although it may not be meant for your platform.
Among the samples of the Windows Mobile SDK (check out the WM 6.0 SDK too), there are a couple of game applications. One of them is a simple puzzle game; not much, but it is a starting point.
The use of physics in game development is not specific to Windows Mobile; you can find a huge amount of literature on the subject. This comes to mind now. If you are serious about game development, on any platform, you should do a little research first.
I don't know if this may help, but I saw a marble application for the Android platform on Google Code. Check it out here; it may shed some light on the actual logic of the game.
The code is open-sourced and written in Java (using the Android SDK), but it may nevertheless be useful. Also, to better understand the code, check out the documentation for SensorManager, SensorEvent, etc. here.
I wouldn't recommend using the same architecture as this application, though.