I'm trying to implement hand gesture recognition for the Oculus Quest with Unity and the Oculus Integration package.
I've read the "Hand Tracking in Unity" documentation on the Oculus developer website, but it only covers getting the current pinch state of the fingers, which is not what I want:
https://developer.oculus.com/documentation/unity/unity-handtracking/
I thought about computing a flexion value for each finger (between 0 and 1, for example) and then training a k-NN model on those 5 features to recognize the nearest gesture. But I've been searching for hours and haven't found anything about getting finger positions; the only thing I found is getting the pinch.
By looking in the OVRSkeleton.cs file (from the Oculus Integration package), I've been able to get the current Transform for each bone (the position as a vector and the rotation as a quaternion), but I don't really know how to calculate or estimate finger flexion from that (or anything else useful for gesture recognition):
OVRSkeleton skeleton = GetComponent<OVRSkeleton>();
// World-space transform of the index finger's proximal bone:
Vector3 indexPos = skeleton.Bones[(int)OVRPlugin.BoneId.Hand_Index1].Transform.position;
Quaternion indexRot = skeleton.Bones[(int)OVRPlugin.BoneId.Hand_Index1].Transform.rotation;
The list of bone IDs is on the "Hand Tracking in Unity" documentation page.
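To make the k-NN idea concrete, this is roughly the classifier I have in mind (a rough sketch with k = 1; the per-finger flexion values it consumes are exactly the part I don't know how to compute):

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the k-NN idea (k = 1 for simplicity). Each sample is a label
// plus 5 per-finger flexion values in [0, 1]; computing those flexion
// values from the bone transforms is the missing piece.
public class GestureKnn
{
    private readonly List<(string label, float[] features)> samples =
        new List<(string, float[])>();

    public void AddSample(string label, float[] flexions) =>
        samples.Add((label, flexions));

    public string Classify(float[] flexions) =>
        samples.OrderBy(s => Distance(s.features, flexions)).First().label;

    private static float Distance(float[] a, float[] b)
    {
        float sum = 0f;
        for (int i = 0; i < a.Length; i++)
            sum += (a[i] - b[i]) * (a[i] - b[i]);
        return (float)Math.Sqrt(sum);
    }
}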
In fact, what I want to implement seems to be exactly what this package does:
https://assetstore.unity.com/packages/tools/integration/vr-hand-gesture-recognizer-oculus-quest-hand-tracking-168685
Any help, ideas or comments about how to calculate finger flexion, or any other way to implement gesture recognition, would be greatly appreciated!
Thanks
A few things/links I've explored so far:
https://www.reddit.com/r/OculusQuest/comments/elrn7a/unity_hand_tracking_and_different_gestures/
https://forums.oculusvr.com/developer/discussion/89615/detect-custom-hand-gestures
https://github.com/jorgejgnz/HandTrackingGestureRecorder is something I'm currently trying; the docs don't mention any dependency, but apparently it has one, and I don't have it working yet.
You can access the bone rotations at runtime and compare those to a recorded gesture. I think it's closer to "template matching" than machine learning: measuring the error between two poses.
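A minimal sketch of that idea, assuming an OVRSkeleton reference and a pose captured earlier (the names and the error threshold are made up for illustration):

using System.Collections.Generic;
using UnityEngine;

// Template matching against one recorded pose: sum the per-bone rotation
// error and accept the gesture if the average error is small enough.
public class SimplePoseMatcher : MonoBehaviour
{
    public OVRSkeleton skeleton;          // assigned in the Inspector
    public float maxAverageAngle = 15f;   // tolerated error per bone, in degrees

    private List<Quaternion> recordedPose;

    public void CapturePose()
    {
        recordedPose = new List<Quaternion>();
        foreach (var bone in skeleton.Bones)
            recordedPose.Add(bone.Transform.localRotation);
    }

    public bool MatchesRecordedPose()
    {
        if (recordedPose == null || recordedPose.Count != skeleton.Bones.Count)
            return false;

        float totalAngle = 0f;
        for (int i = 0; i < recordedPose.Count; i++)
            totalAngle += Quaternion.Angle(recordedPose[i],
                                           skeleton.Bones[i].Transform.localRotation);

        return totalAngle / recordedPose.Count <= maxAverageAngle;
    }
}

Using local rotations makes the match independent of where the hand is in space; one recorded pose per gesture is usually enough to start with.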
Hey, I'm pretty new to C#.
I'm building a plane sim, and on PC you can control the plane with the cursor through mouse movement.
Now I want a digital joystick on mobile devices, so I can control the plane there.
Does anybody have an idea how to code that?
Thanks
You can search for a tutorial online; a quick Google search leads to this video and hundreds more. On the Asset Store there are multiple ready-made controllers, for example this one made by Unity, which includes everything you need.
Next time, before asking a question, try to google what you need; 99 times out of 100 the thing you're trying to achieve has already been made over and over again ;)
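If you'd rather roll your own, here is a minimal sketch (assuming Unity's UI system, with this script on a knob Image nested inside a background Image; all names are illustrative):

using UnityEngine;
using UnityEngine.EventSystems;

// Minimal virtual joystick sketch: attach to a UI Image (the knob) nested
// inside a fixed background Image. Other scripts read Direction each frame.
public class SimpleJoystick : MonoBehaviour, IDragHandler, IEndDragHandler
{
    public RectTransform background;   // the joystick base, assigned in the Inspector
    public float radius = 100f;        // max knob travel in pixels

    public Vector2 Direction { get; private set; }   // normalized, -1..1 per axis

    public void OnDrag(PointerEventData eventData)
    {
        Vector2 local;
        RectTransformUtility.ScreenPointToLocalPointInRectangle(
            background, eventData.position, eventData.pressEventCamera, out local);

        Vector2 clamped = Vector2.ClampMagnitude(local, radius);
        ((RectTransform)transform).anchoredPosition = clamped;
        Direction = clamped / radius;
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        ((RectTransform)transform).anchoredPosition = Vector2.zero;
        Direction = Vector2.zero;
    }
}

This assumes a standard Canvas with an EventSystem in the scene; your plane controller can then read Direction the same way it reads mouse movement on PC.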
So I started to learn the ML-Agents package. I wanted to make a little 2D space game with an AI that detects the environment (player, other AIs, asteroids, etc.) through rays.
I figured out that you can add a Ray Perception Sensor 2D component to your agent. I understand how it works, but I cannot find anything on how to USE it from your code.
I just want to let the agent fly around, give it a reward, and shoot at its target when it finds something with the ray. Like => if (ray.tag == target) then shoot() and Reward(+1);
Unity's introduction tutorial to ML-Agents, ML-Agents: Hummingbirds, might be a good resource for you. In that tutorial, they make use of raycasts and integrate their functionality with ML-Agents.
You can find the tutorial at Unity Learn - ML-Agents: Hummingbirds.
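One thing worth knowing: the Ray Perception Sensor 2D feeds its hits to the policy automatically, so you don't read it from code at all. For your own shoot-and-reward logic you can cast a separate ray yourself. A rough sketch (assuming ML-Agents 1.x+, a hypothetical "Target" tag, and a placeholder Shoot()):

using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Sketch of the "reward on ray hit" idea: the sensor component handles
// observations for training, while this explicit raycast drives game logic.
public class SpaceAgent : Agent
{
    public float rayLength = 10f;

    public override void OnActionReceived(ActionBuffers actions)
    {
        // ... movement driven by actions would go here ...

        // Cast forward; consider a layer mask so the ray skips the agent's own collider.
        RaycastHit2D hit = Physics2D.Raycast(transform.position, transform.up, rayLength);
        if (hit.collider != null && hit.collider.CompareTag("Target"))
        {
            Shoot();
            AddReward(1f);
        }
    }

    private void Shoot()
    {
        // placeholder: spawn a projectile, play effects, etc.
    }
}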
I'm creating a simple game using Unity which uses the arrow keys to move the player. Now what I want to do is use the webcam as a movement-detecting device: track the user's movements and move the player according to them. (For example, when the user moves their hand to the right, the webcam tracks it and the player moves to the right.)
So, is this possible? If so, what are the techniques and APIs I should use for this?
Thanks!
Have a look at OpenCV; it is used a lot in the field of body and head tracking, and there's a Unity plugin that implements it which might be useful.
Video Demo
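To give a flavor of what this looks like in code, here is a tiny motion-detection sketch using OpenCvSharp (a .NET wrapper; the Unity plugin's API differs, so treat this purely as an illustration). It diffs consecutive webcam frames and reports where the motion is horizontally:

using OpenCvSharp;

// Frame-differencing sketch: grab webcam frames, diff them, and report the
// centroid of the changed pixels. Moving a hand left/right shifts the
// centroid, which could drive the player's movement.
class MotionDemo
{
    static void Main()
    {
        using var capture = new VideoCapture(0);
        var prev = new Mat();
        var curr = new Mat();
        var diff = new Mat();

        capture.Read(prev);
        Cv2.CvtColor(prev, prev, ColorConversionCodes.BGR2GRAY);

        while (capture.Read(curr))
        {
            Cv2.CvtColor(curr, curr, ColorConversionCodes.BGR2GRAY);
            Cv2.Absdiff(prev, curr, diff);
            Cv2.Threshold(diff, diff, 25, 255, ThresholdTypes.Binary);

            Moments m = Cv2.Moments(diff, binaryImage: true);
            if (m.M00 > 0)
            {
                double centroidX = m.M10 / m.M00; // 0 = left edge of the frame
                System.Console.WriteLine($"motion at x = {centroidX:F0}");
            }

            curr.CopyTo(prev);
        }
    }
}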
Unity itself can't do this, but there is a lot of stuff out there on the internet.
This one has some interesting looking links.
Emgu CV looks interesting too.
There is a JavaScript hand-tracking tool too.
And of course there's the Kinect, but you need the 3D sensor.
You could also use Leap Motion.
I'm developing an application for the Kinect for my final-year university project, and I have a requirement to develop a number of gesture recognition algorithms. I'd appreciate some advice on this.
My initial algorithm detects the user's hand moving closer towards the Kinect within a certain time frame. For now I'll say this is an arbitrary 500 ms.
My idea is as follows (sketched in code after the list):
Record the z-axis position every 100 ms and store it in a List.
Each time a new position is recorded, check the z-position against each of the previous 4 positions in the List.
If the z-position has changed by the required distance relative to any of those, fire off a gesture-recognised event.
If the gesture is recognised, clear the List and start again.
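This is roughly what I mean (the 100 ms sampling timer and the actual Kinect joint read are left out; the distance threshold is an arbitrary guess):

using System.Collections.Generic;

// Sketch of the push-gesture idea: sample the hand's Z every 100 ms and
// fire when it has moved toward the sensor by more than a threshold
// within the last 5 samples (~500 ms).
public class PushGestureDetector
{
    private readonly List<float> zHistory = new List<float>();
    private const int WindowSize = 5;             // 5 samples * 100 ms = 500 ms
    private const float RequiredDistance = 0.15f; // metres toward the sensor (a guess)

    public event System.Action GestureRecognised;

    // Call this every 100 ms with the current hand Z position.
    public void AddSample(float z)
    {
        zHistory.Add(z);
        if (zHistory.Count > WindowSize)
            zHistory.RemoveAt(0);

        // Compare the newest sample against each older one in the window.
        for (int i = 0; i < zHistory.Count - 1; i++)
        {
            if (zHistory[i] - z >= RequiredDistance)   // hand moved closer
            {
                GestureRecognised?.Invoke();
                zHistory.Clear();
                return;
            }
        }
    }
}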
This is the first time that I have tried anything like this, and I would like some advice on my initial naive implementation.
Thanks.
Are you going to use the official Kinect SDK or open-source drivers (libfreenect or OpenNI)?
If you're using the Kinect SDK you can start by having a look at something like:
Kinect SDK Dynamic Time Warping (DTW) Gesture Recognition
Candescent NUI
(Candescent NUI focuses more on finger detection though)
If you're planning to use open-source drivers, try OpenNI and NITE.
NITE comes with hand tracking and gestures (swipe, circle control, 2D sliders, etc.).
The idea is to at least have hand detection and carry on from there. Once you've got that, you could implement something like an adaptation of the Unistroke Gesture Recognizer, or look into other techniques like Motion Templates/MotionHistory, etc., adapting them to the new data you can now play with.
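To give a feel for the Unistroke adaptation: its core is resampling the recorded point path to a fixed length and taking the average point-to-point distance against each template. A stripped-down sketch of just that core (the full $1 recognizer also normalizes rotation and scale, which is omitted here):

using System;
using System.Collections.Generic;
using System.Drawing;   // PointF

// Core of a Unistroke-style matcher: resample both paths to n points,
// then compare with average point-to-point distance (smaller = more similar).
static class StrokeMatcher
{
    public static float AverageDistance(List<PointF> a, List<PointF> b, int n = 32)
    {
        var ra = Resample(new List<PointF>(a), n);
        var rb = Resample(new List<PointF>(b), n);
        float sum = 0f;
        for (int i = 0; i < n; i++)
            sum += Dist(ra[i], rb[i]);
        return sum / n;
    }

    // Standard $1-style resampling: walk the path and emit a point every
    // 'interval' units of arc length. Note: mutates its input list.
    static List<PointF> Resample(List<PointF> pts, int n)
    {
        float interval = PathLength(pts) / (n - 1);
        float d = 0f;
        var result = new List<PointF> { pts[0] };

        for (int i = 1; i < pts.Count; i++)
        {
            float seg = Dist(pts[i - 1], pts[i]);
            if (d + seg >= interval && seg > 0f)
            {
                float t = (interval - d) / seg;
                var q = new PointF(pts[i - 1].X + t * (pts[i].X - pts[i - 1].X),
                                   pts[i - 1].Y + t * (pts[i].Y - pts[i - 1].Y));
                result.Add(q);
                pts.Insert(i, q);   // q becomes the start of the next segment
                d = 0f;
            }
            else d += seg;
        }
        while (result.Count < n)            // guard against float rounding
            result.Add(pts[pts.Count - 1]);
        if (result.Count > n)
            result.RemoveRange(n, result.Count - n);
        return result;
    }

    static float PathLength(List<PointF> pts)
    {
        float len = 0f;
        for (int i = 1; i < pts.Count; i++)
            len += Dist(pts[i - 1], pts[i]);
        return len;
    }

    static float Dist(PointF p, PointF q) =>
        (float)Math.Sqrt((p.X - q.X) * (p.X - q.X) + (p.Y - q.Y) * (p.Y - q.Y));
}

Fed with the hand's 2D positions over time, classify a gesture by taking the template with the smallest AverageDistance.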
Good luck!
If you're just trying to recognise the user swinging her hand towards you, your approach should work (despite being very susceptible to misfiring due to noisy data). What you're trying to do falls very nicely in the field of pattern recognition. For this, and very similar tasks, people very often use hidden Markov models with great success. You might want to check the Wikipedia article. I'm not a C# person, but as far as I know, Microsoft has very nice statistical inference libraries for C#, and they will definitely include HMM implementations.
I'm currently trying to implement a marble maze game for a WM 5.0 device and have been struggling with developing a working prototype. The prototype would need the user to control the ball using the directional keys and display realistic acceleration and friction.
I was wondering if anyone has experience with this and can give me some advice or point me in the right direction of what is essential and the best way to go around doing such a thing.
Thanks in advance.
Frank.
When reading your question I didn't get the feeling you are looking for a game framework, but more for this: how can I easily model a ball with acceleration and friction?
For that you don't need a full-fledged physics framework, since it is relatively simple to do:
First create a timer which fires 30 times a second, and in the timer callback do the following:
Draw the maze background
Draw a ball at ballX, ballY (both floating point variables)
Add ballSpdX to ballX and add ballSpdY to ballY (the speed)
Now check the keys...
If the directional key is left, then subtract a small amount from ballSpdX
If the directional key is top-left, then subtract a small amount from ballSpdX and ballSpdY
etc.
For collision do the following:
First move the ball in the horizontal direction, then check for collisions with the walls. If a collision is detected, move the ball back to its previous position and reverse the speed: ballSpdX = -ballSpdX
Then move the ball in the vertical direction and check for collisions with the walls again. If a collision is detected, move the ball back to its previous position and reverse the speed: ballSpdY = -ballSpdY
By handling horizontal and vertical movement separately, collision handling is much easier, since you know which way the ball needs to bounce.
Last but not least, friction: friction is just doing this every frame: ballSpdX *= friction; (and the same for ballSpdY)
Where friction is something like 0.99. This makes sure the ball's speed gets a little smaller every frame due to friction.
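Putting the whole loop into code, it could look like this (CollidesWithWall() is a placeholder for your own maze test, and the constants are just plausible starting values; drawing the maze and the ball happens separately in the same timer callback):

// Direct translation of the loop described above into C#.
public class BallSim
{
    public float BallX, BallY;
    public float BallSpdX, BallSpdY;
    public const float Accel = 0.2f;     // speed added per frame while a key is held
    public const float Friction = 0.99f; // per-frame speed decay

    // Call ~30 times per second from your timer callback, after reading the keys.
    public void Update(bool left, bool right, bool up, bool down)
    {
        if (left)  BallSpdX -= Accel;
        if (right) BallSpdX += Accel;
        if (up)    BallSpdY -= Accel;
        if (down)  BallSpdY += Accel;

        // Horizontal movement and collision first...
        BallX += BallSpdX;
        if (CollidesWithWall(BallX, BallY))
        {
            BallX -= BallSpdX;       // move back to the previous position
            BallSpdX = -BallSpdX;    // bounce
        }

        // ...then vertical, so we always know which axis to bounce on.
        BallY += BallSpdY;
        if (CollidesWithWall(BallX, BallY))
        {
            BallY -= BallSpdY;
            BallSpdY = -BallSpdY;
        }

        BallSpdX *= Friction;
        BallSpdY *= Friction;
    }

    private bool CollidesWithWall(float x, float y)
    {
        return false; // placeholder: test the ball's bounds against your maze walls
    }
}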
Hope this helped
I would recommend checking out XNA Game Studio 3; it has built-in support for PC, Xbox 360 and mobile devices, and it's an official, free Microsoft tool built on Visual Studio.
http://creators.xna.com/en-US/
http://blogs.msdn.com/xna/
If you search around, people have written tutorials using physics (velocity in this one):
http://www.xnamachine.com/2007/12/fun-with-very-basic-physics.html
Try XFlib. It is in C++, but most cool things for mobile have to be in C++, unfortunately. The site has some very cool free games, and you can also see the source of most of them. Many have the physics you want.
Unfortunately, XNA doesn't support the Windows Mobile platform. However, since your problem seems to be not the technical issue of drawing on the WM device but the logic required to implement physics-based movement, it's not a bad idea to consider XNA to prototype the physics and movement code.
Check out some of the educational topics at creators.xna.com, and also "gamedev.net"
If you are at a loss, there's no harm in trying a "lighter" tool for a prototype. I would try Torque Game Builder; it spits out XNA, although it is maybe not meant for your platform.
Among the samples of the Windows Mobile SDK (check out the WM 6.0 SDK too), there are a couple of game applications. One of them is a simple puzzle game; not much, but it is a starting point.
The use of physics in game development is not specific to Windows Mobile; you can find a huge literature on the subject. This one comes to mind now. If you are serious about game development, on any platform, you should do a little research first.
I don't know if this will help, but I saw a marble application for the Android platform on Google Code. Check it out here; it may give some insight into the actual logic of the game.
The code is open source and written in Java (using the Android SDK), but it may nevertheless be useful. Also, to better understand the code, check out the documentation for SensorManager, SensorEvent, etc. here.
I wouldn't recommend using the same architecture as this application, though.