Face detection using a webcam - C#

I am developing a 3D project and would like to include the following feature:
As my webcam is watching my face, if I move to the left or the right, the project's camera position moves to the left or the right to create a "look-around-the-corner" effect.
Does anyone know of a face detection project in .NET C#?

You can use OpenCV for .NET - there is a wrapper for .NET which also comes with a sample application doing face detection on images - easy to adapt to the camera if you can extract the frames.
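As an illustration, here is a minimal sketch of that idea using Emgu CV (one of the .NET wrappers for OpenCV). The cascade file name and camera index 0 are assumptions you would adapt to your install:

```csharp
// Minimal face-detection loop with Emgu CV, a .NET wrapper for OpenCV.
// Assumptions: the Haar cascade XML ships with your Emgu CV install, and
// camera index 0 is your webcam.
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

class FaceTracker
{
    static void Main()
    {
        var cascade = new CascadeClassifier("haarcascade_frontalface_default.xml");
        using (var capture = new VideoCapture(0)) // default webcam
        {
            while (true)
            {
                using (Mat frame = capture.QueryFrame())
                {
                    if (frame == null) break;

                    using (var gray = new Mat())
                    {
                        CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);
                        Rectangle[] faces = cascade.DetectMultiScale(gray, 1.1, 4);
                        if (faces.Length > 0)
                        {
                            // Horizontal offset of the face from the frame centre,
                            // normalized to [-1, 1]: feed this into your 3D camera
                            // position for the look-around-the-corner effect.
                            float cx = faces[0].X + faces[0].Width / 2f;
                            float offsetX = cx / frame.Width * 2f - 1f;
                            Console.WriteLine($"Face offset: {offsetX:F2}");
                        }
                    }
                }
            }
        }
    }
}
```

The normalized horizontal offset is exactly the signal you would map onto your 3D camera position.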

Are you familiar with web services? Depending on how often you need to scan for faces during your webcam stream, you could grab a frame, send it to a face detection web service, and it would return the coordinates of the faces in that frame.
You could use http://detection.myvhost.de/ because it's free!
Good luck ;)
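If you go that route, the HTTP round trip will limit how often you can scan, so you would only send a frame a few times per second. A rough sketch, with a hypothetical endpoint and response format:

```csharp
// Sketch: posting one webcam frame (as JPEG bytes) to a face-detection web
// service. The endpoint URL and the response format are hypothetical; adapt
// them to the service you actually use.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RemoteFaceDetector
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task<string> DetectAsync(byte[] frameJpeg)
    {
        var content = new ByteArrayContent(frameJpeg);
        content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

        // Hypothetical endpoint; assume it answers with face coordinates as JSON.
        HttpResponseMessage response =
            await client.PostAsync("http://example.com/api/detect", content);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```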

Related

Kinect touchable Surface

I have a programming project using the Kinect Xbox One sensor. The project is mainly about turning any surface into an interactive touchable screen. I have collected all the hardware, including the projector. In addition, I have done my research and downloaded the related packages, such as Visual Studio, in order to start coding in C#.
So, my question here:
Is there any library that I could use which may help me determine the angles/depth of the surface?
Also, I don't have a full picture of the steps which need to be done next, so I would really appreciate it if anyone could sketch a small roadmap for this project.
If you have trouble getting started with Kinect, go through this
Quick start series.
You also might want to capture the depth of objects. For that, use Kinect's depth image streams; the SDK itself does not provide many convenience methods. You will have to do some image processing on that gray-scale depth stream. Then you can find the edges of a single object at different depths.
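As a starting point, here is a rough sketch of reading depth frames with the Kinect for Windows SDK 2.0 (which supports the Xbox One sensor) and thresholding pixels near an assumed surface depth; the calibration value is a placeholder you would measure for your own setup:

```csharp
// Sketch: reading Kinect v2 depth frames and thresholding by distance, as a
// first step toward segmenting "touch" points on the projected surface.
// Assumes the Kinect for Windows SDK 2.0 (Microsoft.Kinect assembly).
using Microsoft.Kinect;

class DepthReader
{
    private ushort[] depthData;

    public void Start()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
        FrameDescription desc = sensor.DepthFrameSource.FrameDescription;
        depthData = new ushort[desc.Width * desc.Height];

        reader.FrameArrived += (s, e) =>
        {
            using (DepthFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.CopyFrameDataToArray(depthData); // millimetres per pixel

                // Example segmentation: keep only pixels within 5 cm of the
                // surface plane. surfaceDepthMm is an assumed calibrated value.
                const ushort surfaceDepthMm = 1500;
                for (int i = 0; i < depthData.Length; i++)
                {
                    bool nearSurface = depthData[i] > surfaceDepthMm - 50
                                    && depthData[i] < surfaceDepthMm;
                    // nearSurface pixels are candidate touch points; run your
                    // edge detection / blob analysis on this mask.
                }
            }
        };
    }
}
```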

Get user inputs from the webcam for the game

I'm creating a simple game in Unity which uses the arrow keys to move the player. Now what I want to do is use the webcam as a movement-detecting device, track the user's movements, and move the player according to them. (For example, when the user moves his hand to the right, the webcam can track it and move the player to the right...)
So, is this possible? If so, what are the techniques/APIs I should use for this?
Thanks!
Have a look at OpenCV; it is used a lot in the field of body and head tracking, and there's a Unity plugin which implements it that might be useful.
Video Demo
Unity can't do this out of the box, but there is a lot of stuff out there on the internet.
This one has some interesting-looking links.
Emgu CV looks interesting too.
There is a JavaScript hand-tracking tool too.
And of course there's the Kinect, but you need the 3D sensor.
You could also use Leap Motion.
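Whichever tracker you pick, the Unity side is mostly about mapping the tracked position to player movement. A minimal sketch, where DetectHandX() is a hypothetical stand-in for whatever tracking library you choose:

```csharp
using UnityEngine;

// Sketch: mapping a tracked hand position to player movement in Unity.
// DetectHandX() is a hypothetical placeholder for your tracker of choice
// (OpenCV plugin, Emgu CV, Leap Motion, etc.); it should return the hand's
// horizontal position in the webcam frame, normalized to [-1, 1].
public class WebcamMovement : MonoBehaviour
{
    public float speed = 5f;
    private WebCamTexture webcam;

    void Start()
    {
        webcam = new WebCamTexture();
        webcam.Play(); // frames are then available via GetPixels()
    }

    void Update()
    {
        float handX = DetectHandX(webcam); // hypothetical tracking call
        transform.Translate(Vector3.right * handX * speed * Time.deltaTime);
    }

    float DetectHandX(WebCamTexture cam)
    {
        // Placeholder: plug in your tracking library here.
        return 0f;
    }
}
```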

Ask about Kinect Fusion for AR

I want to create an AR application using Kinect Fusion.
I want to insert a 3D model into my Kinect Fusion reconstruction. The expected output is like these two videos:
https://www.youtube.com/watch?v=gChlRebNloA
https://www.youtube.com/watch?v=MvP4cHfUD5g
How do I overlay the 3D object onto the 3D reconstruction?
Is there a function in the SDK that can be used to achieve my goal?
Thanks
PS: I use C#
You should probably start by exploring the KinectFusionExplorer-WPF sample available on CodePlex:
http://kinectforwindows.codeplex.com/SourceControl/latest#v1.x/ToolkitSamples1.8.0
KinectFusionExplorer-WPF has some nice utility methods for matrix transformations, (virtual) camera configuration, etc.
The process for doing the AR augmentation is:
viewport: set your viewport using the same resolution as the one of the camera image;
video background: retrieve the video image from the Kinect and display it in the background (either as done in the SDK examples or using a textured quad with the video image, as done more traditionally);
geometric registration: use WPF3D and set up a Camera object using the intrinsic parameters of the Kinect camera (or the same ones used by KinectFusion) and the camera pose you get via GetCurrentWorldToVolumeTransform() from your volume;
rendering: set any local transformation you want for your model (XAML) and render (the way it's generally done with WPF3D).
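For the geometric registration step, a rough sketch of building a WPF3D PerspectiveCamera from a camera pose (assumed already converted from the Kinect Fusion Matrix4 into a WPF Matrix3D); the field-of-view value is a nominal Kinect v1 figure, an assumption rather than a calibrated intrinsic:

```csharp
// Sketch: configuring a WPF3D PerspectiveCamera to match the Kinect camera,
// so a rendered model lines up with the video background.
using System.Windows.Media.Media3D;

static class CameraRegistration
{
    // cameraPose: the Kinect Fusion camera pose, converted to a Matrix3D.
    static PerspectiveCamera MakeRegisteredCamera(Matrix3D cameraPose)
    {
        var camera = new PerspectiveCamera
        {
            FieldOfView = 62.0, // approx. Kinect v1 horizontal FOV (assumption)
            NearPlaneDistance = 0.1,
            FarPlaneDistance = 10.0,
        };

        // Position comes from the translation part of the pose; look and up
        // directions from the rotated axes (WPF cameras look down -Z).
        camera.Position = new Point3D(
            cameraPose.OffsetX, cameraPose.OffsetY, cameraPose.OffsetZ);
        camera.LookDirection = cameraPose.Transform(new Vector3D(0, 0, -1));
        camera.UpDirection = cameraPose.Transform(new Vector3D(0, 1, 0));
        return camera;
    }
}
```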

How to draw and process some 3D points?

I have some points like
1,2,3
1,1,1
2,3,4
2,5,6
9,10,2
66,43,23
I want to draw and view them on the page, but I don't know how to do this. I have read a little about XNA, but I think there may be a better way. Can you help me?
Update
I want to display the 3D points the way 3ds Max does: have a 3D shape and be able to look at it from all angles.
What you need is a 3D graphics API.
There are two main ones:
OpenGL
Direct3D
OpenGL is cross-platform and is used on mobile devices like Android and iPhone.
Direct3D is Microsoft-specific and generally for the Windows platform only.
Don't overestimate the value of building a cross-platform app. If it's a small pet project and you're not planning to go cross platform in the future, don't choose a more difficult API just because it's cross platform.
OpenGL is quite easy to start with via GLUT from C++, and there are tons of simple examples on the web.
You can also use Direct3D via XNA, which is also easy to start with. There are a huge number of XNA 4.0 tutorials on YouTube (I haven't watched them; they just have a very green likes bar!) and also the MSDN tutorials.
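To give a feel for the XNA route, here is a minimal sketch that renders your sample points and orbits the camera around them so you can view them from all angles. Note that XNA 4.0 dropped point primitives, so each point is drawn as a small cross of line segments:

```csharp
// Sketch: rendering the sample points with XNA 4.0 and orbiting the camera.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class PointsGame : Game
{
    private GraphicsDeviceManager graphics;
    private BasicEffect effect;
    private VertexPositionColor[] vertices;
    private float angle;

    private static readonly Vector3[] Points =
    {
        new Vector3(1, 2, 3), new Vector3(1, 1, 1), new Vector3(2, 3, 4),
        new Vector3(2, 5, 6), new Vector3(9, 10, 2), new Vector3(66, 43, 23),
    };

    public PointsGame() { graphics = new GraphicsDeviceManager(this); }

    protected override void LoadContent()
    {
        effect = new BasicEffect(GraphicsDevice) { VertexColorEnabled = true };

        // Three crossing line segments (one per axis) for every point.
        var list = new System.Collections.Generic.List<VertexPositionColor>();
        foreach (Vector3 p in Points)
            foreach (Vector3 d in new[] { Vector3.UnitX, Vector3.UnitY, Vector3.UnitZ })
            {
                list.Add(new VertexPositionColor(p - d, Color.Yellow));
                list.Add(new VertexPositionColor(p + d, Color.Yellow));
            }
        vertices = list.ToArray();
    }

    protected override void Update(GameTime gameTime)
    {
        angle += (float)gameTime.ElapsedGameTime.TotalSeconds; // orbit slowly
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);

        Vector3 eye = new Vector3(80f * (float)System.Math.Cos(angle), 40f,
                                  80f * (float)System.Math.Sin(angle));
        effect.World = Matrix.Identity;
        effect.View = Matrix.CreateLookAt(eye, new Vector3(15, 10, 5), Vector3.Up);
        effect.Projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 500f);

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserPrimitives(
                PrimitiveType.LineList, vertices, 0, vertices.Length / 2);
        }
        base.Draw(gameTime);
    }
}
```

Run it with `using (var game = new PointsGame()) game.Run();` in a standard XNA 4.0 project.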

Interacting with avatar using Kinect and Unity

I want to move an avatar based on the movement of the player using Kinect and Unity; are there any good tutorials?
We are using Unity and the Kinect interface to create a simple application. Based on the movement of the player, we need to move the avatar.
We are supposed to use Unity with GAKUNITY, no OpenNI or any third-party tools.
Are there any good tutorials for GakUnity with Kinect?
GAK means Gadget Accelerator Kit.
We just want to move any avatar with player movement in front of the Kinect interface. Even help with hand movement is highly appreciated.
You can also share useful links or books regarding Unity and Kinect programming.
Custom Kinect Gesture Recognition using OpenNI and Unity3D (kinect.dashhacks.com)
A tipster has tipped a lil tipple my way in the form of custom gesture recognition for the Kinect using OpenNI and Unity 3D. What this allows you to do is create your own custom gestures that are recognized by the Kinect through the software interface.
Unity and Kinect tutorial (nightmarekitty.com)
For this chapter, we are going to be using a very popular game engine called Unity. By integrating OpenNI, NITE and Sensor Kinect into Unity we will control a 3D character and multiple user interfaces. After we cover the main components, we will build an example of each from the bottom up.
Kinect Wrapper Example Project (wiki.etc.cmu.edu)
This scene shows you how a skeleton is generated/tracked by placing spheres at each of the bones tracked by the Kinect, and how to use the Kinect to control your model. Use this to get a feel for what the Kinect is capable of. It also shows you how to prepare your GameObjects.
I am not familiar with Unity3D, but you can try using Kinect with XNA. In the latest version of the Kinect for Windows SDK 1.5 and Developer Toolkit 1.5.1, there is a sample demonstrating how to interact with a 3D avatar using Kinect and XNA; you can find more information at http://msdn.microsoft.com/en-us/library/jj131041
I have no idea about GAKUnity, but you can use the Zigfu plugin for Unity3D for your project. It comes with sample code and characters which you can use for reference, and the sample code is nicely commented.
You can just import your model and drag your bones onto the ZigSkeleton script. Just read through the ZigSkeleton script and you will understand the functions and structures it uses.
You can also refer to the Meet the Kinect book, which explains using Zigfu in Unity.
