I'm creating a simple game in Unity which uses the arrow keys to move the player. What I want to do now is use the webcam as a movement-detecting device: track the user's movements and move the player according to them. (For example, when the user moves his hand to the right, the webcam can track it and move the player to the right.)
So, is this possible? If so, what techniques/APIs should I use for this?
Thanks!
Have a look at OpenCV; it is used a lot in the field of body and head tracking, and there's a Unity plugin which implements it that might be useful.
Video Demo
Unity itself can't do this out of the box, but there is a lot of stuff out there on the internet.
This one has some interesting looking links.
Emgu CV looks interesting too.
There is a JavaScript hand-tracking tool too.
And of course there's the Kinect, but you need the 3D sensor.
You could also use Leap Motion.
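Whichever library you end up using, inside Unity the webcam itself is read through WebCamTexture. Purely to illustrate the plumbing (not a substitute for real tracking), here is a crude sketch that estimates where motion happens horizontally and exposes it as an input value; the resolution and thresholds are arbitrary:

```csharp
using UnityEngine;

// Very crude illustration (no OpenCV): find pixels that changed between frames,
// take their average horizontal position, and expose it as a -1..1 input value.
public class WebcamMotionInput : MonoBehaviour
{
    private WebCamTexture _cam;
    private Color32[] _previous;

    public float Horizontal { get; private set; } // rough "where is the motion" value

    void Start()
    {
        _cam = new WebCamTexture(320, 240);
        _cam.Play();
    }

    void Update()
    {
        if (!_cam.didUpdateThisFrame) return;

        Color32[] current = _cam.GetPixels32();
        if (_previous != null && _previous.Length == current.Length)
        {
            long weightedX = 0, changed = 0;
            for (int i = 0; i < current.Length; i++)
            {
                // Compare only the red channel to keep the example short.
                if (Mathf.Abs(current[i].r - _previous[i].r) > 30)
                {
                    weightedX += i % _cam.width; // x coordinate of the changed pixel
                    changed++;
                }
            }

            if (changed > 100) // enough motion to trust the estimate
            {
                float centreX = (float)weightedX / changed;
                Horizontal = (centreX / _cam.width) * 2f - 1f;
            }
            else
            {
                Horizontal = 0f;
            }
        }
        _previous = current;
    }
}
```

Your movement script could then read Horizontal from this component instead of Input.GetAxis("Horizontal"). Note that webcam images are typically mirrored, so you may need to flip the sign.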
Hey, I'm pretty new to C#.
I'm building a plane sim, and on PC you can control the plane with the cursor through mouse movement.
Now I want a digital joystick on a mobile device, so I can control the plane on that.
Does anybody have an idea how to code that?
Thanks!
You can search for a tutorial online; a quick Google search leads to this video and hundreds more. On the Asset Store there are multiple controllers already made, for example this one made by Unity, which includes everything you need.
Next time, before asking a question, try to Google what you need; 99 times out of 100, the thing you're trying to achieve has already been made over and over again ;)
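If you'd rather roll a minimal one yourself, a rough sketch (class and field names are mine, not from any particular asset) is to put a knob Image under a background Image, read a normalized value from the drag events, and feed that into your plane controller instead of the mouse axes:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Minimal on-screen joystick: attach to the knob Image, assign the background
// and knob RectTransforms, then read joystick.Value from your plane controller.
public class SimpleJoystick : MonoBehaviour, IDragHandler, IPointerDownHandler, IPointerUpHandler
{
    [SerializeField] private RectTransform background; // outer circle (pivot at centre)
    [SerializeField] private RectTransform knob;       // inner circle (this object)

    public Vector2 Value { get; private set; }          // -1..1 on each axis

    public void OnDrag(PointerEventData eventData)
    {
        Vector2 localPoint;
        if (RectTransformUtility.ScreenPointToLocalPointInRectangle(
                background, eventData.position, eventData.pressEventCamera, out localPoint))
        {
            float radius = background.sizeDelta.x * 0.5f;
            Value = Vector2.ClampMagnitude(localPoint / radius, 1f);
            knob.anchoredPosition = Value * radius;
        }
    }

    public void OnPointerDown(PointerEventData eventData) => OnDrag(eventData);

    public void OnPointerUp(PointerEventData eventData)
    {
        Value = Vector2.zero;
        knob.anchoredPosition = Vector2.zero;
    }
}
```

The plane controller then uses joystick.Value.x and joystick.Value.y in place of the mouse deltas.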
I am currently using the NVIDIA FleX package in Unity3D to create soft-bodied, jelly objects. I'm using Unity for animation only, not game dev.
What I am aiming to make is a transparent, jello sphere that retains its spherical shape with elasticity.
The first way I've tried to achieve this is using Flex Array + the fluid setting. I've been playing with the settings, but I can't get it to remain a sphere; it just becomes a more or less viscous fluid blob.
The second way is using Flex Soft + the fluid setting. It is much better in terms of physics, but even with "draw particles" off, the water droplets are each separated and not one jelly sphere.
This is what it looks like before hitting play, where the left is with Flex Array and the right is Flex Soft. The particles for Array are visible but not for Soft.
This is after hitting play, where the Array becomes one viscous fluid, but not a sphere, and the Soft is very jello-like but the water droplets are all separated.
A solution for either of the two ways would be much appreciated!
The standard approach is to create an NVIDIA Flex Controller first...
Then you should also create a Flex Soft Asset...
Then you should create or select a game object and through the Add Component tab in the game object's inspector, find the Flex Soft Actor component [see it loaded up in the image below]...
Ensure your Soft Actor Asset mentioned previously has your required mesh type selected in the inspector option [I chose sphere in the image here] and check to see it looks something like the image below to be sure...
So after that, hopefully, you can just press play and see it in action as it drops and contorts for you.
If not, I have created a quick example for you to download as a unitypackage.
It may still require further resolution with the Package Manager, as the Flex plugin is already inside the package I'm providing here [using Unity 2020.3.5f1]:
Flex in unity package
Anyway, hope this gets you started and somewhere towards your goal with Flex.
As a bonus, I've added a small script to move the Flex object, since this is outside the usual approach: we have to get the NVIDIA Flex actor component of choice and invoke its ApplyImpulse method.
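For reference, a rough sketch of what that mover script might look like; the NVIDIA.Flex namespace, the FlexSoftActor component name, and the exact ApplyImpulse signature are my assumptions from the plugin version above, so double-check them against the component source in the package:

```csharp
using UnityEngine;
using NVIDIA.Flex; // namespace assumed from the FleX plugin

// Hypothetical sketch: nudge the Flex soft actor with an impulse when Space is pressed.
public class FlexMover : MonoBehaviour
{
    [SerializeField] private float strength = 5f;

    private FlexSoftActor _actor;

    void Start()
    {
        _actor = GetComponent<FlexSoftActor>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            // Apply an upward impulse to the actor's particles (signature assumed).
            _actor.ApplyImpulse(Vector3.up * strength);
        }
    }
}
```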
Cheers :)
Edit: There is a small set of 3 tutorials from NVIDIA GameWorks on integrating the plugin with Unity and walking through some examples; that material is included in my downloadable package provided above.
Here is the YouTube link to the set:
Nvidia Gameworks FleX tutorials on Youtube
Edit 2: Rereading your question made me think I hadn't really given you a definitive answer regarding using a cloth actor and having the mesh renderer deform via the Flex cloth deform component.
I am providing a link to another Unity package here that shows this in action, also letting you see the game object and how the cloth component from NVIDIA Flex works with the standard Mesh Filter and Mesh Renderer. Hope this more accurately answers your question :)
Example also using Cloth Actors as well as Soft Actors in NVIDIA FleX
I want different images to be displayed from different points of view. For the whole concept explanation, please look at the images; they explain my idea/query!
As in the first image, you can see there are three people looking at the monitor from different angles. Now I want the webcam to track the eyes and show a particular predefined image to that user. For example: if the user is at a 45-degree angle, then show image1.png.
Depending upon the user's viewing perspective, the computer should show the corresponding image.
(The lady is the game character, for representation purposes.)
Can you please guide me on what steps can be taken to accomplish this? Is there any plugin available for Unity that tracks faces? Please guide me.
Also thanks for the compliments on my sketching skills xD
Stack Overflow is not really meant for recommending plugins, since the choice is usually opinion-based, so there is no exact answer.
That being said, one of the most commonly used APIs for computer vision (meaning interpreting images, including face recognition) is OpenCV, so that could be a good starting point for you.
And fortunately for you, there is a Unity plugin for OpenCV.
It is too broad to give you more details about how it works here. You should try to make it work, and if you have a problem with your code, open a new question with the code portion that you are struggling with.
PS: nice sketching skills
Perhaps an easier option would be to use a Kinect (trying to detect faces or eyes from that far might be shaky?). With a Kinect you can get skeletons for multiple people, and getting the angle between the target and those Kinect avatars would be easy.
If there is no space to put the Kinect in a good position, you could consider placing it on the ceiling above (and then use the depth data only to detect people in its view).
The only issue is that Microsoft has apparently stopped Windows Kinect support, so you would need to find second-hand units (the Unity Asset Store still has some Kinect plugins and examples available):
https://www.polygon.com/2018/1/2/16842072/xbox-one-kinect-adapter-out-of-stock-production-ended
Or look for Kinect alternatives that work with Unity; try RealSense cameras:
https://www.intel.sg/content/www/xa/en/architecture-and-technology/realsense-overview.html
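To give an idea of the "angle between target and avatar" part, here is a minimal Unity-side sketch; all the names and thresholds are mine, and the viewer position would come from whatever tracker you pick (a Kinect skeleton's head joint, an OpenCV face box, etc.):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: pick an image based on the viewer's horizontal angle
// relative to the screen.
public class ViewDependentImage : MonoBehaviour
{
    [SerializeField] private Transform screenCenter;   // monitor position; forward points at the audience
    [SerializeField] private RawImage display;         // UI element showing the picture
    [SerializeField] private Texture leftImage;        // shown around -45 degrees
    [SerializeField] private Texture frontImage;       // shown around 0 degrees
    [SerializeField] private Texture rightImage;       // shown around +45 degrees

    // Call this whenever the tracker reports a new viewer position.
    public void UpdateView(Vector3 viewerPosition)
    {
        // Direction from the screen to the viewer, flattened onto the ground plane.
        Vector3 toViewer = viewerPosition - screenCenter.position;
        toViewer.y = 0f;

        // Signed angle between the screen's forward axis and the viewer direction.
        float angle = Vector3.SignedAngle(screenCenter.forward, toViewer, Vector3.up);

        if (angle < -22.5f)      display.texture = leftImage;
        else if (angle > 22.5f)  display.texture = rightImage;
        else                     display.texture = frontImage;
    }
}
```

Whether the position comes from a Kinect skeleton or a face tracker, only the code that calls UpdateView changes.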
I am creating a tower defense game, i need help with a method or program to make it a tile map so then i can slowly input enemies on a road have turrets that shoot at them etc, i have already tried:
Inputting a picture one by one which i failed on.
Also i tried using the program called tiled but i failed on understand how that makes it tile map so i am pretty lost on the definition on tile map now.
Could someone make some suggestions, links, explanations would be very helpful?
You can load maps generated by 'Tiled' using 'TiledSharp'.
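A minimal sketch of what that looks like (the .tmx path and map contents are just examples, assuming you've exported a map from the Tiled editor):

```csharp
using System;
using TiledSharp; // https://github.com/marshallward/TiledSharp

class Program
{
    static void Main()
    {
        // Load a map exported from the Tiled editor (path is an example).
        var map = new TmxMap("Content/level1.tmx");

        Console.WriteLine($"Map size: {map.Width} x {map.Height} tiles, " +
                          $"tile size: {map.TileWidth} x {map.TileHeight} px");

        // Walk the first tile layer and print the non-empty tiles.
        var layer = map.Layers[0];
        foreach (var tile in layer.Tiles)
        {
            if (tile.Gid != 0) // Gid 0 means "no tile here"
                Console.WriteLine($"Tile {tile.Gid} at ({tile.X}, {tile.Y})");
        }
    }
}
```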
You'll have to figure out another way to get the enemies to move on the map, get the towers to shoot at them, etc. Try this for a start:
Algorithms for realtime strategy wargame AI
You can also try Unity3D, which might already have something ready for use (unless, of course, the point is for you to learn the algorithms rather than just make the game).
If you need more help, please specify exactly what it is you're looking for, e.g. displaying the map, moving the forces, shooting at the forces, UI, etc.
I'm developing an application for the Kinect for my final year university project, and I have a requirement to develop a number of gesture recognition algorithms. I'd appreciate some advice on this.
My initial algorithm detects the user's hand moving closer towards the Kinect within a certain time frame. For now I'll say this is an arbitrary 500 ms.
My idea is as follows:
Record the z-axis position every 100 ms and store it in a List.
Each time a new position is recorded, check the z-position against each of the previous 4 positions in the List.
If the z-position has varied by the required distance from any of those, individually or collectively, fire off a gesture-recognised event.
If the gesture is recognised, clear the List and start again.
This is the first time I have tried anything like this, and I would like some advice on my initial naive implementation.
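In code, something along these lines is what I have in mind (the threshold, window size, and class names are arbitrary placeholders):

```csharp
using System;
using System.Collections.Generic;

// Rough sketch of the naive check described above: sample the hand's z position
// every 100 ms, keep the last 5 samples (~500 ms), and fire an event when the
// hand has moved towards the sensor by more than a threshold.
public class PushGestureDetector
{
    private const int WindowSize = 5;             // 5 samples at 100 ms = 500 ms window
    private const float RequiredDistance = 0.15f; // metres towards the sensor (needs tuning)

    private readonly List<float> _zHistory = new List<float>();

    public event Action GestureRecognised;

    // Call this from the skeleton-frame handler roughly every 100 ms.
    public void AddSample(float handZ)
    {
        _zHistory.Add(handZ);
        if (_zHistory.Count > WindowSize)
            _zHistory.RemoveAt(0);

        // Compare the newest sample against each older one in the window.
        for (int i = 0; i < _zHistory.Count - 1; i++)
        {
            if (_zHistory[i] - handZ >= RequiredDistance)
            {
                GestureRecognised?.Invoke();
                _zHistory.Clear();
                return;
            }
        }
    }
}
```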
Thanks.
Are you going to use the official Kinect SDK or open-source drivers (libfreenect or OpenNI)?
If you're using the Kinect SDK you can start by having a look at something like:
Kinect SDK Dynamic Time Warping (DTW) Gesture Recognition
Candescent NUI
(Candescent NUI focuses more on finger detection though)
If you're planning to use open-source drivers, try OpenNI and NITE.
NITE comes with hand tracking and gestures (swipe, circle control, 2D sliders, etc.).
The idea is to at least have hand detection and carry on from there. Once you've got that, you could implement something like an adaptation of the Unistroke Gesture Recognizer, or look into other techniques like Motion Templates/MotionHistory, etc., adapting them to the new data you can now play with.
Good luck!
If you're just trying to recognise the user swinging her hand towards you, your approach should work (despite being very susceptible to misfiring due to noisy data). What you're trying to do falls very nicely in the field of pattern recognition. For this, and very similar tasks, people very often use hidden Markov models with great success. You might want to check the Wikipedia article. I'm not a C# person, but as far as I know, Microsoft has very nice statistical inference libraries for C#, and they will definitely include HMM implementations.
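To make the HMM idea concrete without committing to a particular library, here is a toy, self-contained forward-algorithm sketch; the states, observation alphabet, and probabilities are made-up numbers purely for illustration, and a real system would learn them from recorded gestures:

```csharp
using System;

// Toy illustration of scoring a gesture with an HMM: observations are quantised
// hand movements (0 = towards sensor, 1 = still, 2 = away). A sequence is
// classified by comparing its likelihood under different gesture models.
class HmmForwardDemo
{
    // P(next state | current state)
    static readonly double[,] Transition =
    {
        { 0.7, 0.3 },   // state 0: "pushing"
        { 0.2, 0.8 },   // state 1: "idle"
    };

    // P(observation | state)
    static readonly double[,] Emission =
    {
        { 0.8, 0.15, 0.05 }, // pushing mostly emits "towards sensor"
        { 0.1, 0.70, 0.20 }, // idle mostly emits "still"
    };

    static readonly double[] Initial = { 0.5, 0.5 };

    // Forward algorithm: likelihood of an observation sequence under the model.
    static double SequenceLikelihood(int[] observations)
    {
        int numStates = Initial.Length;
        var alpha = new double[numStates];

        for (int s = 0; s < numStates; s++)
            alpha[s] = Initial[s] * Emission[s, observations[0]];

        for (int t = 1; t < observations.Length; t++)
        {
            var next = new double[numStates];
            for (int s = 0; s < numStates; s++)
            {
                double sum = 0;
                for (int prev = 0; prev < numStates; prev++)
                    sum += alpha[prev] * Transition[prev, s];
                next[s] = sum * Emission[s, observations[t]];
            }
            alpha = next;
        }

        double total = 0;
        foreach (var a in alpha) total += a;
        return total;
    }

    static void Main()
    {
        // A sequence that mostly moves towards the sensor scores higher under
        // this "push gesture" model than a mostly-still sequence.
        Console.WriteLine(SequenceLikelihood(new[] { 0, 0, 0, 1, 0 }));
        Console.WriteLine(SequenceLikelihood(new[] { 1, 1, 2, 1, 1 }));
    }
}
```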