I'm getting pose estimation data from Python OpenCV by opening the camera, and I'm sending this data to Unity in real time over UDP.
What I want is to apply this data to a humanoid avatar so that it moves and animates the same way as my body, in real time.
I have tried changing the position or rotation of the avatar, but it still doesn't work properly.
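Roughly, this is the kind of receiver script I have on the Unity side (the port number, the "boneName,x,y,z" message format and the rightArm field are just my own choices, not from any package):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using UnityEngine;

public class PoseReceiver : MonoBehaviour
{
    public Transform rightArm;   // one bone of the humanoid avatar, assigned in the inspector
    UdpClient client;
    Thread receiveThread;
    float x, y, z;               // last landmark received from Python

    void Start()
    {
        client = new UdpClient(5052);
        receiveThread = new Thread(ReceiveLoop) { IsBackground = true };
        receiveThread.Start();
    }

    void ReceiveLoop()
    {
        IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            // Each datagram is expected to be "boneName,x,y,z"
            byte[] data = client.Receive(ref remote);
            string[] parts = Encoding.UTF8.GetString(data).Split(',');
            if (parts.Length == 4 &&
                float.TryParse(parts[1], out float px) &&
                float.TryParse(parts[2], out float py) &&
                float.TryParse(parts[3], out float pz))
            {
                x = px; y = py; z = pz;
            }
        }
    }

    void Update()
    {
        // This is the part that "doesn't work": writing the landmark straight into
        // the bone position instead of converting it into proper bone rotations.
        rightArm.localPosition = new Vector3(x, y, z);
    }

    void OnDestroy()
    {
        receiveThread?.Abort();
        client?.Close();
    }
}
```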
Can anyone help, please?
Related
So I am making something in Unity for VR and am trying to remove slope bounce when going down slopes. Every tutorial I see manipulates the player movement functions in the player script, but I am using the Action Based Continuous Move Provider for my movement, so I don't have access to that stuff, do I?
Basically, I am wondering how to smooth slope movement when using this locomotion system.
I have tried looking at a bunch of YouTube tutorials, but they all end up modifying the player's movement directly so that it is parallel with the plane you're walking up. I understand the technique, but I'm not sure how to do it with the continuous move provider.
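For reference, this is my understanding of what those tutorials do, written for a plain CharacterController driven from my own script rather than the move provider (so it's only a sketch of the idea, not something I can use as-is):

```csharp
using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class SlopeAlignedMove : MonoBehaviour
{
    public float speed = 2f;
    public float groundCheckDistance = 1.5f;

    CharacterController controller;

    void Awake() => controller = GetComponent<CharacterController>();

    void Update()
    {
        // Flat input direction (a real project would read the XR joystick instead).
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        move = transform.TransformDirection(move);

        // Project the movement onto the surface we are standing on so we follow the
        // slope instead of stepping off it and falling ("slope bounce").
        if (Physics.Raycast(transform.position, Vector3.down, out RaycastHit hit, groundCheckDistance))
            move = Vector3.ProjectOnPlane(move, hit.normal);

        // Gravity omitted for brevity.
        controller.Move(move * speed * Time.deltaTime);
    }
}
```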
As stated in the title. I will include a video of what I'm trying to make. I'm not sure how to make the part that the player swings on.
https://youtu.be/Z_RVr0nFpVE
Description of the mechanic: when the player taps, a "web" is created that goes from the player to the roof at a slight forward angle. The web gets shorter over time. When the player stops tapping, the web goes away.
I'm sure that there is no physical rope or string simulation there. The easiest way is to slow the horizontal speed down, add a decelerating upward velocity, and draw a rope from the web's origin to the player. After adjusting the parameters, the result should look similar to what you've shown.
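A rough sketch of that idea, assuming a Rigidbody-based player and a LineRenderer for the rope (all field names and numbers are placeholders to illustrate it, not values from the video):

```csharp
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class FakeWebSwing : MonoBehaviour
{
    public LineRenderer webLine;          // assigned in the inspector
    public float horizontalDamping = 2f;  // how quickly sideways speed dies off
    public float upwardBoost = 8f;        // initial strength of the upward "pull"
    public float boostDecay = 6f;         // how quickly that upward pull fades

    Rigidbody rb;
    Vector3 anchorPoint;
    float currentBoost;
    bool webbing;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
        webLine.positionCount = 2;
        webLine.enabled = false;
    }

    void Update()
    {
        // Mouse button 0 also fires on a touch tap.
        if (Input.GetMouseButtonDown(0))
        {
            // Attach slightly ahead of the player, up at the "roof".
            anchorPoint = transform.position + transform.forward * 3f + Vector3.up * 10f;
            currentBoost = upwardBoost;
            webbing = true;
            webLine.enabled = true;
        }
        if (Input.GetMouseButtonUp(0))
        {
            webbing = false;
            webLine.enabled = false;
        }

        if (webbing)
        {
            webLine.SetPosition(0, anchorPoint);
            webLine.SetPosition(1, transform.position);
        }
    }

    void FixedUpdate()
    {
        if (!webbing) return;

        Vector3 v = rb.velocity;
        // Slow the horizontal speed down...
        Vector3 horizontal = Vector3.Lerp(new Vector3(v.x, 0f, v.z), Vector3.zero,
                                          horizontalDamping * Time.fixedDeltaTime);
        // ...and add an upward velocity that decelerates over time.
        currentBoost = Mathf.MoveTowards(currentBoost, 0f, boostDecay * Time.fixedDeltaTime);
        rb.velocity = new Vector3(horizontal.x, v.y + currentBoost * Time.fixedDeltaTime, horizontal.z);
    }
}
```

Tuning horizontalDamping, upwardBoost and boostDecay should get the arc close to the one in the video.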
I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing, but so far I have been unable to implement these solutions in my project.
When moving a hand across the maximum range of the camera, for example from left to right or in any other direction, the motion only translates to a portion of the available screen space; in other words, a user will never be able to reach the edges of the screen with their hands. From my understanding, the tracking data provided in millimetres by the camera is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article, "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more affect a small physical movement will have." - This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalising the coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in an updated position?
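For what it's worth, my normalisation attempt looks roughly like this (based on the Controller/Frame/InteractionBox API from the linked article, with made-up range values); I just don't know where values like these are supposed to go so that the capsule hand prefabs move with them:

```csharp
using Leap;
using UnityEngine;

// Currently attached to a test cube; I don't know where this mapping should live
// so that the capsule hands themselves use it.
public class ScaledHandPosition : MonoBehaviour
{
    public float horizontalRange = 10f;  // Unity units the full hand range should cover (made up)
    public float verticalRange = 6f;

    Controller controller = new Controller();

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) return;

        // Normalise the palm position into 0..1 on each axis using the InteractionBox.
        InteractionBox box = frame.InteractionBox;
        Vector normalized = box.NormalizePoint(frame.Hands[0].PalmPosition, true);

        // Scale the normalised coordinates so a full physical sweep covers the
        // whole range I want, centred on this object's parent.
        transform.localPosition = new Vector3(
            (normalized.x - 0.5f) * horizontalRange,
            (normalized.y - 0.5f) * verticalRange,
            transform.localPosition.z);
    }
}
```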
How can I tell whether a person is facing a Kinect or showing it his/her back?
I am using the Microsoft Kinect SDK v1.7.
The Microsoft Kinect SDK does not track the back of users. It cannot track a full body rotation; it only tracks a more or less forward-facing user.
Now granted, it might get "confused" and still track your skeleton when you're facing it with your back, but even then the skeleton will be aligned as if you were facing forward. If it does track you, you could potentially apply a heuristic such as "are my wrists further from the sensor than my hips?" or "how are the shoulder, elbow and wrist angles oriented?", but it would be inaccurate at best.
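A rough sketch of the wrists-vs-hips idea, assuming the C# skeleton stream of SDK 1.7 (the 5 cm margin is an arbitrary number):

```csharp
using Microsoft.Kinect;

static class FacingHeuristic
{
    // Returns true if the user is probably facing away, false if probably facing the
    // sensor, and null if the relevant joints are not reliably tracked.
    public static bool? IsProbablyFacingAway(Skeleton skeleton)
    {
        Joint wristL = skeleton.Joints[JointType.WristLeft];
        Joint wristR = skeleton.Joints[JointType.WristRight];
        Joint hipL = skeleton.Joints[JointType.HipLeft];
        Joint hipR = skeleton.Joints[JointType.HipRight];

        // Only trust fully tracked joints.
        if (wristL.TrackingState != JointTrackingState.Tracked ||
            wristR.TrackingState != JointTrackingState.Tracked ||
            hipL.TrackingState != JointTrackingState.Tracked ||
            hipR.TrackingState != JointTrackingState.Tracked)
            return null;

        // Z is the distance from the sensor. With arms hanging naturally, wrists
        // noticeably further away than the hips *may* indicate a turned back.
        float wristZ = (wristL.Position.Z + wristR.Position.Z) / 2f;
        float hipZ = (hipL.Position.Z + hipR.Position.Z) / 2f;
        return wristZ > hipZ + 0.05f;  // 5 cm margin, arbitrary threshold
    }
}
```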
When the user stands still without any movement it is impossible, but if the user is walking the scenario can be handled like this:
In the skeleton coordinates of SDK 1.7, the Z coordinate is the distance from the user to the Kinect, so when the user walks toward the camera Z decreases, and when the user moves away from the camera Z increases.
This scenario is only useful when the user is walking normally; a sketch of the idea is below.
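A sketch of that idea, assuming the C# SDK 1.7 skeleton stream and that a user walking toward the sensor (Z decreasing) is facing it (the 1 cm per-frame threshold is arbitrary):

```csharp
using Microsoft.Kinect;

class WalkingDirectionEstimator
{
    float? previousZ;

    // Call once per skeleton frame. Returns true = walking toward (likely facing) the
    // sensor, false = walking away (likely back turned), null = not enough movement.
    public bool? Update(Skeleton skeleton)
    {
        Joint hip = skeleton.Joints[JointType.HipCenter];
        if (hip.TrackingState != JointTrackingState.Tracked) return null;

        float z = hip.Position.Z;
        bool? result = null;

        if (previousZ.HasValue)
        {
            float delta = z - previousZ.Value;
            if (delta < -0.01f) result = true;        // moving toward the Kinect
            else if (delta > 0.01f) result = false;   // moving away from the Kinect
        }

        previousZ = z;
        return result;
    }
}
```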
Anyone know how to use Goblin XNA to implement ball control in the same way as one might find in the board game labyrinth?
There don't appear to be any tutorials or information at all regarding how to do this, despite there being a demo video displaying just such a thing.
I've set up the environment and gravity and added the ground and a sphere. I use WorldTransformation.Decompose to extract the current orientation of the board. I know the next step will be either ApplyLinearVelocity or AddForce on the sphere based on the current board orientation, but I don't know how to apply these methods continuously so that the ball keeps moving in response to the movement of the board. Adding code to the Draw or Update methods only executes the code a single time. Is anyone familiar with Goblin XNA at all and able to help?
As far as I can see, in Goblin XNA the Update and Draw methods are called every frame, the same way as in standard XNA games. Can you give more specific information? Source code, perhaps?
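For reference, applying a tilt-based force every frame in a plain XNA-style Update would look roughly like the sketch below; ApplyLinearVelocity/AddForce are just the names from your question, so the actual Goblin XNA physics call may be different:

```csharp
using Microsoft.Xna.Framework;

public class LabyrinthGame : Game
{
    Matrix boardWorldTransformation;   // updated elsewhere from the tracked board
    // IPhysicsObject ballPhysics;     // whatever object exposes AddForce in your setup
    const float forcePerTilt = 9.8f;

    protected override void Update(GameTime gameTime)
    {
        // Decompose the board's world transform into its rotation each frame.
        Vector3 scale, translation;
        Quaternion rotation;
        boardWorldTransformation.Decompose(out scale, out rotation, out translation);

        // The horizontal part of the board's tilted "up" axis points downhill,
        // which is the direction the ball should roll toward.
        Vector3 boardUp = Vector3.Transform(Vector3.Up, rotation);
        Vector3 downhill = new Vector3(boardUp.X, 0f, boardUp.Z) * forcePerTilt;

        // Re-apply the force every Update so the ball keeps responding to the board.
        // ballPhysics.AddForce(downhill);   // placeholder call, see note above

        base.Update(gameTime);
    }
}
```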