C# wiimote library - computing distance from sensor bar to wiimote - c#

I'm working with the managed C# Wiimote library for a fun little project, but I'm having trouble finding a good tutorial on how to calculate how far the Wiimote is from the monitor (i.e. the sensor bar). I want to create a zoom effect where an object grows or shrinks based on how far back you move the Wiimote from the screen.
Can anyone help me with this?

The system is not calibrated, so it can't tell you the actual distance. However, it can tell you the relative distance. The IR sensor on the remote works by reporting the size and location of up to four IR light sources on the sensor bar. When the remote moves farther away, the lights get smaller and closer together; when the remote gets closer, the lights get larger and farther apart. I would use the distance between the lights, since the reported dot size only ranges from 0 to 15.
I recommend Brian Peek's Wii library: http://wiimotelib.codeplex.com
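To make that concrete, here is a rough sketch of the idea using WiimoteLib (the type and member names below, such as Wiimote, WiimoteChanged, SetReportType and IRState.IRSensors, are from that library and may vary slightly between versions). It reads the two sensor-bar dots and turns their separation into a relative zoom factor:

```csharp
using System;
using WiimoteLib;

class RelativeDistanceDemo
{
    static void Main()
    {
        var wiimote = new Wiimote();
        wiimote.WiimoteChanged += OnWiimoteChanged;
        wiimote.Connect();
        wiimote.SetReportType(InputReport.IRAccel, true); // enable IR + accelerometer reports

        Console.WriteLine("Point the Wiimote at the sensor bar. Press Enter to quit.");
        Console.ReadLine();
        wiimote.Disconnect();
    }

    static void OnWiimoteChanged(object sender, WiimoteChangedEventArgs e)
    {
        var ir = e.WiimoteState.IRState.IRSensors;

        // Need at least the two dots produced by the ends of the sensor bar.
        if (!ir[0].Found || !ir[1].Found)
            return;

        // Separation of the dots in the IR camera image (raw camera units).
        double dx = ir[0].RawPosition.X - ir[1].RawPosition.X;
        double dy = ir[0].RawPosition.Y - ir[1].RawPosition.Y;
        double separation = Math.Sqrt(dx * dx + dy * dy);

        // Separation shrinks as the remote moves away, so its inverse works as a
        // relative distance / zoom factor. The constant 400 is arbitrary; tune it
        // so the zoom feels right at a comfortable sitting distance.
        double zoom = 400.0 / Math.Max(separation, 1.0);
        Console.WriteLine($"dot separation: {separation:F1}  ->  zoom factor: {zoom:F2}");
    }
}
```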

Related

Why does the Tango Point Cloud lose so much accuracy in the Y direction?

I am using the Tango point cloud to place 3D objects in the world; however, over time the accuracy of the point cloud worsens.
When I start the app, the point cloud lines up with everything correctly, but after maybe 10 seconds of moving the camera around, the point cloud mesh is hovering about 1-2 inches above the real-world objects. It gets worse until I restart the app. It seems fine in the X and Z directions, but the error always slowly increases in the Y direction.
I found a similar question, but I'm not sure it's an offset issue, because it looks correct in the beginning and just slowly gets worse over time: How do I get more reliable Y position tracking for the Google Tango in Unity?
Also, I tried going back to the Point Cloud example from the Tango GitHub and enabling the video overlay so I could compare the point cloud mesh with the real-world objects, and it happens there too: the mesh slowly begins to hover above the actual objects. What is causing this, and how do I fix it?
I found that this project (another example project from Google) didn't have the problem with the point cloud losing accuracy. After examining the two for differences, I noticed that it doesn't have a Character Controller on the Tango AR camera. So I removed the Character Controller from the camera in both my project and the Point Cloud example from the Tango GitHub, and this does fix the bad Y accuracy. (You have to completely remove it from the camera; just un-checking it in the Unity editor doesn't fix it.)
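If you prefer to strip the component at runtime rather than deleting it from the camera object by hand, a small script like this (plain Unity API, nothing Tango-specific; attach it to the Tango AR Camera) should have the same effect:

```csharp
using UnityEngine;

// Removes the CharacterController component at startup, which (per the answer
// above) avoids the slow Y-axis drift of the point cloud. Disabling the
// component in the inspector is not enough; it has to be removed entirely.
public class RemoveCharacterController : MonoBehaviour
{
    void Awake()
    {
        var controller = GetComponent<CharacterController>();
        if (controller != null)
        {
            Destroy(controller);
        }
    }
}
```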

AR objects drift issue in Google TANGO

I am trying to create a simple scene where a few objects are placed on a table. Object placement works perfectly, but when I move the device, the objects drift around a bit, which at some point makes the objects placed at the corners feel like they are not on the table but floating in the air.
Even in the Sun, Moon and Earth example in the Unity examples here: https://github.com/googlesamples/tango-examples-unity
the Earth and Moon drift as you move the device.
Is this a bug, or is there a special setting I'm missing?
The objects drift because as the Tango device moves through space, it is only tracking its own position in 3D space. For objects to remain static in a dynamic environment, the device needs to understand the position of the placed objects in 3D space and their relation to the surroundings in order to anchor the objects and reduce drift.
Luckily, Tango Core has you covered here: the three core technologies of Motion Tracking, Depth Perception and Area Learning all work together to help out.
If I'm not mistaken, the Sun and Moon example is the scene "SimpleAugmentedReality" under tango-examples-unity / UnityExamples / Assets / TangoSDK / Examples / Scenes /
However if you would like to anchor the objects in 3D space and reduce drift, you'll need to use Area Learning and Depth Perception as well. Area learning performs Loop Closures as the device realises it has "seen" an area before and adjusts the path and markers to provide a more accurate device and augmented content position.
So here is what you can do to learn what you need. Save your current scene, go to Open Scene, follow the path tango-examples-unity / UnityExamples / Assets / TangoSDK / Examples / Scenes /, and load up some of the other scenes to get an understanding of how the technologies intertwine.
For example, you could load up the ExperimentalMeshBuilderWithColour scene, and learn how the Depth Processing works programmatically, and then load the MotionTracking scene and learn how to access and use Motion Tracking from the TangoManager Game Object. And finally (also probably most frustratingly difficult) learn how Area Learning is managed with the AreaDescriptionManagement and AreaLearning scenes.
This will not only solve your drift issues, but also give you a much fuller understanding of the capabilities of the Tango technology and let you express your ideas much more easily.
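As a starting point, here is a hypothetical sketch of listening for drift-corrected poses, assuming the ITangoPose callback and TangoPoseData fields used in the example scenes (names may differ between SDK releases). The idea is that once an area description is in use, poses reported relative to the area-description frame are the ones adjusted by loop closures, so anchored content should follow those rather than the raw start-of-service poses:

```csharp
using UnityEngine;
using Tango;

// Hypothetical listener: reacts only to poses whose base frame is the area
// description, i.e. the poses that area learning corrects via loop closures.
public class DriftCorrectedPoseListener : MonoBehaviour, ITangoPose
{
    public TangoApplication m_tangoApplication; // assign in the inspector

    void Start()
    {
        if (m_tangoApplication != null)
        {
            m_tangoApplication.Register(this); // receive OnTangoPoseAvailable callbacks
        }
    }

    public void OnTangoPoseAvailable(TangoPoseData poseData)
    {
        bool driftCorrected =
            poseData.framePair.baseFrame ==
            TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;

        if (driftCorrected &&
            poseData.status_code == TangoEnums.TangoPoseStatusType.TANGO_POSE_VALID)
        {
            // Use this pose to (re)position anchored objects; it is adjusted as
            // the device recognises previously seen areas.
            Debug.Log("Area-description pose: " + poseData.translation[0] + ", "
                      + poseData.translation[1] + ", " + poseData.translation[2]);
        }
    }
}
```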

Leap Motion Coordinate System, Translations and Interaction Box with Unity C#

I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing, but so far I have been unable to implement its solutions in my project.
When I move my hands across the camera's full range, for example from left to right or in any other direction, the motion only maps to a portion of the available screen space; in other words, a user can never reach the edges of the screen with their hands. From my understanding, the tracking data the camera provides in millimetres is translated into units that Unity can understand and process. I want to change that scale.
From the article, "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more affect a small physical movement will have." - This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalising coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in the updated position?
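For what it's worth, here is a minimal sketch of one way to apply that scale factor in Unity, assuming the classic Leap C# API described in the linked article (Controller, Frame, InteractionBox.NormalizePoint, Hand.PalmPosition); whether these are still exposed the same way in your Orion version may differ. The playAreaSize value is the scale knob: the larger it is, the less physical movement is needed to cover the play area.

```csharp
using UnityEngine;
using Leap;

// Maps the first tracked palm into a chosen region of Unity space by
// normalising it through the InteractionBox and rescaling. playAreaSize is an
// arbitrary assumption to be tuned, not something from the Leap API.
public class LeapPalmToUnity : MonoBehaviour
{
    public Vector3 playAreaSize = new Vector3(8f, 6f, 4f); // Unity units the hand should cover
    private Controller _controller;

    void Start()
    {
        _controller = new Controller();
    }

    void Update()
    {
        Frame frame = _controller.Frame();
        if (frame.Hands.Count == 0)
            return;

        Hand hand = frame.Hands[0];

        // Normalise the palm position into the [0,1] range of the interaction box.
        InteractionBox box = frame.InteractionBox;
        Vector normalized = box.NormalizePoint(hand.PalmPosition, true);

        // Re-centre around 0 and scale to the desired play area; this object now
        // tracks the palm, and a larger playAreaSize stretches small physical
        // movements across more of the scene.
        transform.position = new Vector3(
            (normalized.x - 0.5f) * playAreaSize.x,
            (normalized.y - 0.5f) * playAreaSize.y,
            (normalized.z - 0.5f) * playAreaSize.z);
    }
}
```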

I want help and ideas about how to draw a GPS path in C#

As you can see, I want to do a senior project on soccer player tracking with GPS, showing the path a player took, in real time.
I have already studied basic GPS functions in C#, but I'm really having problems with how to draw the paths on the map or picture I want to use once I have the data from the GPS.
The hardware part is already finished, but I'm stuck on how to get the data from the GPS and draw the player's path.
I would appreciate any help (sorry for my bad English). Thank you very much.
Link to my project design picture:
http://image.ohozaa.com/view2/weK9gVKBzGZqRxKC
Just think about what you're doing.
GPS data (from each player) is received as a sequence of points (latitude/longitude?).
Convert those points to X/Y coordinates for your football field image.
Use a graphics API (such as GDI / System.Drawing) to draw lines between subsequent points.
If you're using C#, you might save time and trouble by using WinForms, subclassing Control, and painting directly to the control's surface. You'll need to store a list of the recent points for each player, because you'll need to constantly repaint the control.
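A rough sketch of that approach (the field corner coordinates below are made-up placeholders; substitute the GPS coordinates of your real pitch corners):

```csharp
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

// Stores recent lat/lon fixes per player, converts them to pixel coordinates
// over the control, and repaints lines between consecutive points.
public class PitchControl : Control
{
    // Hypothetical GPS bounding box of the pitch (degrees).
    const double MinLat = 13.7560, MaxLat = 13.7566;
    const double MinLon = 100.5010, MaxLon = 100.5019;

    private readonly Dictionary<int, List<(double Lat, double Lon)>> _fixes =
        new Dictionary<int, List<(double Lat, double Lon)>>();

    public PitchControl()
    {
        DoubleBuffered = true; // avoid flicker, since we repaint constantly
    }

    // Call this whenever a new GPS fix arrives for a player.
    public void AddFix(int playerId, double latitude, double longitude)
    {
        if (!_fixes.ContainsKey(playerId))
            _fixes[playerId] = new List<(double Lat, double Lon)>();

        _fixes[playerId].Add((latitude, longitude));
        Invalidate(); // schedule a repaint
    }

    // Linear mapping from lat/lon to control pixels, origin at the top-left.
    private PointF ToPixels(double lat, double lon)
    {
        float x = (float)((lon - MinLon) / (MaxLon - MinLon) * Width);
        float y = (float)((MaxLat - lat) / (MaxLat - MinLat) * Height);
        return new PointF(x, y);
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;

        using (var pen = new Pen(Color.Red, 2f))
        {
            foreach (var path in _fixes.Values)
            {
                if (path.Count < 2)
                    continue;

                var points = new PointF[path.Count];
                for (int i = 0; i < path.Count; i++)
                    points[i] = ToPixels(path[i].Lat, path[i].Lon);

                e.Graphics.DrawLines(pen, points);
            }
        }
    }
}
```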
Note that the geolocation features in .NET won't help you here unless all of your football players are going to be carrying laptops strapped to their backs. You'd want small GPS trackers attached to each player along with a small radio transmitter that sends the data. An easy way to do this is with a commodity Bluetooth GPS unit, but I don't know if Bluetooth can support that many transceivers in such a small space, or even if the signal will reach from one end of the field to the other. The most expensive way is to write a phone app and have each player carry a smartphone that sends geolocation data over a 3G or Wi-Fi connection.
Note that GPS units tend to have a usable accuracy of about 5m (maybe 2.5m on a good day), and are useless indoors. Then consider the 5 minutes it takes for them to secure a good lock in the first place (mobile phones have quick geolocation because they use assistance from mobile phone masts). Football fields aren't very big, and even with 2.5m accuracy the data isn't going to be very useful.
In real sports they don't use GPS, for this reason. Instead they use higher-precision radio units with specialist transmitter/receiver units placed around the pitch. An alternative is visual tracking, but that's an immature science (Turing help you if two or more players wearing the same team colour collapse into each other).
Looking at the picture you provided, I'd say something like this is feasible with a WPF application using the Canvas control and the Line class. You'd have to convert your GPS data to (x,y) coordinates with the origin at the upper-left corner of the soccer field. Then you could connect subsequent points using line segments.
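A minimal illustration of that WPF idea, assuming the points have already been converted to canvas coordinates:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

// Joins each new (x,y) point, already converted from GPS with the origin at the
// field's top-left corner, to the previous one with a Line added to the Canvas.
public static class PathDrawer
{
    public static void AddSegment(Canvas canvas, Point previous, Point current)
    {
        var segment = new Line
        {
            X1 = previous.X,
            Y1 = previous.Y,
            X2 = current.X,
            Y2 = current.Y,
            Stroke = Brushes.Blue,
            StrokeThickness = 2
        };
        canvas.Children.Add(segment);
    }
}
```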

Detecting fingers in Kinect for Windows SDK 1.5 (C#)

I'm currently detecting the full skeleton in a WPF application. I want to know how to detect the fingers so they appear with the skeleton. I'm using the Microsoft Kinect for Windows SDK v1.5.
Many thanks
The Kinect unfortunately is not sensitive enough to recognize fingers, so the library will not provide them as part of the skeleton. Maybe the Kinect 2.0, rumored to come out with the Xbox 720, will be able to provide that level of detail.
Candescent NUI might be what you're looking for. As OpenUserX03 said, however, the Kinect isn't ideal for this task. Perhaps you should have a look at the upcoming LEAP technology, which specializes in finger detection.
The cameras on the Kinect are not meant to do joint tracking for the hands at that level of detail. Tracking individual fingers is possible but won't be very reliable. To represent a player's hand in the skeleton, you can instead check whether the hand is open or closed. A possible way to do that is to do pixel checks in an area surrounding the hand joint: with some tuning, you can calculate how much of that area is the hand (using the depth and color streams) and how much is not. For example, if 40% of that area is at the same depth as the hand joint, the hand is closed in a fist; if 70% of that area is at the same depth as the hand joint, the hand is open. Then you could use the angles of the elbow and wrist joints to represent a closed or open hand at that angle on the skeleton.
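Here is a rough sketch of that pixel-check idea. It is not an official API: it assumes you already have the raw depth pixels (e.g. from DepthImageFrame.CopyPixelDataTo) and the hand joint's position mapped into depth-image coordinates, and the window size, depth tolerance and thresholds are guesses to be tuned, as described above.

```csharp
// Estimates whether the hand is open by measuring how much of a square window
// around the hand joint lies at roughly the same depth as the hand.
public static class HandStateEstimator
{
    public static bool IsHandOpen(short[] depthPixels, int frameWidth, int frameHeight,
                                  int handX, int handY, int handDepthMm)
    {
        const int window = 30;       // half-size of the square region around the hand
        const int toleranceMm = 60;  // pixels within this depth range count as "hand"

        int handPixels = 0, totalPixels = 0;

        for (int y = handY - window; y <= handY + window; y++)
        {
            for (int x = handX - window; x <= handX + window; x++)
            {
                if (x < 0 || x >= frameWidth || y < 0 || y >= frameHeight)
                    continue;

                // Assumes SDK 1.x packing: player index in the low 3 bits,
                // depth in millimetres in the remaining bits.
                int depthMm = depthPixels[y * frameWidth + x] >> 3;
                totalPixels++;

                if (System.Math.Abs(depthMm - handDepthMm) <= toleranceMm)
                    handPixels++;
            }
        }

        if (totalPixels == 0)
            return false;

        double coverage = (double)handPixels / totalPixels;

        // Per the answer above: roughly 40% coverage suggests a closed fist,
        // roughly 70% suggests an open hand, so split the difference.
        return coverage >= 0.55;
    }
}
```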
