Hi, I am writing code in C#. I use the Kinect SDK v2 and a robot; for example, my robot model is an IRB 1600. From the Kinect sensor I get a human point cloud whenever a person is detected by the camera, and from the robot I get a position (X-Y-Z) that tells me where the robot is every time I ask for it. My problem is that the camera and the robot have different coordinate systems: the sensor's coordinates are centred on the camera, while the robot's are centred on its base. I want to put them into one common coordinate system so I can calculate the distance between the human and the robot. Are there any methods or tutorials for doing that?
Thank you
I think you just need to do a calibration...
You need to decide where your common 3D coordinate system has its origin (0,0,0) and then convert all positions from both devices into that system. It shouldn't be hard to implement.
Once you have both 3D positions in the same coordinate system, a simple 3D calculation gives you the distance:
Distance = sqrt((X2 - X1)^2 + (Y2 - Y1)^2 + (Z2 - Z1)^2)
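Something along these lines, just as a sketch; the rotation and translation values would come from your own calibration between the Kinect and the robot base, and the numbers below are placeholders:

```csharp
using System;

// Maps a point from the Kinect camera frame into the robot base frame and
// computes the Euclidean distance to the robot's reported position.
static class FrameMapping
{
    // Placeholder calibration: identity rotation, robot base 1.5 m along the camera's X axis.
    static readonly double[,] R = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
    static readonly double[] T = { 1.5, 0.0, 0.0 };

    // Apply the rigid transform p' = R * p + T to a Kinect point.
    public static double[] KinectToRobot(double x, double y, double z)
    {
        return new[]
        {
            R[0, 0] * x + R[0, 1] * y + R[0, 2] * z + T[0],
            R[1, 0] * x + R[1, 1] * y + R[1, 2] * z + T[1],
            R[2, 0] * x + R[2, 1] * y + R[2, 2] * z + T[2]
        };
    }

    // Euclidean distance between two points given in the same frame.
    public static double Distance(double[] a, double[] b)
    {
        double dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```

Once every Kinect point is expressed in the robot base frame, the formula above is all you need for the human-robot distance.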
Related
I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing, but so far I have been unable to implement its solutions in my project.
When I move my hands across the camera's maximum range, for example left to right or in any other direction, this only translates to a portion of the available screen space; in other words, a user will never be able to reach the edges of the screen with their hands. From my understanding, the tracking data the camera provides in millimetres is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article, "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more affect a small physical movement will have." - This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalising the coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in the updated position?
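To illustrate the kind of mapping I have in mind, here is a rough sketch; the class, its fields, and the object it moves are my own placeholders, not part of the Leap assets:

```csharp
using UnityEngine;

// Maps a normalised (0..1) hand position onto a chosen region of world space.
// Increasing 'scale' makes small physical movements cover more of the scene.
public class HandSpaceMapper : MonoBehaviour
{
    public Transform handProxy;           // placeholder: the object that should follow the hand
    public Vector3 worldCenter = Vector3.zero;
    public Vector3 worldExtents = new Vector3(0.5f, 0.5f, 0.5f); // half-size of the reachable box, in Unity units
    public float scale = 2f;              // the "pixels per millimetre" idea from the article

    // Call this each frame with the InteractionBox-normalised palm position.
    public void UpdateHand(Vector3 normalizedPalm)
    {
        // Re-centre from 0..1 to -0.5..0.5, then stretch by the extents and the scale factor.
        Vector3 centred = normalizedPalm - new Vector3(0.5f, 0.5f, 0.5f);
        handProxy.position = worldCenter + Vector3.Scale(centred * (2f * scale), worldExtents);
    }
}
```

What I can't figure out is whether something like this is the intended approach with the Orion prefabs, or whether the scale is supposed to be set somewhere on the hand controller itself.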
I have a set of places of interest (addresses with their GPS coordinates) and would love to know which ones are in the direction I am facing.
That is, I would like to know (in real-time) which places from the list one could "see" if they looked in that direction. It will be based on a compass direction on a mobile device and the GPS coordinates of that device.
The solution will be a Xamarin mobile application in C#, but I would first like to know what the concept is called and how one would tackle such a challenge if they can get the current GPS coordinates and the direction.
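From what I have read so far, this seems to come down to computing the bearing (forward azimuth) from my position to each place and keeping the places whose bearing falls within some field of view around the compass heading. A rough sketch of that idea; the 60-degree field of view is an arbitrary placeholder:

```csharp
using System;

static class BearingFilter
{
    // Initial bearing (forward azimuth) from point 1 to point 2, in degrees 0..360.
    public static double BearingDegrees(double lat1, double lon1, double lat2, double lon2)
    {
        double phi1 = lat1 * Math.PI / 180.0;
        double phi2 = lat2 * Math.PI / 180.0;
        double dLambda = (lon2 - lon1) * Math.PI / 180.0;

        double y = Math.Sin(dLambda) * Math.Cos(phi2);
        double x = Math.Cos(phi1) * Math.Sin(phi2) -
                   Math.Sin(phi1) * Math.Cos(phi2) * Math.Cos(dLambda);
        return (Math.Atan2(y, x) * 180.0 / Math.PI + 360.0) % 360.0;
    }

    // True when the place's bearing lies within +/- fovDegrees/2 of the compass heading,
    // taking the 360-degree wrap-around into account.
    public static bool IsInView(double headingDegrees, double bearingDegrees, double fovDegrees = 60.0)
    {
        double diff = Math.Abs(headingDegrees - bearingDegrees) % 360.0;
        if (diff > 180.0) diff = 360.0 - diff;
        return diff <= fovDegrees / 2.0;
    }
}
```

Is "bearing" the right term for this, and is a filter like this roughly how such apps do it?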
I want to calculate the depth of an image so that I can eliminate far-away objects from it.
Are there any methods to do so in C# with a single camera?
This website shows how to get a webcam image using C#. However, just like a photo, it is flat so there is no way to distinguish objects at different distances from the camera. In general, with just one camera and a single photo/image, what you want is impossible.
With one or two cameras that snap two images/photos with some distance in between, you can distinguish depth (just like you do using your two eyes). However, this requires very complex mathematics to first identify the objects and second determine their approximate distance from the camera.
Kinect uses an infrared camera that creates a low-resolution image to measure the distance to objects in front of the camera, so that it can distinguish the player from the background. I read somewhere that Kinect cameras can be attached to a normal computer, but I don't know about the software or mathematics you'll need.
If you illuminate a straight line with a laser at an angle to the scene, the displacement of the line will correspond exactly to the height of the object. This only gives the height along a single line, subject to the resolution of your camera. If you need a complete 3D scan you'll need to move the laser and take multiple pictures.
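To make that concrete: if the camera looks straight down and the laser comes in at an angle θ from the vertical, an object of height h shifts the line sideways by roughly h·tan(θ), so you can invert that. A tiny sketch of the conversion; turning the pixel displacement into millimetres depends on your camera calibration, which is not shown here:

```csharp
using System;

static class LaserTriangulation
{
    // Convert the sideways displacement of the laser line into an object height.
    // displacementMm: how far the line moved, already converted from pixels to millimetres.
    // laserAngleDeg:  angle of the laser relative to the camera's viewing direction.
    public static double HeightFromDisplacement(double displacementMm, double laserAngleDeg)
    {
        double angleRad = laserAngleDeg * Math.PI / 180.0;
        return displacementMm / Math.Tan(angleRad);
    }
}
```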
A C# reference would be needed for each frame as the streaming video comes in. At the start of the stream the subject would need to turn their head and spin around so that a series of measurements can be captured. This could then be fed to a second camera, such as a Unity 3D virtual camera, that transposes a 3D image over the top of the streamed image. There are a lot of mobile phone apps that can capture 3D objects from a series of still frames; I had one on my Galaxy S6. The Galaxy S6 and later also have a depth chip in their cameras, which was sold to Apple and is used in the iPhone's 3D camera. I have been thinking about how to do this as well and would love to email you about it. Note that it would be a similar concept to facial recognition software.
So, I want to check collision between my player and the tiles. The tiles aren't one big object; each tile is 32x32 pixels and there are about 11 of them, used as a floor the player can walk on.
My question is, how am I going to detect it?
Per-pixel collision doesn't sound very efficient.
If I should use rectangle collision, I'd like an explanation of how to implement it in my code.
Thanks a lot.
I suggest downloading and learning the Platformer Starter Kit, developed by Microsoft.
Download: Starter Kit Download
MSDN discussion: Starter Kit Discussion
The simplest explanation of their solution is that the tiles are kept in a 2D array representing the world. When the player's Update() method is called, a HandleCollisions() function loops through a subset of the tile array looking for possible collisions. For each tile the player might be colliding with, the depth of intersection between the player's bounds and the tile is calculated, and the player's position is adjusted to push it out of the tile.
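As a rough illustration of that idea (a simplified stand-in for the starter kit's intersection-depth helper, not the actual kit code):

```csharp
using System;
using Microsoft.Xna.Framework;

// Simplified version of the starter kit approach: find how deeply two rectangles
// overlap, then push the player out along the shallower axis.
static class TileCollision
{
    public static Vector2 GetIntersectionDepth(Rectangle a, Rectangle b)
    {
        float halfWidthA = a.Width / 2f, halfHeightA = a.Height / 2f;
        float halfWidthB = b.Width / 2f, halfHeightB = b.Height / 2f;

        // Distance between centres, and the minimum distance at which they no longer overlap.
        float distanceX = (a.Left + halfWidthA) - (b.Left + halfWidthB);
        float distanceY = (a.Top + halfHeightA) - (b.Top + halfHeightB);
        float minDistanceX = halfWidthA + halfWidthB;
        float minDistanceY = halfHeightA + halfHeightB;

        if (Math.Abs(distanceX) >= minDistanceX || Math.Abs(distanceY) >= minDistanceY)
            return Vector2.Zero; // no overlap

        return new Vector2(
            distanceX > 0 ? minDistanceX - distanceX : -minDistanceX - distanceX,
            distanceY > 0 ? minDistanceY - distanceY : -minDistanceY - distanceY);
    }

    // Returns the corrected player position after resolving a collision with one tile.
    public static Vector2 ResolveCollision(Rectangle player, Rectangle tile, Vector2 position)
    {
        Vector2 depth = GetIntersectionDepth(player, tile);
        if (depth == Vector2.Zero)
            return position;

        // Push out along whichever axis has the smaller overlap.
        if (Math.Abs(depth.X) < Math.Abs(depth.Y))
            position.X += depth.X;
        else
            position.Y += depth.Y;
        return position;
    }
}
```

In the real kit, HandleCollisions() does something like ResolveCollision for each neighbouring tile every frame, rebuilding the player's bounding rectangle after each adjustment.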
I'm working with the managed C# Wiimote library for a little fun project I'm working on, but I'm having trouble finding a good tutorial on how to calculate how far the Wiimote is from the monitor (i.e. the sensor bar). I want to create a zoom effect where an object grows or shrinks based on how far back you move the Wiimote from the screen.
Can anyone help me with this?
The system is not calibrated, so it isn't able to tell you the actual distance; however, it can tell you the relative distance. The IR camera on the remote works by telling you the size and location of up to four IR light sources on the sensor bar. When the remote moves farther away, the lights get smaller and closer together; when the remote gets closer, the lights get larger and farther apart. I would use the distance between the lights, as the size of the dots only goes from 0-15.
I recommend Brian Peek's Wii library: http://wiimotelib.codeplex.com
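As a rough sketch of the spacing idea (reading the two IR points out of the library's IR state is covered in its samples; the reference spacing is something you would measure yourself at a comfortable distance):

```csharp
using System;
using System.Drawing;

// Uses the spacing between the two sensor-bar IR dots as a relative distance:
// the farther the remote is from the bar, the smaller the spacing appears.
static class WiimoteZoom
{
    // dot1, dot2:       the two IR points reported by the remote (any consistent units).
    // referenceSpacing: the spacing measured at your "normal" viewing distance.
    // Returns a scale factor to apply to the on-screen object.
    public static float ZoomFactor(PointF dot1, PointF dot2, float referenceSpacing)
    {
        float dx = dot2.X - dot1.X;
        float dy = dot2.Y - dot1.Y;
        float spacing = (float)Math.Sqrt(dx * dx + dy * dy);
        if (spacing < 1e-6f)
            return 1f; // dots overlap or aren't tracked; leave the scale alone

        // Spacing is roughly inversely proportional to distance, so a larger spacing
        // means the remote is closer and the object should grow.
        return spacing / referenceSpacing;
    }
}
```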