There is a coordinate origin, for example from a robot system, in a corner of a room in the real world. From this system I get coordinates, with the origin in that corner, that I want to use to place a hologram on the HoloLens 2. However, the HoloLens establishes its own coordinate origin when an app is launched, for example in the middle of the room if I'm standing there.
So I can't use the corner-origin coordinates directly for the HoloLens, because the two systems have different origins.
Can I merge the coordinate origin of the HoloLens with the coordinate origin in the corner?
How can I use the coordinates with the coordinate origin in the corner for HoloLens so that the hologram is displayed in the correct position?
Are there any tools that implement this?
I am using MRTK 2.8 and Unity 2020.3.4.
Thanks for your help!
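For illustration, here is a minimal Unity sketch of the kind of re-basing being asked about. It assumes some Transform (cornerOrigin is a hypothetical name) has already been aligned with the real room corner, for example via a spatial anchor or a printed QR code; tools such as MRTK's spatial anchor support or Azure Spatial Anchors are commonly used for that alignment step, which is not shown here:

    using UnityEngine;

    public class CornerOriginPlacer : MonoBehaviour
    {
        // A Transform that has been aligned with the real room corner
        // beforehand (e.g. via a spatial anchor or QR code).
        public Transform cornerOrigin;

        // The hologram to place.
        public Transform hologram;

        // Places the hologram at a point given in the corner coordinate
        // system. Assumes the robot system is right-handed with Z up while
        // Unity is left-handed with Y up, hence the axis swap; adapt the
        // remapping to your actual setup.
        public void PlaceAtCornerCoordinates(float x, float y, float z)
        {
            Vector3 localPoint = new Vector3(x, z, y); // axis remap (assumption)
            hologram.position = cornerOrigin.TransformPoint(localPoint);
        }
    }

With this, the hologram can stay correct even though the HoloLens's own origin varies between sessions, because everything is expressed relative to the anchored corner.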
Related
Is it possible to get the corner radius of an Android device's screen?
Phone screens don't have sharp corners, but I don't know how to adapt to the different corner shapes of different Android phones...
It would be great if someone knew a C# .NET solution!
I want to make a 2D coordinate graph where you can place points. The graph can be zoomed in and out, and panned in all directions. I'm thinking of making the screen act like a camera looking down on the graph: simply moving the camera lets you see different parts of it. Is that possible to do? If so, how? Thanks!
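Assuming Unity (the engine used in the other posts here; the question doesn't say), one common approach is exactly this camera idea: an orthographic camera whose position gives the pan and whose orthographicSize gives the zoom. A minimal sketch with made-up parameter values:

    using UnityEngine;

    // Attach to an orthographic camera looking down on the graph.
    public class GraphCamera : MonoBehaviour
    {
        public float zoomSpeed = 2f;
        public float minZoom = 1f;
        public float maxZoom = 100f;

        private Camera cam;
        private Vector3 dragStartWorld;

        void Awake()
        {
            cam = GetComponent<Camera>();
        }

        void Update()
        {
            // Zoom: a smaller orthographicSize shows less of the graph.
            float scroll = Input.GetAxis("Mouse ScrollWheel");
            cam.orthographicSize = Mathf.Clamp(
                cam.orthographicSize - scroll * zoomSpeed, minZoom, maxZoom);

            // Pan: keep the point under the cursor fixed while dragging.
            if (Input.GetMouseButtonDown(0))
                dragStartWorld = cam.ScreenToWorldPoint(Input.mousePosition);
            if (Input.GetMouseButton(0))
            {
                Vector3 delta = dragStartWorld
                              - cam.ScreenToWorldPoint(Input.mousePosition);
                transform.position += delta;
            }
        }
    }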
Hi, I am writing code in C#. I use the Kinect SDK V2 and a robot; my robot model, for example, is an IRB 1600. From the Kinect sensor I get a human point cloud whenever a human is detected by the camera, and from the robot I get one position (X-Y-Z) that tells me where the robot is every time I ask. My problem is that the camera and the robot have different coordinate systems: the sensor's origin is at the camera, and the robot's is at its base. I want to create a common coordinate system between them so I can calculate the distance between the human and the robot. Are there any methods to do that? Tutorials?
Thank you
I think you just need to do a calibration...
Somewhere you need to define where your common 3D coordinate system has its (0,0,0), and then re-convert all positions into that coordinate system. It shouldn't be hard to implement.
Once you have both 3D positions in the same coordinate system, a simple calculation in 3D gives you the distance:
Distance = sqrt((X2 - X1)^2 + (Y2 - Y1)^2 + (Z2 - Z1)^2)
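A small C# sketch of both steps, assuming the rigid transform between the robot base and the camera has already been determined during calibration (for example by measuring a few points known in both systems); the names here are illustrative:

    using System;
    using System.Numerics;

    static class Calibration
    {
        // Converts a point from the robot's base frame into the camera
        // frame using a calibrated rigid transform (rotation + translation).
        // Finding that transform is the calibration step itself and is
        // not shown here.
        public static Vector3 RobotToCamera(Vector3 robotPoint,
                                            Matrix4x4 robotToCameraTransform)
        {
            return Vector3.Transform(robotPoint, robotToCameraTransform);
        }

        // Once both points are in the same frame, apply the distance
        // formula from above.
        public static float Distance(Vector3 a, Vector3 b)
        {
            return MathF.Sqrt((b.X - a.X) * (b.X - a.X)
                            + (b.Y - a.Y) * (b.Y - a.Y)
                            + (b.Z - a.Z) * (b.Z - a.Z));
        }
    }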
I'm using the Kinect SDK with C# to develop a game.
First, I want to take some measurements.
I need help finding code that can compute the angle of rotation of the right shoulder around the z-axis.
You can use the BoneOrientation class of the SDK and remove the rotations around y and x, leaving only the rotation around z. Pick the right shoulder to get its orientation and zero out the other components of the matrix so that it represents only the z rotation. If you don't know how to do this, please reply. I assume you already have your feet wet with this.
Cheers.
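For concreteness, a sketch of what that could look like with the Kinect for Windows SDK 1.x (the version that exposes BoneOrientation; types and members as I recall them, so treat the details as an assumption). This is a quaternion-based variant of the same idea rather than zeroing matrix entries, and the extraction assumes a ZYX Euler convention; a different convention gives different values:

    using System;
    using Microsoft.Kinect;

    static class ShoulderAngle
    {
        // Rotation of the right shoulder around the z-axis, in degrees,
        // extracted from the absolute bone orientation quaternion.
        public static double RightShoulderZRotation(Skeleton skeleton)
        {
            Vector4 q = skeleton.BoneOrientations[JointType.ShoulderRight]
                                .AbsoluteRotation.Quaternion;

            // Yaw about z under the ZYX convention.
            double zRad = Math.Atan2(
                2.0 * (q.W * q.Z + q.X * q.Y),
                1.0 - 2.0 * (q.Y * q.Y + q.Z * q.Z));

            return zRad * 180.0 / Math.PI;
        }
    }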
I want to calculate the depth of an image so that I can eliminate far objects from it.
Are there any methods to do this in C# with a single camera?
This website shows how to get a webcam image using C#. However, just like a photo, it is flat so there is no way to distinguish objects at different distances from the camera. In general, with just one camera and a single photo/image, what you want is impossible.
With one or two cameras that snap two images/photos with some distance in between, you can distinguish depth (just like you do using your two eyes). However, this requires very complex mathematics to first identify the objects and second determine their approximate distance from the camera.
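That said, once matching points and calibration are available, the core depth relationship itself is simple similar-triangle geometry; a sketch:

    using System;

    static class StereoDepth
    {
        // Z = f * B / d: focal length f (pixels), baseline B between the
        // two viewpoints (metres), disparity d (pixels) of the matched
        // point between the two images.
        public static double DepthFromDisparity(double focalLengthPx,
                                                double baselineMeters,
                                                double disparityPx)
        {
            return focalLengthPx * baselineMeters / disparityPx;
        }
    }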
Kinect uses an infrared camera that creates a low-resolution image to measure the distance to objects in front of the camera, so that it can distinguish the player from the background. I read somewhere that Kinect cameras can be attached to a normal computer, but I don't know about the software or mathematics you'll need.
If you illuminate a straight line with a laser at an angle to the scene, the displacement of the line will correspond exactly to the height of the object. This only gives the height along a single line, subject to the resolution of your camera. If you need a complete 3D scan you'll need to move the laser and take multiple pictures.
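Under a simplified version of that geometry, the height conversion is a one-liner; a sketch, assuming the displacement has already been converted from pixels to world units and the laser angle is known:

    using System;

    static class LaserTriangulation
    {
        // The laser sheet meets the camera's viewing direction at
        // laserAngleRad; a lateral displacement of the line then
        // corresponds to a height of displacement / tan(angle).
        public static double HeightFromDisplacement(double displacement,
                                                    double laserAngleRad)
        {
            return displacement / Math.Tan(laserAngleRad);
        }
    }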
A C# reference would be needed for each frame as the streaming video comes in. At the start of the stream, the subject would need to turn their head and spin so that a series of measurements can be captured. This could then be fed to a second camera, such as a Unity 3D virtual camera, that transposes a 3D image over the top of the streamed image. There are a lot of mobile phone apps that can capture 3D objects from a series of still frames; I had one on my Galaxy S6. Also, the Galaxy S6 and later have a depth chip in their cameras, which they sold to iPhone; this is used in the Apple 3D camera. I have been thinking about how to do this too and would love to email you about it. Note that it would be a similar concept to facial recognition software.