I want to calculate the depth of an image so that I can eliminate far objects from it.
Are there any methods to do this in C# with a single camera?
This website shows how to get a webcam image using C#. However, just like a photo, it is flat so there is no way to distinguish objects at different distances from the camera. In general, with just one camera and a single photo/image, what you want is impossible.
With one or two cameras that snap two images/photos with some distance in between, you can distinguish depth (just like you do using your two eyes). However, this requires very complex mathematics to first identify the objects and second determine their approximate distance from the camera.
Kinect uses an infrared camera that creates a low-resolution image to measure the distance to objects in front of the camera, so that it can distinguish the player from the background. I read somewhere that Kinect cameras can be attached to a normal computer, but I don't know about the software or mathematics you'll need.
If you illuminate a straight line with a laser at an angle to the scene, the displacement of the line will correspond exactly to the height of the object. This only gives the height along a single line, subject to the resolution of your camera. If you need a complete 3D scan you'll need to move the laser and take multiple pictures.
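To make the triangulation step concrete, here is a minimal sketch of turning the measured line displacement into a height, assuming you have already located the laser line in the image. The calibration constant and the laser angle below are illustrative placeholders you would have to measure for your own setup.

```csharp
using System;

// Minimal laser-line triangulation sketch (assumed setup: laser sheet projected
// at a known angle onto the scene, camera viewing from above). The constants
// below are placeholders for your own calibration.
class LaserTriangulation
{
    // Millimetres of real-world displacement per image pixel, obtained by
    // calibrating against a target of known height (assumed value).
    const double MmPerPixel = 0.25;

    // Angle between the laser sheet and the surface, in degrees (assumed value).
    const double LaserAngleDegrees = 45.0;

    // Convert the sideways shift of the laser line (in pixels) into an object
    // height. With a 45-degree laser the displacement equals the height;
    // the tangent term handles other angles.
    static double HeightFromDisplacement(double displacementPixels)
    {
        double displacementMm = displacementPixels * MmPerPixel;
        return displacementMm * Math.Tan(LaserAngleDegrees * Math.PI / 180.0);
    }

    static void Main()
    {
        Console.WriteLine(HeightFromDisplacement(40)); // e.g. a 40 px shift -> 10 mm height
    }
}
```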
A C# reference would be needed for each frame as the streaming video comes in. At the start of the stream the subject would need to turn their head and spin so that a series of measurements can be captured from them. This could then be fed to a second, virtual camera (such as one in Unity 3D) that transposes a 3D image over the top of the streamed image. There are a lot of mobile phone apps that can capture 3D objects from a series of still frames; I had one on my Galaxy S6. The Galaxy S6 and later also have a depth chip in their cameras, which was sold on for use in Apple's 3D camera. I have been thinking about how to do this as well and would love to email you about it. Note that it would be a similar concept to facial recognition software.
Related
I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing but so far, I was unable to implement these solutions in my project.
When moving my hands across the camera's maximum range, for example left to right or in any other direction, the movement only translates to a portion of the available screen space; in other words, a user will never be able to reach the edges of the screen with their hands. From my understanding, the tracking info provided in millimetres by the camera is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article: "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more effect a small physical movement will have." - This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalising coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in an updated position?
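As a rough illustration of the scaling idea from the article, here is a minimal Unity sketch that multiplies a tracked palm position by a configurable scale factor before using it in the scene. GetRawPalmPositionMm() is a hypothetical placeholder for however you read the palm position (in millimetres) from your Leap provider, and the scale value is an assumption you would tune for your setup.

```csharp
using UnityEngine;

// Minimal sketch: scale a raw hand position (millimetres) into Unity units
// before driving an object in the scene. All names and values are assumptions.
public class HandScaler : MonoBehaviour
{
    // How many Unity units one millimetre of physical movement maps to.
    // Increasing this lets a small physical movement cover more screen space.
    public float unityUnitsPerMillimetre = 0.002f;

    // Optional offset so the interaction volume sits where you want it in the scene.
    public Vector3 sceneOffset = new Vector3(0f, 1f, 0f);

    void Update()
    {
        Vector3 palmMm = GetRawPalmPositionMm();            // placeholder input
        Vector3 scaled = palmMm * unityUnitsPerMillimetre;  // apply your own scale
        transform.position = scaled + sceneOffset;          // drive a proxy object
    }

    // Placeholder: replace with a call into your Leap provider / hand model.
    Vector3 GetRawPalmPositionMm()
    {
        return Vector3.zero;
    }
}
```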
Hi, I am writing code in C#. I use the Kinect SDK v2 and a robot; for example, my robot model is an IRB 1600. From the Kinect sensor I get a human point cloud when a human is detected by the camera, and from the robot I get one position (X-Y-Z) that tells me where the robot is every time I query it. My problem is that the camera and the robot have different coordinate systems: the sensor's origin is at the camera, and the robot's origin is at its base. I want to create a common coordinate system between them for calculating the distance between the human and the robot. Are there any methods or tutorials for doing that?
Thank you
I think you just need to do a calibration...
Somewhere you need to say that your 3D coordinate system starts at (0,0,0) and then re-convert all the positions based on this coordinate system. Shouldn't be hard to implement.
Nevertheless, if you have both 3D positions of the two sensors, a simple calculation in 3D gives you the distance.
Distance = sqrt((X2 - X1)^2 + (Y2 - Y1)^2 + (Z2 - Z1)^2)
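As a minimal sketch of what that calibration and distance calculation could look like in C# (using System.Numerics), assuming you have measured the robot base's pose in the camera frame once; the calibration values below are placeholders:

```csharp
using System;
using System.Numerics;

// Sketch: put a Kinect point and a robot point into one coordinate frame
// and compute the distance. The calibration values are placeholders: you
// would measure the robot base's pose in the camera's coordinate system
// (or vice versa) once, then reuse it for every frame.
class FrameCalibration
{
    static void Main()
    {
        // Assumed calibration: robot base expressed in the camera frame.
        Vector3 robotBaseInCamera = new Vector3(1.20f, -0.30f, 2.50f);                 // metres
        Matrix4x4 robotBaseRotation = Matrix4x4.CreateRotationY((float)(Math.PI / 2)); // example

        // A point reported by the robot, in the robot's own base frame.
        Vector3 robotPointInRobotFrame = new Vector3(0.40f, 0.10f, 0.60f);

        // Transform it into the camera frame: rotate, then translate.
        Vector3 robotPointInCamera =
            Vector3.Transform(robotPointInRobotFrame, robotBaseRotation) + robotBaseInCamera;

        // A point from the Kinect point cloud (already in the camera frame).
        Vector3 humanPointInCamera = new Vector3(0.90f, -0.20f, 2.10f);

        // Both points now share a coordinate system, so the distance is just
        // Distance = sqrt((X2-X1)^2 + (Y2-Y1)^2 + (Z2-Z1)^2).
        float distance = Vector3.Distance(humanPointInCamera, robotPointInCamera);
        Console.WriteLine($"Human-robot distance: {distance:F3} m");
    }
}
```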
There are tools like digital calipers and linear encoders that measure lengths. They are basically rulers that use magnetic strips: they simply count the magnetic stripes and calculate the distance.
I want to make it optical by using a webcam instead of magnetic stripes.
Imagine a ruler with a webcam pointed at it. Don't think about the numbers or anything; just imagine a ruler with vertical lines. The webcam is placed very close to the ruler and the image is very clear.
As the webcam moves, the vertical lines will travel across the video.
What I want to do is count these lines at the centre of the video so I can calculate how far we have gone.
I googled every word I could think of to find a starting point, but had no luck.
Any suggestions, please? I need to use the Windows platform for this project. C# or VB.NET are welcome.
best...
Try this; it's not been updated in a while but the concepts are all there.
http://reprap.org/wiki/CamRap
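As a rough illustration of the counting idea, here is a minimal sketch that watches the brightness at the centre of each incoming frame and counts dark-to-light transitions over time, i.e. how many ruler lines have passed the centre. Frame capture (DirectShow, AForge, Media Foundation, etc.) is left out, and the "one byte per pixel, row-major grayscale" layout is an assumption about your capture pipeline.

```csharp
using System;

// Sketch: count ruler lines passing the centre of the video over time.
class StripeCounter
{
    const byte Threshold = 96;     // below this the centre is "on a line" (assumed value)
    bool onLine;                   // were we on a line in the previous frame?
    public int LinesPassed { get; private set; }

    // Call this once per captured frame.
    public void ProcessFrame(byte[] grayPixels, int width, int height)
    {
        // Average a small patch at the centre to reduce noise.
        int cx = width / 2, cy = height / 2, sum = 0, count = 0;
        for (int y = cy - 2; y <= cy + 2; y++)
            for (int x = cx - 2; x <= cx + 2; x++)
            {
                sum += grayPixels[y * width + x];
                count++;
            }
        bool dark = (sum / count) < Threshold;

        if (!dark && onLine) LinesPassed++;   // a line has just left the centre
        onLine = dark;
    }
}
```

LinesPassed multiplied by the physical line spacing of your ruler gives the distance travelled, although you would still need extra logic to detect the direction of movement and reversals.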
As you can see, I want to do a senior project about soccer player tracking with GPS, to show the path a player has taken, in real time.
I have already studied basic GPS functions in C#, but I'm really having problems with how to draw paths on the map or picture I want to use once I have the data from the GPS.
The hardware part is already finished, but I'm stuck on how to take the data from the GPS and draw the player's path.
I appreciate any help (sorry for my bad English). Thank you very much.
Link to a picture of my project design:
http://image.ohozaa.com/view2/weK9gVKBzGZqRxKC
Just think about what you're doing.
GPS data (from each player) is received as a sequence of points (Latitude/Longitude?).
Convert those points to X/Y coordinates for your football field image
Use a graphics API (such as GDI / System.Drawing) to draw lines between subsequent points
If you're using C# you might save time and trouble by using WinForms and subclassing Control and painting directly to the control's surface. You'll need to store a list of all the recent points for each player (because you'll need to constantly repaint the control).
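Here is a minimal WinForms sketch of that idea: convert each (lat, lon) fix to pixel coordinates on the field image and repaint the path with System.Drawing. The corner coordinates are placeholders for your own survey of the pitch, and the field is assumed small enough that a simple linear mapping is good enough.

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

// Sketch: a control that accumulates GPS fixes for one player and redraws the path.
public class FieldControl : Control
{
    // Assumed: lat/lon of two opposite corners of the pitch (placeholder values).
    const double LatTop = 13.7565, LatBottom = 13.7555;
    const double LonLeft = 100.5010, LonRight = 100.5020;

    readonly List<PointF> path = new List<PointF>();

    public FieldControl()
    {
        DoubleBuffered = true; // avoid flicker on frequent repaints
    }

    // Call this whenever a new GPS fix arrives for the player.
    public void AddFix(double lat, double lon)
    {
        float x = (float)((lon - LonLeft) / (LonRight - LonLeft) * Width);
        float y = (float)((LatTop - lat) / (LatTop - LatBottom) * Height);
        path.Add(new PointF(x, y));
        Invalidate(); // trigger a repaint that includes the new point
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        if (path.Count > 1)
            using (var pen = new Pen(Color.Red, 2f))
                e.Graphics.DrawLines(pen, path.ToArray());
    }
}
```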
Note that the geolocation features in .NET won't help you here unless all of your football players are going to be carrying laptops strapped to their backs. You'd want small GPS trackers attached to each player along with a small radiotransmitter that sends the data. An easy way to do this is with a commodity Bluetooth GPS unit, but I don't know if Bluetooth can support that many transceivers in such a small space, or even if the signal will reach from one end of the field to another. The most expensive way is to write a phone app and have each player carry a smartphone that sends geolocation data via a 3G or Wifi connection.
Note that GPS units tend to have a usable accuracy of about 5m (maybe 2.5m on a good day), and are useless indoors. Then consider the 5 minutes it takes for them to secure a good lock in the first place (mobile phones have quick geolocation because they use assistance from mobile phone masts). Football fields aren't very big, and even with 2.5m accuracy the data isn't going to be very useful.
In real sports they don't use GPS for this reason. Instead they use higher-precision radio units and specialist transmitter/receiver units placed around the pitch. An alternative is visual tracking, but that's an immature science (Turing help you if two or more players wearing the same team colour collide with each other).
Looking at the picture you provided I'd say something like this is feasible with a WPF application using the Canvas control and the Line class. You'd have to convert your GPS data to (x,y)-coordinates where the origin is located at the upper left corner of the soccer field. Then you could connect subsequent points using line segments.
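For the WPF variant, the drawing part is little more than appending Line elements to the Canvas; the coordinate conversion would be the same as in the WinForms sketch above. A minimal sketch, with names chosen for illustration:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

// Sketch: once GPS fixes have been converted to (x, y) canvas coordinates,
// each new fix just appends a Line segment connecting it to the previous one.
public static class PathDrawer
{
    public static void AppendSegment(Canvas fieldCanvas, Point previous, Point current)
    {
        var segment = new Line
        {
            X1 = previous.X, Y1 = previous.Y,
            X2 = current.X,  Y2 = current.Y,
            Stroke = Brushes.Red,
            StrokeThickness = 2
        };
        fieldCanvas.Children.Add(segment);
    }
}
```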
Hello
I saw that there are some laptops with 3D support. I know that they use polarization for each eye. How can I write a program in C# that shows a simple 3D object on such a system? I don't want to show a 3D object in a 2D medium (a perspective view), but rather show a 3D object similar to what you see in a 3D film through 3D glasses.
Any suggestion for further study is highly appreciated.
Regards
What you need to do is display two images, one for each eye. Each image is a perspective view, but taken from two slightly different viewpoints, separated by about the distance between your eyes.
When viewed through polarising or, more likely, LCD shutter glasses, you get the illusion of 3D objects.
In this case each eye's view is presented on the screen alternately and a signal is sent to the glasses to become clear or opaque so that the correct image is seen in each eye.
For a passive system you have to use two projectors for the left and right eye images and make sure that they are perfectly aligned so the images overlap correctly. If you get it wrong you won't get a very good 3D effect.
In both cases you need to create two views of your model and render each one for each frame you display. I used to work in this area and a while back wrote a blog post which included an overview on how we did stereo systems.
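As a minimal sketch of the two-viewpoint idea, here is how the left and right view matrices could be built in C# with System.Numerics. The eye separation is an assumed value, and how the two rendered images are actually presented (page flipping for shutter glasses, two projector outputs, quad-buffered stereo in OpenGL/Direct3D) depends on your graphics API and is not shown.

```csharp
using System;
using System.Numerics;

// Sketch: build one view matrix per eye by shifting the camera sideways
// along its right vector by half the eye separation.
class StereoCamera
{
    static void Main()
    {
        Vector3 cameraPos = new Vector3(0, 1.6f, 5);   // example viewpoint
        Vector3 target    = Vector3.Zero;              // looking at the model
        Vector3 up        = Vector3.UnitY;
        float eyeSeparation = 0.065f;                  // ~65 mm, an assumed value

        // Right vector of the camera, used to offset each eye sideways.
        Vector3 forward = Vector3.Normalize(target - cameraPos);
        Vector3 right   = Vector3.Normalize(Vector3.Cross(forward, up));
        Vector3 offset  = right * (eyeSeparation / 2);

        Matrix4x4 leftView  = Matrix4x4.CreateLookAt(cameraPos - offset, target - offset, up);
        Matrix4x4 rightView = Matrix4x4.CreateLookAt(cameraPos + offset, target + offset, up);

        // Each frame you would render the scene twice, once with each view
        // matrix, and hand the two images to the stereo display path.
        Console.WriteLine(leftView);
        Console.WriteLine(rightView);
    }
}
```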
I think that you need to program directly using OpenGL or Direct3D. For the screen to display the polarized views necessary to achieve the 3D effect, the graphics card will need to know what it has to display. See here for some ideas.