I am trying to track several joints at the same time using the Kinect SDK and the C# sample code from the Channel9.msdn website. I am following the same example code they have there, but I am unable to get one of my image representations to move. The two ellipses that represent the hands track my movements, but the headImage that represents the head joint jumps to the top-left corner of the window and doesn't move. If I change the joint it tracks to another joint, such as one of the ones represented by the ellipses (which I know is tracking), the headImage still goes to the top-left corner of the window. Why can I track the hand joints with the ellipses, which follow my movements, while the headImage element does not move no matter which joint I assign to it?
Update: It seems that when I remove the image object from the .xaml window and replace it with another ellipse object, all the ellipses start moving; that is, the ellipse representing the joint that did not move before now moves and tracks correctly. It must be a problem with using that particular image object (it is the same head image used in the Channel9.msdn tutorial).
In the official Microsoft Kinect for Windows SDK v1.6 Toolkit examples, have a look at the SkeletonBasics project. It shows you how to track the entire skeleton and draw each of the joints, along with connecting lines. Just remove what you don't want.
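For the symptom in the question, the key point is that an Image is positioned on a Canvas exactly the same way as an Ellipse. Below is a minimal sketch of that idea, not the SkeletonBasics code itself (the sample renders through a DrawingContext); `sensor` is the active KinectSensor and the element names are placeholders.

```csharp
// Requires: using Microsoft.Kinect; using System.Windows; using System.Windows.Controls; using System.Windows.Shapes;
// Hedged sketch, not the toolkit sample verbatim.
private void PositionElement(FrameworkElement element, Joint joint)
{
    // Map the skeleton-space joint (metres) to colour-image pixels.
    ColorImagePoint p = sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
        joint.Position, ColorImageFormat.RgbResolution640x480Fps30);

    // An Image is positioned exactly like an Ellipse: by its top-left corner on the Canvas.
    Canvas.SetLeft(element, p.X - element.ActualWidth / 2);
    Canvas.SetTop(element, p.Y - element.ActualHeight / 2);
}

private void DrawBone(Joint from, Joint to, Line bone)
{
    ColorImagePoint p1 = sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
        from.Position, ColorImageFormat.RgbResolution640x480Fps30);
    ColorImagePoint p2 = sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
        to.Position, ColorImageFormat.RgbResolution640x480Fps30);
    bone.X1 = p1.X; bone.Y1 = p1.Y;
    bone.X2 = p2.X; bone.Y2 = p2.Y;
}

// Per skeleton frame, for example:
// PositionElement(headImage, skeleton.Joints[JointType.Head]);
// PositionElement(leftHandEllipse, skeleton.Joints[JointType.HandLeft]);
// DrawBone(skeleton.Joints[JointType.Head], skeleton.Joints[JointType.ShoulderCenter], neckLine);
```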
I suggest looking at the Toolkit examples mentioned by @Evil Closet Monkey, and perhaps also at the examples provided with Kinect Toolbox (Kinect Toolbox page); the examples provided there are simple and good, too.
Related
I want to make a 2D coordinate graph that you can place points on. The graph can be zoomed in and out and panned in all directions. I'm thinking of treating the screen like a camera looking down on the graph: simply moving the camera lets you see different parts of it. Is that possible to do? If so, how? Thanks!
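There is no single required approach, but one common pattern is to keep all points in graph (world) coordinates and convert to screen coordinates through a pan offset and a zoom factor, which is essentially the "camera" described above. A minimal sketch, with all names being my own:

```csharp
// Sketch of the "camera over a graph" idea: points live in world coordinates and are
// converted to screen pixels on the fly. Names and defaults are illustrative only.
public class GraphCamera
{
    public double OffsetX, OffsetY;  // world coordinate shown at the screen's top-left
    public double Zoom = 1.0;        // pixels per world unit

    public (double X, double Y) WorldToScreen(double wx, double wy)
        => ((wx - OffsetX) * Zoom, (wy - OffsetY) * Zoom);

    public (double X, double Y) ScreenToWorld(double sx, double sy)
        => (sx / Zoom + OffsetX, sy / Zoom + OffsetY);

    // Panning just shifts the offset by the drag distance converted to world units.
    public void Pan(double dxPixels, double dyPixels)
    {
        OffsetX -= dxPixels / Zoom;
        OffsetY -= dyPixels / Zoom;
    }
}
```

A mouse drag would call Pan with the pixel delta, and the mouse wheel would change Zoom (adjusting the offset as well if you want to zoom around the cursor position).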
I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing, but so far I have been unable to implement these solutions in my project.
When I move my hands across the maximum range of the camera, for example left to right or in any other direction, the motion only translates to a portion of the available screen space; in other words, a user will never be able to reach the edges of the screen with their hands. From my understanding, the tracking data provided in millimetres by the camera is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article: "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more effect a small physical movement will have." This is exactly what I want to do in Unity.
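A minimal Unity sketch of that scale-factor idea, using the raw LeapCSharp Controller rather than the prefab pipeline; the field names and the default factor below are my own assumptions to tune:

```csharp
using Leap;
using UnityEngine;

// Illustrative only: moves a placeholder "hand cursor" object by the scaled palm position.
public class ScaledHandCursor : MonoBehaviour
{
    public Transform handCursor;              // object that should mirror the hand
    public float unitsPerMillimetre = 0.01f;  // raise this to amplify small hand movements

    private Controller controller = new Controller();

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) return;

        Vector palm = frame.Hands[0].PalmPosition;   // millimetres, Leap coordinate space

        // Scale millimetres into scene units; a bigger factor means a small physical
        // movement covers more of the scene. Axis directions may also need flipping
        // (e.g. negating z), since Leap and Unity use different handedness.
        handCursor.position = new Vector3(palm.x, palm.y, palm.z) * unitsPerMillimetre;
    }
}
```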
Additionally, even after what I think was a successful attempt at normalising the coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in the updated position?
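As for the normalised values: the InteractionBox gives components in the 0..1 range, so "where to pass them" is simply wherever you set the position of the object representing the hand, after multiplying by the size of the area the hand should cover. A hedged sketch (it would sit inside Update of a script like the one above; the cursor name is mine):

```csharp
// Normalised (0..1) palm position mapped onto the screen or scene area of your choice.
Frame frame = controller.Frame();
if (frame.Hands.Count > 0)
{
    InteractionBox box = frame.InteractionBox;
    Vector normalised = box.NormalizePoint(frame.Hands[0].PalmPosition, true);

    // Map 0..1 into screen pixels (for a 2D cursor) or into scene units (for a Transform).
    float screenX = normalised.x * Screen.width;
    float screenY = normalised.y * Screen.height;
    // e.g. cursorRectTransform.position = new Vector3(screenX, screenY, 0f);
}
```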
I am developing a C# application and I am drawing two components programmatically. However, I would like to connect them (using a line) so that when I move one component, I can change the coordinates of the other object.
For example: suppose there are two squares on a Canvas and they are connected by a line. When I move one square, I also want to change the coordinates of the other square, so that the other square appears to move relative to the first one.
I would really appreciate it if someone could help me out with the second part. I think it can be done using dependency properties, but I wasn't able to find an example.
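Dependency properties plus bindings would work (for example, binding the Line's X1/Y1/X2/Y2 to the squares' Canvas.Left/Canvas.Top), but a plain event-driven version may be easier to start with. A rough sketch, where `square1`, `square2` and `connector` are placeholders for Canvas children whose Canvas.Left/Canvas.Top are already set:

```csharp
// Requires: using System.Windows.Controls; using System.Windows.Shapes;
// Called whenever the first square is dragged or otherwise repositioned.
private void MoveSquare1(double newLeft, double newTop)
{
    double oldLeft = Canvas.GetLeft(square1);
    double oldTop = Canvas.GetTop(square1);

    Canvas.SetLeft(square1, newLeft);
    Canvas.SetTop(square1, newTop);

    // Shift the second square by the same delta so it stays relative to the first.
    Canvas.SetLeft(square2, Canvas.GetLeft(square2) + (newLeft - oldLeft));
    Canvas.SetTop(square2, Canvas.GetTop(square2) + (newTop - oldTop));

    // Keep the connecting line's endpoints at the squares' centres.
    connector.X1 = Canvas.GetLeft(square1) + square1.Width / 2;
    connector.Y1 = Canvas.GetTop(square1) + square1.Height / 2;
    connector.X2 = Canvas.GetLeft(square2) + square2.Width / 2;
    connector.Y2 = Canvas.GetTop(square2) + square2.Height / 2;
}
```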
I'm new to EMGU and image processing, and I have a C# project that needs to detect a transparent object, specifically a moth's wing inside a plastic bottle. Here are some examples.
I tried using YCbCr in EMGU, but I cannot detect the wing or differentiate it from the background.
Another thing is that I tried enclosing it in a "controlled environment" (inside a box where no outside light can come in) and used an LED backlight. Is this advisable, or will light from the environment (fluorescent light) do? Will this affect the detection rate? Does lighting play a factor in this kind of problem?
This is the idea of my project and what I use. Basically, my project is just a proof of concept about detecting a transparent object in an image using a webcam (Logitech C910). It is based on an old industrial problem here in our country, where a bottling plant overstocked its plastic bottles and they became contaminated before use. A moth body and a moth wing are the contaminants that were given to us. The project is also meant to see whether a webcam can suffice as an alternative to an industrial camera for this application.
I place the bottle inside a controlled environment and use LED lights as a backlight (this is just made from a prototyping board and high-intensity LEDs diffused with bond paper). The object (moth wing) is placed inside a plastic bottle with water and will be tested in two parts: in the first part the bottle is not moving, and in the second part the bottle moves on a conveyor, but in the same controlled environment. I have done all the required hardware, so that is not an issue anymore. The moth body is (I think) manageable to detect, but the moth wing has left me scratching my head.
Any help would be very much appreciated. Thank you in advance!
Consider using as many visual cues as possible:
blur/focus
shape - you can use active contours or findContours() on a clean image (see the EmguCV sketch after this list)
location, intensity, and texture in a GrabCut framework
you can try IR illumination in case moth and glass react to it differently
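As a starting point for the contour idea, here is a rough EmguCV sketch (3.x/4.x-style API); the blur, Otsu threshold, and area filter are guesses you would need to tune on the actual backlit images:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// Illustrative file name; in practice this would be a frame from the webcam.
Mat src = CvInvoke.Imread("bottle.png", ImreadModes.Color);

Mat gray = new Mat();
CvInvoke.CvtColor(src, gray, ColorConversion.Bgr2Gray);
CvInvoke.GaussianBlur(gray, gray, new System.Drawing.Size(5, 5), 0);

// With a diffused LED backlight, the wing should show up as a slightly darker region;
// Otsu picks a global threshold automatically.
Mat binary = new Mat();
CvInvoke.Threshold(gray, binary, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);

using (var contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(binary, contours, null, RetrType.External,
                          ChainApproxMethod.ChainApproxSimple);
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > 50)   // ignore small specks; tune for your resolution
            CvInvoke.DrawContours(src, contours, i, new MCvScalar(0, 0, 255), 2);
    }
}
```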
You should try to adjust brightness/contrast and color balance.
Another idea is to use an automatic threshold such as Sauvola or other auto local thresholds. It will give you interesting results, such as this one (I directly converted the image to grayscale):
I did these tests very quickly using ImageJ.
Click the link to the image to see which image corresponds to which binarization algorithm.
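If you want to try the same kind of local thresholding inside your C#/EMGU project rather than ImageJ, note that Sauvola itself is not part of the basic EmguCV API; `CvInvoke.AdaptiveThreshold` is a related local-threshold approach you could experiment with. A sketch, with the block size and constant chosen arbitrarily:

```csharp
// Local (adaptive) threshold as a rough stand-in for Sauvola-style binarization.
// 'src' is the BGR input Mat from the earlier sketch.
Mat gray = new Mat();
CvInvoke.CvtColor(src, gray, ColorConversion.Bgr2Gray);

Mat localBinary = new Mat();
CvInvoke.AdaptiveThreshold(gray, localBinary,
    255,                                 // value assigned to pixels passing the test
    AdaptiveThresholdType.GaussianC,     // threshold from a local Gaussian-weighted mean
    ThresholdType.Binary,
    35,                                  // neighbourhood size (odd), relative to wing size
    5);                                  // constant subtracted from the local mean
```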
I am new to C# programming and I am following the Kinect for Windows SDK tutorials from Channel9.msdn and the Microsoft Kinect Toolkit examples. My question is: how do I obtain and display the joint angles for each joint?
Also, how do I display the coordinates of the joints?
I want to have these displayed when I am running the skeletal tracking.
Thanks in advance for any help.
A quick internet search...
How to calculate an angle from three points?
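Applied to Kinect joints, that boils down to taking the two vectors that meet at the joint of interest and using the dot product. A minimal sketch for the SDK v1 skeleton data (the method name and the elbow example are mine):

```csharp
// Requires: using System; using Microsoft.Kinect;
// Angle at the middle joint (e.g. the elbow) formed by the two adjacent "bones".
private double AngleBetweenJoints(Joint a, Joint vertex, Joint b)
{
    // Vectors from the vertex joint out to the two neighbouring joints (skeleton space, metres).
    double v1x = a.Position.X - vertex.Position.X;
    double v1y = a.Position.Y - vertex.Position.Y;
    double v1z = a.Position.Z - vertex.Position.Z;
    double v2x = b.Position.X - vertex.Position.X;
    double v2y = b.Position.Y - vertex.Position.Y;
    double v2z = b.Position.Z - vertex.Position.Z;

    double dot = v1x * v2x + v1y * v2y + v1z * v2z;
    double len1 = Math.Sqrt(v1x * v1x + v1y * v1y + v1z * v1z);
    double len2 = Math.Sqrt(v2x * v2x + v2y * v2y + v2z * v2z);

    // cos(theta) = dot / (|v1| * |v2|); clamp to [-1, 1] for floating-point safety.
    double cos = Math.Max(-1.0, Math.Min(1.0, dot / (len1 * len2)));
    return Math.Acos(cos) * 180.0 / Math.PI;   // degrees
}

// Example: right elbow angle
// double elbow = AngleBetweenJoints(skeleton.Joints[JointType.ShoulderRight],
//                                   skeleton.Joints[JointType.ElbowRight],
//                                   skeleton.Joints[JointType.WristRight]);
```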
For displaying the text, you can either draw it directly to the display canvas or create one (or more) TextBlocks whose values you update. I would probably suggest the TextBlocks, simply because they would be easier to work with.
Information on how to work with TextBlocks can be found in the MSDN development documentation, and all around.
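Putting the two together, a per-frame update might look roughly like this (the TextBlock names are placeholders for elements you would add to the window's XAML, and `AngleBetweenJoints` is the helper sketched above):

```csharp
// Inside the skeleton-frame handler, after a tracked skeleton has been obtained.
Joint head = skeleton.Joints[JointType.Head];
headCoordsText.Text = string.Format("Head: X={0:F2} Y={1:F2} Z={2:F2}",
    head.Position.X, head.Position.Y, head.Position.Z);

elbowAngleText.Text = string.Format("Right elbow: {0:F1} deg",
    AngleBetweenJoints(skeleton.Joints[JointType.ShoulderRight],
                       skeleton.Joints[JointType.ElbowRight],
                       skeleton.Joints[JointType.WristRight]));
```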