Visualize x-y-z position - C#

I use a Kinect SDK V2 sensor to capture a point cloud. The output of my program is a double array that contains X-Y-Z position points for every frame ((0,0,0) is the center of the sensor). Now I would like to visualize the data, for example for one frame. Is there any fast/easy way just to visualize the data? Any .dll?
P.S.: I tried Unity, but it was too difficult and time-consuming for me. I did not include code because the question is quite general (if code would be helpful I could upload parts).
Thank you
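One low-effort sketch (assuming the double array is laid out as x, y, z triplets): write one frame to an ASCII .ply file, which free viewers such as MeshLab or CloudCompare can open.
using System.Globalization;
using System.IO;
using System.Text;

static void WritePly(double[] points, string path)
{
    int count = points.Length / 3;
    var sb = new StringBuilder();
    sb.AppendLine("ply");
    sb.AppendLine("format ascii 1.0");
    sb.AppendLine($"element vertex {count}");
    sb.AppendLine("property float x");
    sb.AppendLine("property float y");
    sb.AppendLine("property float z");
    sb.AppendLine("end_header");
    for (int i = 0; i < count; i++)
    {
        // One "x y z" line per point, with invariant formatting for the decimals.
        sb.AppendLine(string.Format(CultureInfo.InvariantCulture, "{0} {1} {2}",
            points[3 * i], points[3 * i + 1], points[3 * i + 2]));
    }
    File.WriteAllText(path, sb.ToString());
}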

Related

How to access the HoloLens2 Origin SpatialCoordinateSystem

The problem:
I've been working on creating a 3D position in worldspace based on a 2D RGB face detection, similar to this Microsoft example. I am using Unity 2020.3.16f1, MRTK 2.8.2, and C# for the Unity scripts. I have been able to convert the C++ code shown in the link to C# with a lot of success. One final issue is accessing the HoloLens 2 origin SpatialCoordinateSystem to be used in the transform between the camera's 2D coordinate system and the 3D worldspace system.
The SO question at this link asks a very similar question, and I have tried to use SpatialLocator.GetDefault().CreateStationaryFrameOfReferenceAtCurrentLocation().CoordinateSystem as the answers suggest. I call this in Unity's "Awake" method to ensure it is set as early as possible, as shown below.
private void Awake()
{
    worldSpatialCoordinateSystem = SpatialLocator.GetDefault().CreateStationaryFrameOfReferenceAtCurrentLocation().CoordinateSystem;
}
The problem is that, if the user's headset is moving while the application starts, I notice an offset in the 3D locations commensurate with the direction/position of the head when the application was starting. I have narrowed the problem down to the fact that the HL2 and Unity set an origin SpatialCoordinateSystem just before the method in Awake is called, which accounts for the offset between what I expect and what I see.
What I've tried:
I have tried some of the other solutions listed here as well. I cannot use UnityEngine.Windows.WebCam.PhotoCapture because of the way I create still image captures, and (SpatialCoordinateSystem)Marshal.GetObjectForIUnknown(WorldManager.GetNativeISpatialCoordinateSystemPtr()) appears to be deprecated and unusable. Finally, I tried CreateStationaryFrameOfReferenceAtCurrentLocation(Vector3, Quaternion), using the inverse of the current Camera.main position and rotation in the hope of compensating for the offset, but it did not appear to work (NumericsConversionExtensions is the UnityEngine-to-System.Numerics converter found here). That code is below.
worldSpatialCoordinateSystem = SpatialLocator.GetDefault().CreateStationaryFrameOfReferenceAtCurrentLocation(
    NumericsConversionExtensions.ToSystem(Camera.main.transform.position * -1),
    NumericsConversionExtensions.ToSystem(Quaternion.Inverse(Camera.main.transform.rotation))).CoordinateSystem;
My question:
Is there either another way to access the origin spatial coordinates or possibly to compensate for the offset when the user is moving their head before Awake is called?
I spent three days working on a solution, and then found one an hour after asking on SO. For those who come here, use the code below, originally found here.
using Microsoft.MixedReality.OpenXR;
worldSpatialCoordinateSystem = PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;
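For context, a minimal sketch of how this might slot into the original Awake method (assuming the Microsoft.MixedReality.OpenXR plugin is installed and the build targets UWP; the class name is a placeholder):
using Microsoft.MixedReality.OpenXR;
using UnityEngine;
using Windows.Perception.Spatial;

public class CoordinateSystemProvider : MonoBehaviour
{
    private SpatialCoordinateSystem worldSpatialCoordinateSystem;

    private void Awake()
    {
        // Unlike a stationary frame created during startup, this queries the scene
        // origin that OpenXR is actually using, so no head-motion offset appears.
        worldSpatialCoordinateSystem =
            PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;
    }
}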

Is it possible to create isosurfaces using Oxyplot?

I'm using Oxyplot HeatMapSeries for representing some graphical data.
For a new application I need to represent the data with isosurfaces, something looking like this:
Some ideas around this:
I know the ContourSeries can draw the isolines, but I can't find any option that fills the gaps between the lines. Does this option exist?
I know the HeatMapSeries can be shown under the ContourSeries, so I can get a similar result, but it does not fit our needs.
Another option would be limiting the HeatMapSeries colours and eliminating the interpolation. Is this possible?
If anyone has another approach to the solution I will hear it!
Thanks in advance!
I'm evaluating whether OxyPlot will meet my needs, and this question interests me. From looking at the ContourSeries source code, it appears to only find and render the contour lines, not fill the area between them. Looking at AreaSeries, I don't think you could just feed it contours, because it expects two sets of points which, when the ends are connected, form a simple closed polygon. The best guess I have is "rasterizing" your data, so that you round each data point to its nearest contour level, then plot a heatmap of that rasterized data under the contours. The ContourSeries appears to calculate a level step that gives 20 levels across the data by default.
My shortcut for doing the rasterizing based on a step value is to divide the data by the level step you want, then truncate the number with Math.Floor.
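A minimal sketch of that shortcut (assuming data is your double[,] grid and levelStep matches the step used by the ContourSeries; multiplying back returns the value to data units):
double levelStep = 0.5; // placeholder: the contour step you want
for (int i = 0; i < data.GetLength(0); i++)
    for (int j = 0; j < data.GetLength(1); j++)
        data[i, j] = Math.Floor(data[i, j] / levelStep) * levelStep;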
Looking at HeatMapSeries, it looks like you can try turning interpolation off, using the HeatMapRenderMethod.Rectangles render method, or supplying a LinearColorAxis with fewer steps and letting the rendering do the rasterization, as sketched below. The palettes available for a LinearColorAxis can be seen in the OxyPalettes source: BlueWhiteRed31, Hot64, Hue64, BlackWhiteRed, BlueWhiteRed, Cool, Gray, Hot, Hue, HueDistinct, Jet, and Rainbow.
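A hedged sketch of those options together (the data extents and the 8-step palette are placeholder choices):
using OxyPlot;
using OxyPlot.Axes;
using OxyPlot.Series;

double[,] data = LoadGrid(); // hypothetical helper: your 2D grid of values
var model = new PlotModel { Title = "Banded heat map" };

// A palette with few steps makes the color axis quantize values into bands.
model.Axes.Add(new LinearColorAxis
{
    Position = AxisPosition.Right,
    Palette = OxyPalettes.Jet(8) // 8 discrete bands instead of a smooth gradient
});

model.Series.Add(new HeatMapSeries
{
    X0 = 0, X1 = 10, Y0 = 0, Y1 = 10, // placeholder data extents
    Data = data,
    Interpolate = false,                          // no smoothing between cells
    RenderMethod = HeatMapRenderMethod.Rectangles // draw each cell as a rectangle
});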
I'm not currently in a position to run OxyPlot to test things, but I figured I would share what I could glean from the source code and limited documentation.

How to Extract Front Shape in an Image in C#

Is it possible to extract any shape that's in front of an image?
Let's say we have an image with two objects, one in front and the other behind, on a blank or transparent background.
Can we extract the one in front and place it in a new image?
Can this be done by detecting the edge of the frontal shape and then cropping it?
This article does something close to what I want:
Cropping Particular Region In Image Using C#
but I want to do it fully automatically.
Any help would be highly appreciated.
Thanks in advance.
I don't think you can do this fully automatically; however, there may be some semi-automated ways. At the least, you need some prior information, such as how far away your object can be placed. Here are some of my suggestions.
First way (assuming you have experience implementing academic papers and some prior information about the depth at which the object sits):
- Download a "scene - depth images database" from the internet
- Get the average value of the database
- Query the K nearest neighbors of the input image according to the GIST of the scene [1]
- Apply SIFT flow to align the database scenes to the input scene
- Infer the depth
- Remove a certain depth range from the image
It's possible to infer a rough depth map for an input image. Using this, you would infer the depth map of the input image and then remove the depth range that includes your object. You can check the paper [2] for a more detailed explanation.
Example Depth Map from http://www.the.me/wp-content/uploads/2013/09/z-depth_map_expanding_exif_more_powerful_post-processing_2n.jpg
Second way (assumption: human input is allowed at the end of the algorithm):
- Segment the image (you can find a state-of-the-art algorithm with a little searching)
- Select the contour that you want to remove.
Example Segmented Image from http://vision.ece.ucsb.edu/segmentation/edgeflow/images/garden_edge.gif
References:
[1] Aude Oliva, "Gist of the Scene."
[2] Karsch, K.; Liu, C.; Kang, S.B., IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2014.
OpenCV gives you an option to extract the contours of objects. So convert your image to grayscale and give it to OpenCV to detect all the contours in your image, then select the contours that match your requirement.
Since your project is in C#, you can take a look at Emgu CV, which is a cross-platform .NET wrapper for OpenCV. Please refer to the URL below, where you can download the examples for Emgu CV.
http://sourceforge.net/projects/emguexample/?source=recommended
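A minimal sketch of that pipeline with Emgu CV (assuming a 3.x-style API; the file names are placeholders, Otsu thresholding is my assumption for separating the shape from a blank background, and keeping the largest contour is my heuristic for "the frontal shape"):
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

Mat src = CvInvoke.Imread("input.png", ImreadModes.Color); // placeholder path
Mat gray = new Mat();
CvInvoke.CvtColor(src, gray, ColorConversion.Bgr2Gray);

// Binarize so the foreground shape stands out from the blank background.
Mat binary = new Mat();
CvInvoke.Threshold(gray, binary, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);

// Find all external contours, then keep the largest as the frontal shape.
var contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binary, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);

double maxArea = 0;
Rectangle bounds = Rectangle.Empty;
for (int i = 0; i < contours.Size; i++)
{
    double area = CvInvoke.ContourArea(contours[i]);
    if (area > maxArea)
    {
        maxArea = area;
        bounds = CvInvoke.BoundingRectangle(contours[i]);
    }
}

// Crop the frontal shape into a new image.
Mat front = new Mat(src, bounds);
CvInvoke.Imwrite("front.png", front);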

Calculating the Area of the Hand using Depth Data

Using a Kinect sensor, I am attempting to write an algorithm to detect a clenched fist. I am trying to achieve this by calculating the area occupied by the hand (since the area of a clenched fist is smaller than that of an open hand).
Here's what I have so far:
Depth information per pixel (from the DepthStream)
Location of the Hand (from the SkeletonStream)
I am having trouble figuring out how to get the depth data that corresponds to the hand. It's easy to get the depth data at the exact location that the Kinect gives for the hand, but I don't know how to get all the depth data for the hand. Any suggestions, pseudocode, and/or links to tutorials would help.
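For reference, a minimal sketch of one approach (my own assumption, not from the thread): map the hand joint into depth space with the SDK v1 CoordinateMapper, then count the pixels in a window around it whose depth falls within a band around the joint's depth. The window and band sizes are guesses to tune.
// Assumes SDK v1: sensor is the KinectSensor, skeleton is the tracked Skeleton,
// and depthPixels is the DepthImagePixel[] from the current depth frame.
DepthImagePoint hand = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
    skeleton.Joints[JointType.HandRight].Position,
    DepthImageFormat.Resolution640x480Fps30);

const int window = 60;     // half-width of the search box in pixels (assumption)
const int depthBand = 100; // +/- mm around the hand's depth (assumption)
int handArea = 0;

for (int y = Math.Max(0, hand.Y - window); y < Math.Min(480, hand.Y + window); y++)
{
    for (int x = Math.Max(0, hand.X - window); x < Math.Min(640, hand.X + window); x++)
    {
        int depth = depthPixels[y * 640 + x].Depth;
        if (Math.Abs(depth - hand.Depth) < depthBand)
            handArea++; // pixel likely belongs to the hand
    }
}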
There are events from KinectInteractions that detect whenever the fist is gripped or released, as used by the KinectScrollViewer in the KinectRegion:
The HandPointGrip event
The HandPointGripReleased event
Also, this might be a duplicate of this post.
I am not experienced in this field, but I found this in my search. See if it helps you: https://groups.google.com/forum/#!topic/openni-dev/Kj2JL6K0PBw

Kinect input to Unity3d output

I'm creating a game using the Kinect Xbox 360 sensor and Unity3d. I have created the link between the Kinect and Unity so that the character in Unity3d performs the same actions [movements] that we do in front of the Kinect sensor.
Now my question is: how do I take those actions [movements] as input for Unity3d?
For example: if I raise my right hand, the character will walk or run or jump, whatever it may be.
How can I check that the hand has moved up in a script [UnityScript or C#]?
For example:
if (gameObject.name == "RightHand")
{
    // ...
}
I searched all over Google but found no result.
I have connected the Kinect through ZIGFU with Unity3d.
Let's see who is adroit enough to answer this!
Well, that heavily depends on the gesture you want to check for. There is also an SDK that comes with some gestures to recognize. Have a look at the code: https://www.assetstore.unity3d.com/#/content/7747
An example: you want to check for a hand being held up. Get the object of a hand bone and check whether it stays above a certain value on the up axis for a couple of milliseconds or seconds.
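A minimal sketch of that check (assumptions: the rig exposes the hand bone as a Transform you can assign in the Inspector; in Unity the up axis is y; the threshold and duration are placeholders to tune):
using UnityEngine;

public class HandUpDetector : MonoBehaviour
{
    public Transform rightHand;          // assign the rig's hand bone in the Inspector
    public float heightThreshold = 1.5f; // world-space height to count as "raised" (assumption)
    public float holdDuration = 0.5f;    // seconds the hand must stay up

    private float heldTime;

    void Update()
    {
        if (rightHand.position.y > heightThreshold)
        {
            heldTime += Time.deltaTime;
            if (heldTime >= holdDuration)
                Debug.Log("Right hand raised: trigger walk/run/jump here");
        }
        else
        {
            heldTime = 0f; // hand dropped, reset the timer
        }
    }
}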
