Store Kinect's v2.0 Motion to BVH File - c#

I would like to store the motion capture data from Kinect 2 as a BVH file. I found code that does this for Kinect 1, which can be found here. I went through the code and found several things that I was not able to understand.
For example, in the mentioned code I tried to understand what exactly the Skeleton skel object, found in several places in the code, actually is. If that code cannot be adapted, are there any known applications available to accomplish this?
EDIT: I tried to change Skeleton skel to Body skel, which I think is the corresponding object for Kinect SDK 2.0. However, I get an error when I try to get the position of the body:
tempMotionVektor[0] = -Math.Round( skel.Position.X * 100,2);
tempMotionVektor[1] = Math.Round( skel.Position.Y * 100,2) + 120;
tempMotionVektor[2] = 300 - Math.Round( skel.Position.Z * 100,2);
I get errors when calling Position on the Body skel. How can I retrieve the X, Y, Z of the skeleton in SDK 2.0? I tried to change the above three lines to:
tempMotionVektor[0] = -Math.Round(skel.Joints[0].Position.X * 100, 2);
tempMotionVektor[1] = Math.Round(skel.Joints[0].Position.Y * 100, 2) + 120;
tempMotionVektor[2] = 300 - Math.Round(skel.Joints[0].Position.Z * 100, 2);
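For reference, a minimal sketch of that lookup written against the SDK 2.0 API (Body has no Position property, so the SpineBase joint, which is what index 0 maps to, is used as the root position; this assumes skel is a tracked Body):
// The body's root position comes from the SpineBase joint in SDK 2.0.
CameraSpacePoint root = skel.Joints[JointType.SpineBase].Position;
tempMotionVektor[0] = -Math.Round(root.X * 100, 2);
tempMotionVektor[1] = Math.Round(root.Y * 100, 2) + 120;
tempMotionVektor[2] = 300 - Math.Round(root.Z * 100, 2);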
EDIT: Basically I managed to store a BVH file after combining bodyBasicsWPF and kinect2bvh. However, it seems that the skeleton I am storing is not accurate. There are strange movements in the elbows. I am trying to understand if I have to change something in the file kinectSkeletonBVH.cp. More specifically, what are the changes in the joint axis orientation for the Kinect 2 version? How can I change the following line: skel.BoneOrientations[JointType.ShoulderCenter].AbsoluteRotation.Quaternion; I tried to replace that line with skel.JointOrientations[JointType.ShoulderCenter].Orientation. Am I right? I am using the following code to add the joints to BVHBone objects:
BVHBone hipCenter = new BVHBone(null, JointType.SpineBase.ToString(), 6, TransAxis.None, true);
BVHBone hipCenter2 = new BVHBone(hipCenter, "HipCenter2", 3, TransAxis.Y, false);
BVHBone spine = new BVHBone(hipCenter2, JointType.SpineMid.ToString(), 3, TransAxis.Y, true);
BVHBone shoulderCenter = new BVHBone(spine, JointType.SpineShoulder.ToString(), 3, TransAxis.Y, true);
BVHBone collarLeft = new BVHBone(shoulderCenter, "CollarLeft", 3, TransAxis.X, false);
BVHBone shoulderLeft = new BVHBone(collarLeft, JointType.ShoulderLeft.ToString(), 3, TransAxis.X, true);
BVHBone elbowLeft = new BVHBone(shoulderLeft, JointType.ElbowLeft.ToString(), 3, TransAxis.X, true);
BVHBone wristLeft = new BVHBone(elbowLeft, JointType.WristLeft.ToString(), 3, TransAxis.X, true);
BVHBone handLeft = new BVHBone(wristLeft, JointType.HandLeft.ToString(), 0, TransAxis.X, true);
BVHBone neck = new BVHBone(shoulderCenter, "Neck", 3, TransAxis.Y, false);
BVHBone head = new BVHBone(neck, JointType.Head.ToString(), 3, TransAxis.Y, true);
BVHBone headtop = new BVHBone(head, "Headtop", 0, TransAxis.None, false);
I can't understand where inside the code the axis for every Joint is calculated.
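For reference, a rough SDK 2.0 counterpart of that SDK 1.x line (a sketch only; note that JointType.ShoulderCenter from SDK 1.x is named JointType.SpineShoulder in SDK 2.0, and the orientation is exposed as a quaternion in a Vector4 rather than through BoneOrientations; the rotation conventions differ between the SDKs, so values may need remapping before being written to the BVH channels):
// SDK 1.x: skel.BoneOrientations[JointType.ShoulderCenter].AbsoluteRotation.Quaternion
// SDK 2.0 sketch, assuming skel is a tracked Body:
Vector4 shoulderRotation = skel.JointOrientations[JointType.SpineShoulder].Orientation;
float qx = shoulderRotation.X;
float qy = shoulderRotation.Y;
float qz = shoulderRotation.Z;
float qw = shoulderRotation.W;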

The code you used for Kinect 1.0 to obtain a BVH file uses the joint information to build bone vectors by reading the Skeleton:
public static double[] getBoneVectorOutofJointPosition(BVHBone bvhBone, Skeleton skel)
{
    double[] boneVector = new double[3] { 0, 0, 0 };
    double[] boneVectorParent = new double[3] { 0, 0, 0 };
    string boneName = bvhBone.Name;

    JointType Joint;
    if (bvhBone.Root == true)
    {
        boneVector = new double[3] { 0, 0, 0 };
    }
    else
    {
        if (bvhBone.IsKinectJoint == true)
        {
            Joint = KinectSkeletonBVH.String2JointType(boneName);

            boneVector[0] = skel.Joints[Joint].Position.X;
            boneVector[1] = skel.Joints[Joint].Position.Y;
            boneVector[2] = skel.Joints[Joint].Position.Z;
..
Source: Nguyên Lê Đặng - Kinect2BVH.V2
In Kinect SDK 2.0, however, the Skeleton class has been replaced by the Body class, so you need to change the code to deal with a Body instead and obtain the joints by following the steps quoted below.
// Kinect namespace
using Microsoft.Kinect;
// ...
// Kinect sensor and Kinect stream reader objects
KinectSensor _sensor;
MultiSourceFrameReader _reader;
IList<Body> _bodies;
// Kinect sensor initialization
_sensor = KinectSensor.GetDefault();
if (_sensor != null)
{
    _sensor.Open();
}
We also added a list of bodies, where all of the body/skeleton related
data will be saved. If you have developed for Kinect version 1, you
will notice that the Skeleton class has been replaced by the Body class.
Remember the MultiSourceFrameReader? This class gives us access to
every stream, including the body stream! We simply need to let the
sensor know that we need body tracking functionality by adding an
additional parameter when initializing the reader:
_reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color |
FrameSourceTypes.Depth |
FrameSourceTypes.Infrared |
FrameSourceTypes.Body);
_reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
The Reader_MultiSourceFrameArrived method will be called whenever a
new frame is available. Let’s specify what will happen in terms of the
body data:
1. Get a reference to the body frame
2. Check whether the body frame is null – this is crucial
3. Initialize the _bodies list
4. Call the GetAndRefreshBodyData method, so as to copy the body data into the list
5. Loop through the list of bodies and do awesome stuff!
Always remember to check for null values. Kinect provides you with
approximately 30 frames per second – anything could be null or
missing! Here is the code so far:
void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    var reference = e.FrameReference.AcquireFrame();

    // Color
    // ...

    // Depth
    // ...

    // Infrared
    // ...

    // Body
    using (var frame = reference.BodyFrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            _bodies = new Body[frame.BodyFrameSource.BodyCount];

            frame.GetAndRefreshBodyData(_bodies);

            foreach (var body in _bodies)
            {
                if (body != null)
                {
                    // Do something with the body...
                }
            }
        }
    }
}
This is it! We now have access to the bodies Kinect identifies. The next
step is to display the skeleton information on-screen. Each body
consists of 25 joints. The sensor provides us with the position (X, Y,
Z) and the rotation information for each one of them. Moreover, Kinect
lets us know whether the joints are tracked, hypothesized, or not
tracked. It's good practice to check whether a body is tracked
before performing any critical functions.
The following code illustrates how we can access the different body
joints:
if (body != null)
{
    if (body.IsTracked)
    {
        Joint head = body.Joints[JointType.Head];

        float x = head.Position.X;
        float y = head.Position.Y;
        float z = head.Position.Z;

        // Draw the joints...
    }
}
Source: Vangos Pterneas Blog - KINECT FOR WINDOWS VERSION 2: BODY TRACKING
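Putting the two snippets together, the Kinect 1 helper above could be adapted to the Body class along these lines (a sketch only, assuming the rest of the BVHBone/KinectSkeletonBVH code stays unchanged; also note that the SDK 1.x joint names Spine and ShoulderCenter correspond to SpineMid and SpineShoulder in SDK 2.0, so String2JointType may need updating as well):
public static double[] getBoneVectorOutofJointPosition(BVHBone bvhBone, Body skel)
{
    // Same logic as the Kinect 1 version, but reading joints from a Body.
    double[] boneVector = new double[3] { 0, 0, 0 };
    string boneName = bvhBone.Name;

    if (!bvhBone.Root && bvhBone.IsKinectJoint)
    {
        JointType joint = KinectSkeletonBVH.String2JointType(boneName);
        CameraSpacePoint position = skel.Joints[joint].Position;

        boneVector[0] = position.X;
        boneVector[1] = position.Y;
        boneVector[2] = position.Z;
    }

    return boneVector;
}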

Related

Unity: CopyTexture to External texture2D

I need to expose a Unity texture/RenderTexture to a native plugin, which requires the "D3D11_RESOURCE_MISC_SHARED" flag on the texture.
Textures created by Unity don't have this flag, so I created the texture on the plugin side, then created a reference texture within Unity using CreateExternalTexture, and copied the contents into it using Graphics.CopyTexture.
The two textures have the same dimensions, same size, same format, and the same mipCount (0).
The problem is, when I show it in Unity (for debugging purposes), I can see nothing and no error occurs.
By the way, if I copy with ReadPixels, an error occurs:
ReadPixels called on undefined image 0 (valid values are 0 - -1
If I create the texture using the Unity API, CopyTexture succeeds and the result can be seen, but then I lose the "D3D11_RESOURCE_MISC_SHARED" flag.
So maybe the texture I created is not valid?
My code:
D3D11_TEXTURE2D_DESC desc = { 0 };
desc.Width = width;
desc.Height = height;
desc.MipLevels = 0;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // Can the format be adjusted here? For example, is A8 needed?
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE; // a regular resource
//desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ; // CPU access should not be needed, neither read nor write
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED; // for the "OpenSharedHandle" D3D11 API

HRESULT hr = E_FAIL;
if (FAILED(hr = pDevice->CreateTexture2D(&desc, nullptr, &g_unityEquirectTexture)))
{
    Log(" Create Shared Texture Failed!");
    return NULL;
}
Log("CreateSharedTexture success");
//return g_unityEquirectTexture;
Unity CopyTexture Code:
if (output == null)
{
    Debug.Log($"limit = {QualitySettings.masterTextureLimit}");
    //output = new Texture2D(equirect.width, equirect.height, TextureFormat.RGBA32, false); // uncomment this line and then CopyTexture below succeeds
    IntPtr externalTextureData = CGDKInterOp.cgdk_c_CreateExternalTexture(equirectLeft.GetNativeTexturePtr(), equirectLeft.width * 2, equirectLeft.height);
    if (externalTextureData != IntPtr.Zero)
    {
        output = Texture2D.CreateExternalTexture(equirectLeft.width * 2, equirectLeft.height, TextureFormat.RGBA32, false, true, externalTextureData);
    }
}
if (output == null)
{
    Debug.LogError("create texture from external failed!");
    return;
}
//RenderTexture.active = equirect;
//output.ReadPixels(new Rect(0, 0, equirect.width, equirect.height), 0, 0);
//RenderTexture.active = null;
Graphics.CopyTexture(equirect, output);
OK, solved it myself.
The problem is MipLevels == 0. This causes D3D11 to create a texture with 0 B of memory allocated!
Changing MipLevels to 1 solved the problem.
Note:
In the Unity inspector, we can see the memory allocated by textures. I found from the inspector that my texture had 0 B of memory, searched using this clue, and found the solution.
There might be a better solution for this, but the last time I needed to copy texture data from the native side to managed (Unity), I did it through marshalling of the data.
You essentially just need to expose a method in the native plugin that passes you the texture data as an array, and then have a method in the C# code to fetch the data (by calling that method), releasing the pointer to native memory when you're done. You can find information about marshalling and interop in Microsoft's documentation (e.g. https://learn.microsoft.com/en-us/dotnet/framework/interop/marshalling-different-types-of-arrays).
If your texture is always guaranteed to be of the same size and format, it's easier - but if you need to know some additional parameters so that you know how the texture should be represented in managed land, you can always pass yourself the additional data through the same method.
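A minimal sketch of that approach on the C# side, assuming a hypothetical native export named GetTexturePixels that fills a caller-allocated RGBA32 buffer (the plugin name and signature are placeholders, not part of the answer above):
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeTextureFetcher : MonoBehaviour
{
    // Hypothetical native export: copies width * height * 4 bytes of RGBA data into 'buffer'.
    [DllImport("MyNativePlugin")]
    private static extern void GetTexturePixels(IntPtr buffer, int width, int height);

    public Texture2D Fetch(int width, int height)
    {
        var pixels = new byte[width * height * 4];

        // Pin the managed array so the native side can write into it, then release the handle.
        GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        try
        {
            GetTexturePixels(handle.AddrOfPinnedObject(), width, height);
        }
        finally
        {
            handle.Free();
        }

        // Upload the raw bytes into a managed Unity texture.
        var tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
        tex.LoadRawTextureData(pixels);
        tex.Apply();
        return tex;
    }
}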

Guide to use Yolo with gpu c#

I've been trying to find a tutorial or something on how to make YOLO in C# use the GPU instead of the CPU. I always find that it says it works on both CPU and GPU, but no one ever says how to use the GPU, and it always uses the CPU for me. Here's my code with YOLO v5 in C#. It doesn't really matter to me whether it uses YOLO v5, just that it uses the GPU. Tutorial: I found that tutorial, but I can't even find the download for NVIDIA cuDNN v7.6.3 for CUDA 10.1. It feels very unclear how to use it with the GPU. Please help me :D
var image = pictureBox1.Image;
var scorer = new YoloScorer<YoloCocoP5Model>("Assets/Weights/yolov5n.onnx");
List<YoloPrediction> predictions = scorer.Predict(image);
var graphics = Graphics.FromImage(image);

foreach (var prediction in predictions) // iterate predictions to draw results
{
    using (MemoryStream ms = new MemoryStream())
    {
        pictureBox1.Image.Save(ms, ImageFormat.Png);
        prediction.Label.Color = Color.FromArgb(255, 255, 0, 0);
        double score = Math.Round(prediction.Score, 2);

        graphics.DrawRectangles(new Pen(prediction.Label.Color, 1),
            new[] { prediction.Rectangle });

        var (x, y) = (prediction.Rectangle.X - 3, prediction.Rectangle.Y - 23);

        graphics.DrawString($"{prediction.Label.Name} ({score})",
            new Font("Consolas", 16, GraphicsUnit.Pixel), new SolidBrush(prediction.Label.Color),
            new PointF(x, y));

        pictureBox1.Image = image;
    }
}
Before you use the scorer, you need to set up session options. With the CUDA execution provider this typically means referencing the GPU build of ONNX Runtime (the Microsoft.ML.OnnxRuntime.Gpu package) and having a matching CUDA/cuDNN installation.
//https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
bool initResult = false;
var cudaProviderOptions = new Microsoft.ML.OnnxRuntime.OrtCUDAProviderOptions(); // Dispose this finally
var providerOptionsDict = new Dictionary<string, string>();
providerOptionsDict["device_id"] = "0";
providerOptionsDict["gpu_mem_limit"] = "2147483648";
providerOptionsDict["arena_extend_strategy"] = "kSameAsRequested";
/*
cudnn_conv_algo_search
The type of search done for cuDNN convolution algorithms.
Value Description
EXHAUSTIVE (0) expensive exhaustive benchmarking using cudnnFindConvolutionForwardAlgorithmEx
HEURISTIC (1) lightweight heuristic based search using cudnnGetConvolutionForwardAlgorithm_v7
DEFAULT (2) default algorithm using CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM
Default value: EXHAUSTIVE
*/
providerOptionsDict["cudnn_conv_algo_search"] = "DEFAULT";
/*
do_copy_in_default_stream
Whether to do copies in the default stream or use separate streams. The recommended setting is true. If false, there are race conditions and possibly better performance.
Default value: true
*/
providerOptionsDict["do_copy_in_default_stream"] = "1";
/*
cudnn_conv_use_max_workspace
Check tuning performance for convolution heavy models for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when used using the C API. The V2 provider options struct can be created using this and updated using this. Please take a look at the sample below for an example.
Default value: 0
*/
providerOptionsDict["cudnn_conv_use_max_workspace"] = "1";
/*
cudnn_conv1d_pad_to_nc1d
Check convolution input padding in the CUDA EP for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when used using the C API. The V2 provider options struct can be created using this and updated using this. Please take a look at the sample below for an example.
Default value: 0
*/
providerOptionsDict["cudnn_conv1d_pad_to_nc1d"] = "1";
cudaProviderOptions.UpdateOptions(providerOptionsDict);
options = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions); // Dispose this finally
if (options != null)
{
    // Check that the YOLO model ONNX file is accessible
    if (File.Exists(yoloModelFile))
    {
        scorer = new YoloScorer<YoloCocoP5Model>(yoloModelFile, options);
        initResult = true;
    }
    else
    {
        DebugMessage("Yolo model ONNX file (" + yoloModelFile + ") is missing!\r\n", 2);
    }
}
else
{
    DebugMessage("Yolo instance initializing error! Session options are empty!\r\n", 2);
}
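If you don't need to tune the provider options, a more minimal sketch is possible (this assumes the Microsoft.ML.OnnxRuntime.Gpu package is referenced; the YoloScorer constructor taking a SessionOptions argument is the same one used above):
// Use the default CUDA execution provider settings on GPU device 0.
var gpuOptions = Microsoft.ML.OnnxRuntime.SessionOptions.MakeSessionOptionWithCudaProvider(0); // dispose when done
var gpuScorer = new YoloScorer<YoloCocoP5Model>("Assets/Weights/yolov5n.onnx", gpuOptions);
List<YoloPrediction> predictions = gpuScorer.Predict(image);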

Accord.net: Multilabel classification issue

I am trying to upgrade my code for multi-label classification from version 3.0.2 to 3.3.0. I did the upgrade, but the classification results are much worse now - the algorithm fails to classify many more instances than before. Can you please tell me if there is any issue in my new code? How can I use the estimated parameters for the Gaussian kernel as I did before?
Old code:
var gauss = Gaussian.Estimate(inputs, Convert.ToInt32(inputs.GetLength(0) * 0.8));
// Create the machine.
this.machine = new MultilabelSupportVectorMachine(inputs[0].Length, gauss, outputs[0].Length);
// Train the model.
var teacher = new MultilabelSupportVectorLearning(this.machine, inputs, outputs);
teacher.Algorithm = (svm, classInputs, classOutputs, i, j) => new SequentialMinimalOptimization(svm, classInputs, classOutputs) { UseComplexityHeuristic = true, CacheSize = 1000 };
teacher.SubproblemFinished += SubproblemFinished;
var error = teacher.Run(true);
// Save the model.
this.machine.Save(modelPath);
New code:
var gauss = Gaussian.Estimate(inputs, Convert.ToInt32(inputs.GetLength(0) * 0.8));
// Create the machine.
this.machine = new MultilabelSupportVectorMachine<Gaussian>(inputs[0].Length, gauss, outputs[0].Length);
// Create the multi-class learning algorithm for the machine
var teacher = new MultilabelSupportVectorLearning<Gaussian>(this.machine)
{
    // Configure the learning algorithm to use SMO to train the
    // underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        // Estimate a suitable guess for the Gaussian kernel's parameters.
        // This estimate can serve as a starting point for a grid search.
        UseKernelEstimation = true,
        UseComplexityHeuristic = true,
        CacheSize = 1000
    }
};
teacher.ParallelOptions.MaxDegreeOfParallelism = 1;
// Learn a machine
machine = teacher.Learn(inputs, outputs);
Serializer.Save(machine, modelPath);
Thanks in advance!
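One difference between the two snippets: the new learner sets UseKernelEstimation = true, so each binary subproblem re-estimates its own Gaussian parameters instead of reusing the single kernel estimated up front. A minimal sketch of reusing the pre-estimated kernel instead, assuming the SMO learner exposes a settable Kernel property (as recent Accord.NET versions do); whether this explains the accuracy drop is not guaranteed:
var gauss = Gaussian.Estimate(inputs, Convert.ToInt32(inputs.GetLength(0) * 0.8));
this.machine = new MultilabelSupportVectorMachine<Gaussian>(inputs[0].Length, gauss, outputs[0].Length);

var teacher = new MultilabelSupportVectorLearning<Gaussian>(this.machine)
{
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        Kernel = gauss,                // reuse the kernel estimated above
        UseKernelEstimation = false,   // do not re-estimate per binary subproblem
        UseComplexityHeuristic = true,
        CacheSize = 1000
    }
};

machine = teacher.Learn(inputs, outputs);
Serializer.Save(machine, modelPath);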

SharpDx direct3d11 how to start rendering

I want to use DirectX in C# and I am using the SharpDX wrapper. I got a book called Direct3D Rendering Cookbook and took the basic code from it. I want to create a 3D world view. For that I will need a camera view and a grid that helps to recognize world positions, just like in Autodesk Maya, but I do not know how to do that. My mind is really mixed up; what should I do to start?
Here is the code I have, which I think is ready to render something:
using System;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using Device = SharpDX.Direct3D11.Device;
using Device1 = SharpDX.Direct3D11.Device1;

namespace CurrencyConverter
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Enable object tracking
            SharpDX.Configuration.EnableObjectTracking = true;
            SharpDX.Animation.Timer timer = new SharpDX.Animation.Timer();

            #region Direct3D Initialization
            // Create the window to render to
            Form1 form = new Form1();
            form.Text = "D3DRendering - EmptyProject";
            form.Width = 640;
            form.Height = 480;

            // Declare the device and swapChain vars
            Device device;
            SwapChain swapChain;

            // Create the device and swapchain
            // First create a regular D3D11 device
            using (var device11 = new Device(
                SharpDX.Direct3D.DriverType.Hardware,
                DeviceCreationFlags.None,
                new[] {
                    SharpDX.Direct3D.FeatureLevel.Level_11_1,
                    SharpDX.Direct3D.FeatureLevel.Level_11_0,
                }))
            {
                // Query device for the Device1 interface (ID3D11Device1)
                device = device11.QueryInterfaceOrNull<Device1>();
                if (device == null)
                    throw new NotSupportedException(
                        "SharpDX.Direct3D11.Device1 is not supported");
            }

            // Rather than create a new DXGI Factory we reuse the
            // one that has been used internally to create the device
            using (var dxgi = device.QueryInterface<SharpDX.DXGI.Device2>())
            using (var adapter = dxgi.Adapter)
            using (var factory = adapter.GetParent<Factory2>())
            {
                var desc1 = new SwapChainDescription1()
                {
                    Width = form.ClientSize.Width,
                    Height = form.ClientSize.Height,
                    Format = Format.R8G8B8A8_UNorm,
                    Stereo = false,
                    SampleDescription = new SampleDescription(1, 0),
                    Usage = Usage.BackBuffer | Usage.RenderTargetOutput,
                    BufferCount = 1,
                    Scaling = Scaling.Stretch,
                    SwapEffect = SwapEffect.Discard,
                };

                swapChain = new SwapChain1(factory,
                    device,
                    form.Handle,
                    ref desc1,
                    new SwapChainFullScreenDescription()
                    {
                        RefreshRate = new Rational(60, 1),
                        Scaling = DisplayModeScaling.Centered,
                        Windowed = true
                    },
                    // Restrict output to specific Output (monitor)
                    adapter.Outputs[0]);
            }

            // Create references for backBuffer and renderTargetView
            var backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain, 0);
            var renderTargetView = new RenderTargetView(device, backBuffer);
            #endregion

            // Setup object debug names
            device.DebugName = "The Device";
            swapChain.DebugName = "The SwapChain";
            backBuffer.DebugName = "The Backbuffer";
            renderTargetView.DebugName = "The RenderTargetView";

            #region Render loop
            // Create and run the render loop
            RenderLoop.Run(form, () =>
            {
                // Clear the render target with...
                var lerpColor = SharpDX.Color.Lerp(SharpDX.Color.White,
                    SharpDX.Color.DarkBlue,
                    (float)((timer.Time) / 10.0 % 1.0));
                device.ImmediateContext.ClearRenderTargetView(
                    renderTargetView,
                    lerpColor);

                // Execute rendering commands here...
                //...
                //I DO NOT HAVE ANY IDEA
                //...

                // Present the frame
                swapChain.Present(0, PresentFlags.RestrictToOutput);
            });
            #endregion

            #region Direct3D Cleanup
            // Release the device and any other resources created
            renderTargetView.Dispose();
            backBuffer.Dispose();
            device.Dispose();
            swapChain.Dispose();
            #endregion
        }
    }
}
Generally speaking, with Direct3D you need a substantial amount of code before anything shows up on the screen.
In the SharpDX repository you have the MiniCube sample, which contains enough to really get you started, as it has all the elements required to draw a 3D scene.
I recommend particularly looking at:
Depth buffer creation (DepthStencilView); a short sketch of this and the viewport setup follows at the end of this answer
The fx file, as you need shaders to get anything on the screen (no more fixed function pipeline)
How the vertex buffer is created; you need to split geometry into triangles (in the common case; there are other possibilities)
Don't forget SetViewport (it's really common to have it omitted)
The calls referring to the Input Assembler, which assign the geometry to be drawn
Constant buffer creation: this is how you pass matrices and changing data (like diffuse color)
Also make sure to pass DeviceCreationFlags.Debug when creating the device, and in the Visual Studio debug options enable "Enable Native Code Debugging". This will give you errors and warnings if something is not set properly, plus a meaningful reason whenever a resource creation fails (instead of "Invalid Args", which is quite pointless).
As another recommendation, all the Direct3D11 resource creation parameters are incredibly error prone and tedious (many options are incompatible with each other), so it is quite important to wrap those into easier-to-use helper functions (and write a small set of unit tests to validate them once and for all). The old Toolkit has quite a few examples of those.
The SharpDX wrapper is relatively close to the C++ counterpart, so anything in the C++ documentation applies to it too.
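As an illustration of the depth buffer and SetViewport points above, here is a minimal sketch that could go right after the render target view is created in the question's code (a sketch only; the formats and sizes are assumptions and should match your swap chain):
// Create a depth buffer and its DepthStencilView (size assumed to match the back buffer).
var depthBuffer = new Texture2D(device, new Texture2DDescription()
{
    Width = form.ClientSize.Width,
    Height = form.ClientSize.Height,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.D24_UNorm_S8_UInt,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.DepthStencil
});
var depthStencilView = new DepthStencilView(device, depthBuffer);

var context = device.ImmediateContext;

// Don't forget the viewport, otherwise nothing is rasterized.
context.Rasterizer.SetViewport(0, 0, form.ClientSize.Width, form.ClientSize.Height, 0.0f, 1.0f);

// Bind the depth buffer together with the render target.
context.OutputMerger.SetRenderTargets(depthStencilView, renderTargetView);

// Inside the render loop, clear the depth buffer each frame before drawing:
// context.ClearDepthStencilView(depthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);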

Kinect V2- Saving Body Data

I am writing a program in C# with the Kinect V2 SDK and am endeavoring to record the Body data. I attempted to just save the BodyFrame to a file, but it is not serializable and neither is the Body class. What would be the best way to accomplish this? My current thinking is to save the positional data for each joint in each body in each frame, but I have a feeling that will get complicated very fast. Suggestions? I've attached a portion of Microsoft's sample BodyBasics code below where it breaks the body data into joints. Thanks!
if (body.IsTracked)
{
    //this.DrawClippedEdges(body, dc);

    IReadOnlyDictionary<JointType, Joint> joints = body.Joints;

    // convert the joint points to depth (display) space
    Dictionary<JointType, Point> jointPoints = new Dictionary<JointType, Point>();

    foreach (JointType jointType in joints.Keys)
    {
        // sometimes the depth(Z) of an inferred joint may show as negative
        // clamp down to 0.1f to prevent coordinatemapper from returning (-Infinity, -Infinity)
        CameraSpacePoint position = joints[jointType].Position;
        if (position.Z < 0)
        {
            position.Z = InferredZPositionClamp;
        }

        DepthSpacePoint depthSpacePoint = this.coordinateMapper.MapCameraPointToDepthSpace(position);
        jointPoints[jointType] = new Point(depthSpacePoint.X, depthSpacePoint.Y);
    }
Use JSON.net.
static List<Body> trackedBodies = new List<Body>();

trackedBodies = bodies.Where(b => b.IsTracked == true).ToList();

if (trackedBodies.Count() < 1)
    return null;

string kinectBodyDataString = JsonConvert.SerializeObject(trackedBodies);
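To build up a recording, one minimal sketch is to append each frame's JSON as a new line in a file (the path handling, wrapper object, and timestamp field are assumptions, not part of the answer above; requires System.IO, System.Linq and Newtonsoft.Json):
// Called once per body frame; appends one JSON line per frame so the file can be replayed later.
void SaveBodyFrame(IEnumerable<Body> bodies, string path)
{
    var trackedBodies = bodies.Where(b => b.IsTracked).ToList();
    if (trackedBodies.Count < 1)
        return;

    var frameRecord = new
    {
        Timestamp = DateTime.UtcNow,   // assumed per-frame timestamp
        Bodies = trackedBodies
    };

    File.AppendAllText(path, JsonConvert.SerializeObject(frameRecord) + Environment.NewLine);
}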
