Kinect depth FPS slows significantly when a human is detected - C#

I have an application where I create my own depth frame (using the Kinect SDK). The problem is that when a human is detected, the FPS of the depth stream (and then the color stream too) slows down significantly. Here is a movie of when the frame rate slows down. The code I am using:
using (DepthImageFrame DepthFrame = e.OpenDepthImageFrame())
{
    depthFrame = DepthFrame;
    pixels1 = GenerateColoredBytes(DepthFrame);
    depthImage = BitmapSource.Create(
        depthFrame.Width, depthFrame.Height, 96, 96, PixelFormats.Bgr32, null, pixels1,
        depthFrame.Width * 4);
    depth.Source = depthImage;
}
...
private byte[] GenerateColoredBytes(DepthImageFrame depthFrame2)
{
    short[] rawDepthData = new short[depthFrame2.PixelDataLength];
    depthFrame.CopyPixelDataTo(rawDepthData);

    byte[] pixels = new byte[depthFrame2.Height * depthFrame2.Width * 4];

    const int BlueIndex = 0;
    const int GreenIndex = 1;
    const int RedIndex = 2;

    for (int depthIndex = 0, colorIndex = 0;
         depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
         depthIndex++, colorIndex += 4)
    {
        int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
        int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

        byte intensity = CalculateIntensityFromDepth(depth);
        pixels[colorIndex + BlueIndex] = intensity;
        pixels[colorIndex + GreenIndex] = intensity;
        pixels[colorIndex + RedIndex] = intensity;

        if (player > 0)
        {
            pixels[colorIndex + BlueIndex] = Colors.Gold.B;
            pixels[colorIndex + GreenIndex] = Colors.Gold.G;
            pixels[colorIndex + RedIndex] = Colors.Gold.R;
        }
    }
    return pixels;
}
FPS is quite crucial to me since I am making an app that saves pictures of people when they are detected. How can I maintain a faster FPS? Why is my application doing this?

G.Y is correct that you're not disposing properly. You should refactor your code so the DepthImageFrame is disposed of ASAP.
...
private short[] rawDepthData = new short[640 * 480]; // assuming your resolution is 640x480

using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
    depthFrame.CopyPixelDataTo(rawDepthData);
}
pixels1 = GenerateColoredBytes(rawDepthData);
...
private byte[] GenerateColoredBytes(short[] rawDepthData) { ... }
You said that you're using the depth frame elsewhere in the application. This is bad. If you need some specific data from the depth frame, save it separately.
dowhilefor is also correct that you should look at using a WriteableBitmap; it's super simple.
private WriteableBitmap wBitmap;

// somewhere in your initialization
wBitmap = new WriteableBitmap(...);
depth.Source = wBitmap;

// then, to update the image:
wBitmap.WritePixels(...);
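For example, a minimal sketch assuming the same 640x480 Bgr32 format your BitmapSource.Create call used:

// created once, during initialization
wBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
depth.Source = wBitmap;

// then, every frame, overwrite the pixels in place instead of allocating a new BitmapSource
wBitmap.WritePixels(new Int32Rect(0, 0, 640, 480), pixels1, 640 * 4, 0);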
Also, you're creating new arrays to store pixel data again and again on every frame. You should create these arrays as global variables, create them a single time, and then just overwrite them on every frame.
Finally, although this shouldn't make a huge difference, I'm curious about your CalculateIntensityFromDepth method. If the compiler isn't inlining that method, that's a lot of extraneous method calls. Try to remove that method and just write the code where the method call is right now.
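For example, here is a minimal sketch combining both suggestions; the buffer sizes assume a 640x480 depth stream, and the linear 800-4000 mm ramp is only a stand-in for your CalculateIntensityFromDepth, which isn't shown:

// allocated once, reused every frame
private readonly short[] rawDepthData = new short[640 * 480];
private readonly byte[] pixels = new byte[640 * 480 * 4];

private byte[] GenerateColoredBytes(short[] rawDepth)
{
    for (int depthIndex = 0, colorIndex = 0;
         depthIndex < rawDepth.Length && colorIndex < pixels.Length;
         depthIndex++, colorIndex += 4)
    {
        int player = rawDepth[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
        int depth = rawDepth[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

        // intensity computed inline instead of a per-pixel method call
        // (assumed mapping: near = bright, clamped to the 800-4000 mm range)
        byte intensity = (byte)(depth < 800 ? 255
                              : depth > 4000 ? 0
                              : 255 - ((depth - 800) * 255 / 3200));

        if (player > 0)
        {
            pixels[colorIndex + 0] = Colors.Gold.B; // blue
            pixels[colorIndex + 1] = Colors.Gold.G; // green
            pixels[colorIndex + 2] = Colors.Gold.R; // red
        }
        else
        {
            pixels[colorIndex + 0] = intensity;
            pixels[colorIndex + 1] = intensity;
            pixels[colorIndex + 2] = intensity;
        }
    }
    return pixels;
}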

Related

Why would frames drop considerably the longer a Unity Game runs?

I am using Unity 2020.3.4f1 to create a 2D game for mobile.
I created a map builder for the game.
When I hit play, it makes a bunch of 'maps' that are saved as JSON files. The longer the game plays, the slower the frame rate becomes (200 fps -> 2 fps), based on the Stats menu while the game is running.
The strange thing is that if I go to the "Scene" tab and left-click on a sprite, the fps instantly jumps back up again.
Screenshots
The problem seems related to taking screenshots within Unity.
The big bulge happens and only resets when I un-pause the game.
Questions
Why would the frame rates drop considerably over time the longer the game is running?
Why would the frame rate jump back up after selecting a sprite in the "Scene" tab?
What happens in Unity when selecting a sprite in the "Scene" tab? Is there a garbage collection method?
Script:
private void Awake()
{
    myCamera = gameObject.GetComponent<Camera>();
    instance = this;
    width = 500;
    height = 500;
}

private void OnPostRender()
{
    if (takeScreenShotOnNextFrame)
    {
        takeScreenShotOnNextFrame = false;
        RenderTexture renderTexture = myCamera.targetTexture;

        Texture2D renderResult = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
        Rect rect = new Rect(0, 0, renderTexture.width, renderTexture.height);
        renderResult.ReadPixels(rect, 0, 0);

        float myValue = 0;
        float totalPixels = renderResult.width * renderResult.height;
        for (int i = 0; i < renderResult.width; i++)
        {
            for (int j = 0; j < renderResult.height; j++)
            {
                Color myColor = renderResult.GetPixel(i, j);
                myValue += myColor.r;
                //Debug.Log("Pixel (" + i + "," + j + "): " + myColor.r);
            }
        }
        occlusion = ((myValue / totalPixels) * 100);

        byte[] byteArray = renderResult.EncodeToPNG();
        System.IO.File.WriteAllBytes(Application.dataPath + "/Resources/ScreenShots/CameraScreenshot.png", byteArray);

        // Cleanup
        RenderTexture.ReleaseTemporary(renderTexture);
        myCamera.targetTexture = null;
        renderResult = null;
    }
}

private void TakeScreenshot(int screenWidth, int screenHeight)
{
    width = screenWidth;
    height = screenHeight;
    if (myCamera.targetTexture != null)
    {
        RenderTexture.ReleaseTemporary(myCamera.targetTexture);
        //Debug.Log("Camera target texture null: " + myCamera.targetTexture == null);
    }
    myCamera.targetTexture = RenderTexture.GetTemporary(width, height, 16);
    takeScreenShotOnNextFrame = true;
}

public static void TakeScreenshot_Static(int screenWidth, int screenHeight)
{
    instance.TakeScreenshot(screenWidth, screenHeight);
}
}

How to change MediaCapture to Byte[]

How can I change MediaCapture to byte[] in a Windows Store App for Windows 8.1?
From the lib:
Windows.Media.Capture.MediaCapture asd = new Windows.Media.Capture.MediaCapture();
Thanks!
I assume you want to get a byte array of what the camera is seeing at the moment, although it's hard to tell from your question.
There is a sample on the Microsoft github page that is relevant, although they target Windows 10. You may be interested in migrating your project to get this functionality.
GetPreviewFrame: This sample will capture preview frames as opposed to full-blown photos, but it should be a good starting point. Once it has a preview frame, it can edit the pixels on it.
Here is the relevant part:
private async Task GetPreviewFrameAsSoftwareBitmapAsync()
{
    // Get information about the preview
    var previewProperties = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;

    // Create the video frame to request a SoftwareBitmap preview frame
    var videoFrame = new VideoFrame(BitmapPixelFormat.Bgra8, (int)previewProperties.Width, (int)previewProperties.Height);

    // Capture the preview frame
    using (var currentFrame = await _mediaCapture.GetPreviewFrameAsync(videoFrame))
    {
        // Collect the resulting frame
        SoftwareBitmap previewFrame = currentFrame.SoftwareBitmap;

        // Add a simple green filter effect to the SoftwareBitmap
        EditPixels(previewFrame);
    }
}
private unsafe void EditPixels(SoftwareBitmap bitmap)
{
    // Effect is hard-coded to operate on BGRA8 format only
    if (bitmap.BitmapPixelFormat == BitmapPixelFormat.Bgra8)
    {
        // In BGRA8 format, each pixel is defined by 4 bytes
        const int BYTES_PER_PIXEL = 4;

        using (var buffer = bitmap.LockBuffer(BitmapBufferAccessMode.ReadWrite))
        using (var reference = buffer.CreateReference())
        {
            // Get a pointer to the pixel buffer
            byte* data;
            uint capacity;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out data, out capacity);

            // Get information about the BitmapBuffer
            var desc = buffer.GetPlaneDescription(0);

            // Iterate over all pixels
            for (uint row = 0; row < desc.Height; row++)
            {
                for (uint col = 0; col < desc.Width; col++)
                {
                    // Index of the current pixel in the buffer (defined by the next 4 bytes, BGRA8)
                    var currPixel = desc.StartIndex + desc.Stride * row + BYTES_PER_PIXEL * col;

                    // Read the current pixel information into b,g,r channels (leave out alpha channel)
                    var b = data[currPixel + 0]; // Blue
                    var g = data[currPixel + 1]; // Green
                    var r = data[currPixel + 2]; // Red

                    // Boost the green channel, leave the other two untouched
                    data[currPixel + 0] = b;
                    data[currPixel + 1] = (byte)Math.Min(g + 80, 255);
                    data[currPixel + 2] = r;
                }
            }
        }
    }
}
And declare this outside your class:
[ComImport]
[Guid("5b0d3235-4dba-4d44-865e-8f1d0e4fd04d")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}
Have a closer look at the sample to see all the details. Or, for a walkthrough, you can watch the camera session from the recent //build/ conference, which steps through some of the camera samples.
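If what you ultimately need is a byte[] rather than a SoftwareBitmap, one option (a sketch, assuming the Bgra8 preview frame from the sample above) is to copy the bitmap into a WinRT buffer and convert that:

using System.Runtime.InteropServices.WindowsRuntime; // for the ToArray() extension
using Windows.Graphics.Imaging;

private byte[] SoftwareBitmapToByteArray(SoftwareBitmap bitmap)
{
    // Bgra8 means 4 bytes per pixel
    var buffer = new Windows.Storage.Streams.Buffer((uint)(4 * bitmap.PixelWidth * bitmap.PixelHeight));
    bitmap.CopyToBuffer(buffer);
    return buffer.ToArray();
}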

Eye detection using OpenCVSharp in Unity (fps issues)

I'm currently working on a project that involves integrating OpenCVSharp into Unity to allow eye tracking within a game environment. I've managed to get OpenCVSharp integrated into the Unity editor and currently have eye detection (not tracking) working within a game. It can find your eyes within a webcam image and then display where it has currently detected them on a texture, which I display within the scene.
However, it's causing a HUGE fps drop, mainly because every frame it converts the webcam texture into an IplImage so that OpenCV can handle it. It then has to convert the IplImage back to a Texture2D to be displayed within the scene, after it has done all the eye detection. So understandably it's too much for the CPU to handle. (As far as I can tell, it's only using one core on my CPU.)
Is there a way to do all the eye detection without converting the texture to an IplImage? Or any other way to fix the fps drop? Some things that I've tried include:
Limiting the frames that it updates on. However, this just causes it to run smoothly, then stutter horribly on the frame that it has to update.
Looking at threading, but as far as I'm aware Unity doesn't allow it.
Checking the core usage; as far as I can tell it's only using one core on my CPU, which seems a bit silly. If there was a way to change this, it could fix the issue?
Trying different resolutions on the camera. However, the resolution at which the game can actually run smoothly is too small for the eyes to actually be detected, let alone tracked.
I've included the code below, or if you would prefer to look at it in a code editor, here is a link to the C# file. Any suggestions or help would be greatly appreciated!
For reference I used code from here (eye detection using opencvsharp).
using UnityEngine;
using System.Collections;
using System;
using System.IO;
using OpenCvSharp;
//using System.Xml;
//using OpenCvSharp.Extensions;
//using System.Windows.Media;
//using System.Windows.Media.Imaging;

public class CaptureScript : MonoBehaviour
{
    public GameObject planeObj;
    public WebCamTexture webcamTexture;     //Texture retrieved from the webcam
    public Texture2D texImage;              //Texture to apply to plane
    public string deviceName;

    private int devId = 1;
    private int imWidth = 640;              //camera width
    private int imHeight = 360;             //camera height
    private string errorMsg = "No errors found!";

    static IplImage matrix;                 //Ipl image of the converted webcam texture

    CvColor[] colors = new CvColor[]
    {
        new CvColor(0, 0, 255),
        new CvColor(0, 128, 255),
        new CvColor(0, 255, 255),
        new CvColor(0, 255, 0),
        new CvColor(255, 128, 0),
        new CvColor(255, 255, 0),
        new CvColor(255, 0, 0),
        new CvColor(255, 0, 255),
    };

    const double Scale = 1.25;
    const double ScaleFactor = 2.5;
    const int MinNeighbors = 2;

    // Use this for initialization
    void Start()
    {
        //Webcam initialisation
        WebCamDevice[] devices = WebCamTexture.devices;
        Debug.Log("num:" + devices.Length);
        for (int i = 0; i < devices.Length; i++)
        {
            print(devices[i].name);
            if (devices[i].name.CompareTo(deviceName) == 1)
            {
                devId = i;
            }
        }

        if (devId >= 0)
        {
            planeObj = GameObject.Find("Plane");
            texImage = new Texture2D(imWidth, imHeight, TextureFormat.RGB24, false);
            webcamTexture = new WebCamTexture(devices[devId].name, imWidth, imHeight, 30);
            webcamTexture.Play();
            matrix = new IplImage(imWidth, imHeight, BitDepth.U8, 3);
        }
    }

    void Update()
    {
        if (devId >= 0)
        {
            //Convert webcam texture to iplimage
            Texture2DtoIplImage();

            /*DO IMAGE MANIPULATION HERE*/

            //do eye detection on iplimage
            EyeDetection();

            /*END IMAGE MANIPULATION*/

            if (webcamTexture.didUpdateThisFrame)
            {
                //convert iplimage to texture
                IplImageToTexture2D();
            }
        }
        else
        {
            Debug.Log("Can't find camera!");
        }
    }

    void EyeDetection()
    {
        using (IplImage smallImg = new IplImage(new CvSize(Cv.Round(imWidth / Scale), Cv.Round(imHeight / Scale)), BitDepth.U8, 1))
        {
            using (IplImage gray = new IplImage(matrix.Size, BitDepth.U8, 1))
            {
                Cv.CvtColor(matrix, gray, ColorConversion.BgrToGray);
                Cv.Resize(gray, smallImg, Interpolation.Linear);
                Cv.EqualizeHist(smallImg, smallImg);
            }

            using (CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile(@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
            using (CvMemStorage storage = new CvMemStorage())
            {
                storage.Clear();
                CvSeq<CvAvgComp> eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
                for (int i = 0; i < eyes.Total; i++)
                {
                    CvRect r = eyes[i].Value.Rect;
                    CvPoint center = new CvPoint { X = Cv.Round((r.X + r.Width * 0.5) * Scale), Y = Cv.Round((r.Y + r.Height * 0.5) * Scale) };
                    int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
                    matrix.Circle(center, radius, colors[i % 8], 3, LineType.AntiAlias, 0);
                }
            }
        }
    }

    void OnGUI()
    {
        GUI.Label(new Rect(200, 200, 100, 90), errorMsg);
    }

    void IplImageToTexture2D()
    {
        int jBackwards = imHeight;
        for (int i = 0; i < imHeight; i++)
        {
            for (int j = 0; j < imWidth; j++)
            {
                float b = (float)matrix[i, j].Val0;
                float g = (float)matrix[i, j].Val1;
                float r = (float)matrix[i, j].Val2;
                Color color = new Color(r / 255.0f, g / 255.0f, b / 255.0f);

                jBackwards = imHeight - i - 1; // notice it is jBackwards and i
                texImage.SetPixel(j, jBackwards, color);
            }
        }
        texImage.Apply();
        planeObj.renderer.material.mainTexture = texImage;
    }

    void Texture2DtoIplImage()
    {
        int jBackwards = imHeight;
        for (int v = 0; v < imHeight; ++v)
        {
            for (int u = 0; u < imWidth; ++u)
            {
                CvScalar col = new CvScalar();
                col.Val0 = (double)webcamTexture.GetPixel(u, v).b * 255;
                col.Val1 = (double)webcamTexture.GetPixel(u, v).g * 255;
                col.Val2 = (double)webcamTexture.GetPixel(u, v).r * 255;

                jBackwards = imHeight - v - 1;
                matrix.Set2D(jBackwards, u, col);
                //matrix[jBackwards, u] = col;
            }
        }
    }
}
You can move these out of the per-frame update loop:
using (CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile(@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
using (CvMemStorage storage = new CvMemStorage())
No reason to be building the recognizer graph each frame.
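For instance, a sketch of hoisting them into fields, loaded once in Start() (and disposed when the script is destroyed):

private CvHaarClassifierCascade cascade;
private CvMemStorage storage;

void Start()
{
    // ... existing initialization ...
    cascade = CvHaarClassifierCascade.FromFile(@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml");
    storage = new CvMemStorage();
}

// EyeDetection() then keeps only the per-frame work:
// storage.Clear();
// CvSeq<CvAvgComp> eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));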
Threading is the logical way to go moving forward if you want real speed updates. Unity itself is not threaded, but you can fold in other threads if you're careful.
Do the texture -> IplImage conversion on the main thread, then trigger an event to fire off your thread.
The thread can do all the CV work, probably construct the Texture2D, and then push it back to the main thread to render.
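A minimal sketch of that hand-off (the synchronization below is illustrative, and real code would double-buffer the IplImage so the worker never reads a frame the main thread is still writing):

using System.Threading;

private Thread cvThread;
private readonly object frameLock = new object();
private bool frameReady;
private volatile bool running = true;

void StartCvThread()
{
    cvThread = new Thread(() =>
    {
        while (running)
        {
            lock (frameLock)
            {
                while (!frameReady && running)
                    Monitor.Wait(frameLock);
                frameReady = false;
            }
            EyeDetection(); // all the OpenCV work, off the main thread
        }
    }) { IsBackground = true };
    cvThread.Start();
}

// on the main thread, in Update(), after Texture2DtoIplImage():
void SignalFrameReady()
{
    lock (frameLock)
    {
        frameReady = true;
        Monitor.Pulse(frameLock);
    }
}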
You should also be able to gain some performance improvements if you use:
Color32[] pixels;
pixels = new Color32[webcamTexture.width * webcamTexture.height];
webcamTexture.GetPixels32(pixels);
The Unity doco suggests that this can be quite a bit faster than calling "GetPixels" (and certainly faster than calling GetPixel for each pixel), and then you don't need to scale each RGB channel against 255 manually.
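Building on that, here is a sketch of a faster Texture2DtoIplImage; it assumes the IplImage exposes its pixel pointer via ImageData (as older OpenCvSharp versions did) and that the 3-channel rows are unpadded, which holds at 640 pixels wide:

private Color32[] pixels32; // allocate once: new Color32[imWidth * imHeight]
private byte[] bgrBuffer;   // allocate once: new byte[imWidth * imHeight * 3]

void Texture2DtoIplImageFast()
{
    // one call instead of imWidth * imHeight GetPixel() calls
    webcamTexture.GetPixels32(pixels32);

    for (int v = 0; v < imHeight; ++v)
    {
        // Unity textures are bottom-up, so flip rows like the original loop did
        int src = (imHeight - v - 1) * imWidth;
        int dst = v * imWidth * 3;
        for (int u = 0; u < imWidth; ++u)
        {
            Color32 c = pixels32[src + u];
            bgrBuffer[dst + u * 3 + 0] = c.b;
            bgrBuffer[dst + u * 3 + 1] = c.g;
            bgrBuffer[dst + u * 3 + 2] = c.r;
        }
    }

    // bulk copy into the IplImage (assumes WidthStep == imWidth * 3)
    System.Runtime.InteropServices.Marshal.Copy(bgrBuffer, 0, matrix.ImageData, bgrBuffer.Length);
}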

C# Drawing.Imaging dropping GIF frames if they are identical

I need to load GIF animations and convert them frame by frame to bitmaps. To do that, I am extracting my GIF file frame by frame using the Drawing.Imaging library and then casting each frame to a bitmap.
Everything works just fine except when consecutive frames are identical, with no pixel difference. The library seems to drop such frames.
I came to that conclusion with a simple test. I created an animation of a circle that grows and shrinks, with a pause between the moment the last circle disappears and the new one appears. When I play the animation consisting of my extracted bitmaps, that pause is not present. If I compare GIFs of the same frame length but with different numbers of identical frames, the returned total frame count is different. I have also observed that web browsers display identical consecutive frames correctly.
public void DrawGif(Image img)
{
    FrameDimension dimension = new FrameDimension(img.FrameDimensionsList[0]);
    int frameCountTotal = img.GetFrameCount(dimension);

    for (int framecount = 0; framecount < frameCountTotal; framecount++)
    {
        img.SelectActiveFrame(dimension, framecount);
        Bitmap bmp = new Bitmap(img); //cast Image type to Bitmap

        for (int i = 0; i < 16; i++)
        {
            for (int j = 0; j < 16; j++)
            {
                Color color = bmp.GetPixel(i, j);
                DrawPixel(i, j, 0, color.R, color.G, color.B);
            }
        }
    }
}
Has anyone encountered such a problem?
As I am pretty new to C#: is there a way to modify the .NET lib?
Or maybe there is a solution to my problem that I am not aware of, one that does not involve changing the library?
Updated code - the result is the same
public void DrawGif(Image img)
{
    int frameCountTotal = img.GetFrameCount(FrameDimension.Time);

    for (int framecount = 0; framecount < frameCountTotal; framecount++)
    {
        img.SelectActiveFrame(FrameDimension.Time, framecount);
        Bitmap bmp = new Bitmap(img);

        for (int i = 0; i < 16; i++)
        {
            for (int j = 0; j < 16; j++)
            {
                Color color = bmp.GetPixel(i, j);
                DrawPixel(i, j, 0, color.R, color.G, color.B);
            }
        }
    }
}
The Image class is not saving duplicate frames, so you have to take the duration of each frame into account. Here is some sample code, based on this book on Windows programming, showing how to get all the frames along with their correct durations. An example:
public class Gif
{
    public static List<Frame> LoadAnimatedGif(string path)
    {
        //If path is not found, we should throw an IO exception
        if (!File.Exists(path))
            throw new IOException("File does not exist");

        //Load the image
        var img = Image.FromFile(path);

        //Count the frames
        var frameCount = img.GetFrameCount(FrameDimension.Time);

        //If the image is not an animated gif, we should throw an
        //argument exception
        if (frameCount <= 1)
            throw new ArgumentException("Image is not animated");

        //List that will hold all the frames
        var frames = new List<Frame>();

        //Get the times stored in the gif
        //PropertyTagFrameDelay ((PROPID) 0x5100) comes from gdiplusimaging.h
        //More info on http://msdn.microsoft.com/en-us/library/windows/desktop/ms534416(v=vs.85).aspx
        var times = img.GetPropertyItem(0x5100).Value;

        for (int i = 0; i < frameCount; i++)
        {
            //Select the frame we are about to copy
            img.SelectActiveFrame(FrameDimension.Time, i);

            //Convert the 4-byte duration chunk into an int
            //(the delay is stored in hundredths of a second)
            var duration = BitConverter.ToInt32(times, 4 * i);

            //Add a new frame to our list of frames
            frames.Add(
                new Frame()
                {
                    Image = new Bitmap(img),
                    Duration = duration
                });
        }

        //Dispose the image when we're done
        img.Dispose();

        return frames;
    }
}
We need a structure to save the Bitmap and duration for each Frame
//Class to store each frame
public class Frame
{
    public Bitmap Image { get; set; }
    public int Duration { get; set; }
}
The code loads an image and checks whether it is a multi-frame animated GIF. It then loops through all the frames to construct a list of separate Frame objects that hold the bitmap and duration of each frame. Simple usage:
var frameList = Gif.LoadAnimatedGif("a.gif");

var i = 0;
foreach (var frame in frameList)
    frame.Image.Save("frame_" + i++ + ".png");
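Note that GIF frame delays are stored in hundredths of a second, so two identical consecutive frames simply show up as one frame with a doubled Duration. A quick way to check, using the list from above:

foreach (var frame in frameList)
    Console.WriteLine("frame shown for {0} ms", frame.Duration * 10); // 1/100 s -> ms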

Kinect Depth Image only partly visible

I am new to Kinect and C#. I am trying to get the depth image from the Kinect, convert it to a bitmap to perform some OpenCV operations, and then display it. The problem is that I am getting only a third of the depth image and the rest is completely black (as seen in the picture). This is not the raw depth image but the image that I receive after painting.
Here is the code-
image and image1 are the two image canvases I have for display.
void DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    DepthImageFrame Image;
    Bitmap bm;
    using (Image = e.OpenDepthImageFrame())
    {
        if (Image != null)
        {
            this.shortpixeldata = new short[Image.PixelDataLength];
            this.depthFrame32 = new byte[Image.Width * Image.Height * Bgr32BytesPerPixel];
            bmp = new Bitmap(Image.Width, Image.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
            Image.CopyPixelDataTo(this.shortpixeldata);

            byte[] convertedDepthBits = this.ConvertDepthFrame(this.shortpixeldata, ((KinectSensor)sender).DepthStream);

            BitmapData bmapdata = bmp.LockBits(
                new System.Drawing.Rectangle(0, 0, Image.Width, Image.Height),
                ImageLockMode.WriteOnly,
                bmp.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(convertedDepthBits, 0, ptr, Image.PixelDataLength);
            bmp.UnlockBits(bmapdata);

            MemoryStream ms1 = new MemoryStream();
            bmp.Save(ms1, System.Drawing.Imaging.ImageFormat.Jpeg);
            System.Windows.Media.Imaging.BitmapImage bImg = new System.Windows.Media.Imaging.BitmapImage();
            bImg.BeginInit();
            bImg.StreamSource = new MemoryStream(ms1.ToArray());
            bImg.EndInit();

            image.Source = bImg;

            if (bmp != null)
            {
                Image<Bgr, Byte> currentFrame = new Image<Bgr, Byte>(bmp);
                Image<Gray, Byte> grayImage = currentFrame.Convert<Gray, Byte>().PyrDown().PyrUp();
                Image<Gray, Byte> Dest = new Image<Gray, Byte>(grayImage.Size);
                CvInvoke.cvCanny(grayImage, Dest, 10, 60, 3);
                image1.Source = ToBitmapSource(Dest);

                CalculateFps();
            }
        }
        else
        {
            System.Diagnostics.Debug.WriteLine("depth bitmap empty :/");
        }
    }
}
private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
{
    System.Diagnostics.Debug.WriteLine("depthframe len :{0}", depthFrame.Length);

    for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < this.depthFrame32.Length; i16++, i32 += 4)
    {
        int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        byte Distance = 0;
        int MinimumDistance = 800;
        int MaximumDistance = 4096;

        if (realDepth > MinimumDistance)
        {
            //White = Close
            //Black = Far
            Distance = (byte)(255 - ((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));

            this.depthFrame32[i32 + RedIndex] = (byte)(Distance);
            this.depthFrame32[i32 + GreenIndex] = (byte)(Distance);
            this.depthFrame32[i32 + BlueIndex] = (byte)(Distance);
        }
        else
        {
            this.depthFrame32[i32 + RedIndex] = 0;
            this.depthFrame32[i32 + GreenIndex] = 150;
            this.depthFrame32[i32 + BlueIndex] = 0;
        }
    }
    return this.depthFrame32;
}
I tried different PixelFormats to no avail. I can't figure out the problem. Does someone have any idea what I'm doing wrong?
Thanks
I would suggest using a WriteableBitmap in order to copy the depth image to a viewable format. As for the PixelFormat, that information is available in the depth image itself, so you should use the same format for the WriteableBitmap as is being captured.
Have you looked at any of the examples provided by Microsoft? You can find them at the Kinect for Windows Samples CodePlex page. Several of them demonstrate how to copy the depth data into a WriteableBitmap and then output it. For example, here is the DepthFrameReady callback function of the "DepthBasics-WPF" sample application:
/// <summary>
/// Event handler for Kinect sensor's DepthFrameReady event
/// </summary>
/// <param name="sender">object sending the event</param>
/// <param name="e">event arguments</param>
private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            // Copy the pixel data from the image to a temporary array
            depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);

            // Get the min and max reliable depth for the current frame
            int minDepth = depthFrame.MinDepth;
            int maxDepth = depthFrame.MaxDepth;

            // Convert the depth to RGB
            int colorPixelIndex = 0;
            for (int i = 0; i < this.depthPixels.Length; ++i)
            {
                // Get the depth for this pixel
                short depth = depthPixels[i].Depth;

                // To convert to a byte, we're discarding the most-significant
                // rather than least-significant bits.
                // We're preserving detail, although the intensity will "wrap."
                // Values outside the reliable depth range are mapped to 0 (black).

                // Note: Using conditionals in this loop could degrade performance.
                // Consider using a lookup table instead when writing production code.
                // See the KinectDepthViewer class used by the KinectExplorer sample
                // for a lookup table example.
                byte intensity = (byte)(depth >= minDepth && depth <= maxDepth ? depth : 0);

                // Write out blue byte
                this.colorPixels[colorPixelIndex++] = intensity;

                // Write out green byte
                this.colorPixels[colorPixelIndex++] = intensity;

                // Write out red byte
                this.colorPixels[colorPixelIndex++] = intensity;

                // We're outputting BGR, the last byte in the 32 bits is unused so skip it
                // If we were outputting BGRA, we would write alpha here.
                ++colorPixelIndex;
            }

            // Write the pixel data into our bitmap
            this.colorBitmap.WritePixels(
                new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
                this.colorPixels,
                this.colorBitmap.PixelWidth * sizeof(int),
                0);
        }
    }
}
The complete code for this particular class can be found here: http://kinectforwindows.codeplex.com/SourceControl/changeset/view/861462899ae7#v1.x/ToolkitSamples1.6.0/C#/DepthBasics-WPF/MainWindow.xaml.cs
The "Kinect Explorer" example is another good one to review, as it examines all three streams at once. It requires library that is not included in the CodePlex repository, but can be found in the Kinect for Windows Toolkit.
Okay, I figured it out on my own.
It was hiding in plain sight all along.
The function ConvertDepthFrame returns the byte array convertedDepthBits at a different size: it holds 4 separate channels per pixel, so it is 4x the length of the raw depth data. I need to pass the length of the data to be copied as 4 * Image.PixelDataLength in the Marshal.Copy(...) call.
Working fine now.
Phew! :)
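For reference, the corrected call:

// copy all four bytes per pixel, not just one
Marshal.Copy(convertedDepthBits, 0, ptr, 4 * Image.PixelDataLength);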
