Streaming consecutive images received from the Kinect's RGB camera in WPF? - C#

I'm gathering consecutive images from the Kinect's RGB camera using the code below:
PlanarImage Image = e.ImageFrame.Image;
video.Source = BitmapSource.Create(
    Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null,
    Image.Bits, Image.Width * Image.BytesPerPixel);
I want to stream the series of images I get from the Kinect as a real-time stream in my WPF application, but I don't know how. The goal is to use the Kinect as a webcam, but the other solutions (writing a DirectShow filter, or using one of the available ones) didn't work for me, either because of resolution issues or my lack of C++ knowledge. Writing a DirectShow filter so the Kinect acts as a virtual cam is extremely challenging for someone at my level with C++.
To sum up, the problem lies in the Kinect's output: it gives me a series of images, and I don't really know how to turn them into a real-time stream that I can display from other applications.
Any help is appreciated, thanks!
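For the in-app display side, one common pattern is to reuse a single WriteableBitmap instead of allocating a new BitmapSource every frame. A minimal sketch, assuming the same ImageFrameReady handler and Bgr32 data as in the code above (this only covers display inside the WPF app, not exposing the Kinect as a webcam to other applications; the field and handler names are illustrative):

WriteableBitmap outputBitmap;   // field on the window

void OnImageFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage image = e.ImageFrame.Image;
    if (outputBitmap == null)
    {
        // Create the bitmap once and bind the Image control to it.
        outputBitmap = new WriteableBitmap(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null);
        video.Source = outputBitmap;
    }
    // Copy the new frame's pixels into the existing bitmap; the Image control updates itself.
    outputBitmap.WritePixels(
        new Int32Rect(0, 0, image.Width, image.Height),
        image.Bits,
        image.Width * image.BytesPerPixel,
        0);
}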

If a commercial library is an option, take a look at http://www.visioforge.com/video-capture-sdk-net2.html - it can capture and also stream in WMV...
Another commercial option would be Leadtools Videostreaming SDK - see http://www.leadtools.com/sdk/video-streaming.htm

DirectShow is indeed your best bet. There is nothing in WPF for streaming video to other apps.

As mentioned in the comments above, the Kinect produces a series of images rather than a streaming video.
This page is helpful for extracting depth data from the Kinect with the Microsoft Kinect SDK:
http://www.i-programmer.info/programming/hardware/2714-getting-started-with-microsoft-kinect-sdk-depth.html
If you are using Processing, you can import the SimpleOpenNI library (https://code.google.com/p/simple-openni/), which supports video capture (you can find examples within the library).
This video stream comes in .ONI format, which can be converted into AVI via the following library:
http://kirilllykov.github.io/blog/2013/03/19/convert-openni-star-dot-oni-files-into-avi/
All the best, let us know if you find any other method.

Related

Raw H264 to JPEG, in C#

I get a stream of frames: an initial SPS and PPS H264 data packet, and then packets for the I and P frames.
Using C#/.NET, I want to convert this into a series of JPEGs. Has anyone done this?
I have tried the AForge.NET FFMPEG wrapper, but it can only go from an MP4 file to JPEGs.
I have also looked at DirectShow.
I can't seem to find any examples that even come close to doing this.
Thanks
In DirectShow, it sounds like you need to introduce a SampleGrabber filter into your filter graph. Insert it after the H264 decoder.
This is a pass-through filter that can receive a callback containing each video frame. You can then choose what to do with each frame, such as saving it to disk as a JPEG.
Rather than regurgitate MSDN, there is a great page explaining its usage here:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd407288(v=vs.85).aspx
Update taking Roman's comments into account:
SampleGrabber is deprecated, as it is part of DirectShow Editing Services. However, it is quite self-contained. If it were removed from a later version of Windows, it would be straightforward to replace with an alternative filter. I still use it in one of my consumer applications. Roman is correct, though - it has a steep learning curve.
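To make the callback part concrete, here is a rough sketch of what the grabber callback might look like with the DirectShowLib wrapper, assuming the SampleGrabber has been configured for RGB24 output after the decoder (width, height and stride would be read from the grabber's connected media type; the class and field names are illustrative):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using DirectShowLib;

class JpegFrameGrabber : ISampleGrabberCB
{
    readonly int width, height, stride;   // taken from the grabber's connected media type
    int frameIndex;

    public JpegFrameGrabber(int width, int height, int stride)
    {
        this.width = width;
        this.height = height;
        this.stride = stride;
    }

    public int SampleCB(double sampleTime, IMediaSample sample)
    {
        return 0;   // not used; we work with BufferCB
    }

    public int BufferCB(double sampleTime, IntPtr buffer, int bufferLen)
    {
        // Wrap the raw RGB24 buffer in a Bitmap and write it out as a JPEG.
        using (var bmp = new Bitmap(width, height, stride, PixelFormat.Format24bppRgb, buffer))
        {
            bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);   // DirectShow RGB frames are bottom-up
            bmp.Save("frame_" + frameIndex++ + ".jpg", ImageFormat.Jpeg);
        }
        return 0;
    }
}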

Displaying a video image in a WinRT GridView

I'm developing a Windows 8 Store app and I have this problem:
I want the user to add videos from a file picker, and I managed that.
The problem is that I want to display images of the videos in a GridView,
like a snapshot of the video at a certain position. I tried a MediaElement and it's not working;
I also tried an Image, but that doesn't make sense here either.
There might not be a pure C# solution to this problem. Perhaps SharpDX would let you do that, but I haven't tried and I don't know if it was done before. If you look at these two threads:
http://social.msdn.microsoft.com/Forums/fil-PH/wingameswithdirectx/thread/05731d4f-5b7f-4ed1-8e28-94604655139e
http://social.msdn.microsoft.com/Forums/en/wingameswithdirectx/thread/ffacd05c-6e9e-43bf-b691-99127240730c
-- you should see that there is a TransferVideoFrame method you can use to copy a frame from a video stream to a DirectX texture. The Media engine native C++ video playback sample shows how you can use it natively. If you search for TransferVideoFrame + SharpDX, you can find this sample that uses the SharpDX version of this method in C#. After you transfer that frame to a texture, you can copy the contents of the texture to a WriteableBitmap using the Map and Unmap methods like here. You can also save it to a file, either using a BitmapEncoder as here or directly from the DirectX texture using WIC or, say, the SharpDX Toolkit as in here. You will probably want to build a cache of thumbnails to avoid having to process the videos every time you display the list, so saving to a file is something you should do anyway.
Your own solution, which you quoted elsewhere, should work for at least some videos:
var thumb = await storageFile.GetThumbnailAsync(
    Windows.Storage.FileProperties.ThumbnailMode.PicturesView,
    1000,
    Windows.Storage.FileProperties.ThumbnailOptions.UseCurrentScale);
I suggest using GetThumbnailAsync(), which works just as well. You don't even need to get the video properties or anything like that. If the StorageFile is loaded, this should work for a normal video of any format.
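To actually show that thumbnail in a GridView item, you can feed the returned stream into a BitmapImage. A minimal sketch, assuming 'file' is the StorageFile from the picker and 'thumbImage' is an Image control in your item template (names are illustrative):

var thumb = await file.GetThumbnailAsync(
    Windows.Storage.FileProperties.ThumbnailMode.VideosView, 300);

var source = new Windows.UI.Xaml.Media.Imaging.BitmapImage();
source.SetSource(thumb);   // StorageItemThumbnail is an IRandomAccessStream
thumbImage.Source = source;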

Kinect RGB to AVI File

I am attempting to save a video file (e.g., .AVI file) from the Kinect's RGB stream.
I am using the latest Kinect SDK (1.5).
I tried using e2e's DirectShow filters and it didn't work; there must be a more straightforward approach.
I tried the ColorBasics approach of writing each frame as a .png file, but then the speed dropped from 30fps to 6fps. I guess multi-threading may help increase performance, but I don't know how to multi-thread it - is there any online resource for this? I have searched a lot, but no luck yet.
Thanks.
The Kinect does not directly support recording straight off the video streams. You can accomplish it by accessing each frame, as it comes in, and creating an AVI (or other video format) in code.
The Kinect Toolbox has a video recorder, you can look at the source as a starting point.
You can easily extract WriteableBitmap objects from the Kinect RGB camera. From that you can find a lot of information on creating still images to produce an AVI, or produce an AVI directly.
Google Search: Create AVI from still images
Google Search: Create AVI from WPF WriteableBitmap
Have a look at the "Color Basics" example from the Kinect for Windows Developer Toolkit to see how to access the camera and extract data to a WriteableBitmap object.
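On the 30fps-to-6fps drop mentioned in the question, a common fix is a producer/consumer queue: the Kinect event handler only copies the pixel buffer, and a background thread does the PNG encoding. A rough sketch under those assumptions (the method names are illustrative, not taken from the SDK samples):

using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;
using System.Windows.Media;
using System.Windows.Media.Imaging;

BlockingCollection<byte[]> frames = new BlockingCollection<byte[]>(60);   // bounded queue

// Producer: the color-frame event handler just copies the bytes and returns, keeping 30fps.
void QueueFrame(byte[] pixelData)
{
    frames.TryAdd((byte[])pixelData.Clone());   // drop the frame if the encoder falls behind
}

// Consumer: a background task encodes PNGs off the UI/event thread.
Task StartEncoder(int width, int height)
{
    return Task.Factory.StartNew(() =>
    {
        int i = 0;
        foreach (byte[] pixels in frames.GetConsumingEnumerable())
        {
            var source = BitmapSource.Create(width, height, 96, 96,
                PixelFormats.Bgr32, null, pixels, width * 4);
            var encoder = new PngBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create(source));
            using (var fs = File.Create("frame_" + i++ + ".png"))
                encoder.Save(fs);
        }
    }, TaskCreationOptions.LongRunning);
}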

How to get YUV video data stream from web cameras in C# NOT USING DIRECT SHOW?

What I need is a raw camera data stream of images/frames in YUV format. I need to get them using C#, and NOT USING DIRECTSHOW (if at all possible). I need a live stream, not something like 1 frame per 5 seconds. If it were Mono-compatible that would be great, but Windows-only will be OK. I use .NET 3.5, but if needed I can switch to 4.
I heard Silverlight 4 can grab a YUV stream, so how do I do that from a regular C# program?
So how do I get a YUV video data stream from web cameras in C# NOT USING DIRECTSHOW?
Getting raw YUV data from a webcam without DirectShow might be impossible, as far as I know.
The OpenCV VideoCapture class uses DirectShow as well.
For DirectShow solutions: check here
I have a simpler solution (also based on DirectShow) for C++ with OpenCV - check the answer
The Expression Encoder SDK may do what you need depending on exactly what you need to do with the video.
Try Emgu or OpenCvSharp - both are OpenCV wrappers which have capture functionality and will return you an RGB image.
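If YUV specifically is required, one option is to take the RGB frames those wrappers return and convert the pixels yourself. A sketch of an approximate BT.601 RGB-to-YUV conversion (not tied to any particular capture library):

// Approximate full-range BT.601 conversion for a single pixel.
static void RgbToYuv(byte r, byte g, byte b, out byte y, out byte u, out byte v)
{
    y = (byte)Math.Min(255, Math.Max(0,  0.299 * r + 0.587 * g + 0.114 * b));
    u = (byte)Math.Min(255, Math.Max(0, -0.169 * r - 0.331 * g + 0.500 * b + 128));
    v = (byte)Math.Min(255, Math.Max(0,  0.500 * r - 0.419 * g - 0.081 * b + 128));
}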

Displaying live video from a raw uncompressed byte source in C#: WPF vs. Win forms

I have a live 16-bit gray-scale video stream that is pushed through a ring-buffer in memory as a raw, uncompressed byte stream (2 bytes per pixel, 2^18 pixels/frame, 32 frames/sec). (This is coming from a scientific grade camera, via a PCI frame-grabber). I would like to do some simple processing on the video (clip dynamic range, colorize, add overlays) and then show it in a window, using C#.
I have this working using Windows Forms & GDI (for each frame, build a Bitmap object, write raw 32-bit RGB pixel values based on my post-processing steps, and then draw the frame using the Graphics class). But this uses a significant chunk of CPU that I'd like to use for other things. So I'm interested in using WPF for its GPU-accelerated video display. (I'd also like to start using WPF for its data binding & layout features.)
But I've never used WPF before, so I'm unsure how to approach this. Most of what I find online about video & WPF involves reading a compressed video file from disk (e.g. WMV), or getting a stream from a consumer-grade camera using a driver layer that Windows already understands. So it doesn't seem to apply here (but correct me if I'm wrong about this).
So, my questions:
Is there a straightforward, WPF-based way to play video from raw, uncompressed bytes in memory (even if just as 8-bit grayscale, or 24-bit RGB)?
Will I need to build DirectShow filters (or other DirectShow/Media Foundation-ish things) to get the post-processing working on the GPU?
Also, any general advice / suggestions for documentation, examples, blogs, etc that are appropriate to these tasks would be appreciated. Thanks!
Follow-up: After some experimentation, I found WriteableBitmap to be fast enough for my needs, and extremely easy to use correctly: Simply call WritePixels() and any Image controls bound to it will update themselves. InteropBitmap with memory-mapped sections is noticeably faster, but I had to write p/invokes to kernel32.dll to use it on .NET 3.5.
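For reference, a minimal sketch of that WriteableBitmap approach, clipping the 16-bit frames down to 8-bit grayscale before display (the 512x512 size matches the 2^18 pixels per frame mentioned above; the buffer and control names are illustrative):

// Created once on the UI thread and bound to an Image control named 'display'.
WriteableBitmap bitmap = new WriteableBitmap(512, 512, 96, 96, PixelFormats.Gray8, null);
display.Source = bitmap;

// Called per frame: clip the 16-bit dynamic range to 8 bits and push the pixels.
void ShowFrame(ushort[] raw, int blackLevel, int whiteLevel)
{
    var pixels = new byte[raw.Length];
    double scale = 255.0 / (whiteLevel - blackLevel);
    for (int i = 0; i < raw.Length; i++)
    {
        int v = (int)((raw[i] - blackLevel) * scale);
        pixels[i] = (byte)Math.Min(255, Math.Max(0, v));
    }
    bitmap.WritePixels(new Int32Rect(0, 0, 512, 512), pixels, 512, 0);
}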
My VideoRendererElement, though very efficient, does use some hackery to make it work. You may also want to experiment with the WriteableBitmap in .NET 3.5 SP1.
Also, the InteropBitmap is very fast too. It's much more efficient than the WriteableBitmap as it's not double-buffered, though it can be subject to video tearing.
Some further Google-searching yielded this:
http://www.codeplex.com/VideoRendererElement
which I'm looking into now, but may be the right approach here. Of course further thoughts/suggestions are still very much welcome.
