I am creating a program that can dump individual frames from a video feed for a drone competition. I am having a problem whereby the wireless video stream coming from the drone is flickering and jumping all over the place.
I am using this code to capture the video stream:
Capture _capture;
Emgu.CV.Image<Emgu.CV.Structure.Bgr, byte> frame;

void StartCamera()
{
    _capture = null;
    _capture = new Capture((int)nudCamera.Value);
    _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FPS, FrameRate);
    _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, FrameHeight);
    _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, FrameWidth);
    webcam_frm_cnt = 0;
    cam = 1;
    Video_seek = 0;
    System.Windows.Forms.Application.Idle += ProcessFrame;
}
private void ProcessFrame(object sender, EventArgs arg)
{
    try
    {
        Framesno = _capture.GetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_FRAMES);
        frame = _capture.QueryFrame();
        if (frame != null)
        {
            pictureBox1.Image = frame.ToBitmap();
            if (cam == 0)
            {
                Video_seek = (int)(Framesno);
                double time_index = _capture.GetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_MSEC);
                //Time_Label.Text = "Time: " + TimeSpan.FromMilliseconds(time_index).ToString().Substring(0, 8);
                double framenumber = _capture.GetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_FRAMES);
                //Frame_lbl.Text = "Frame: " + framenumber.ToString();
                Thread.Sleep((int)(1000.0 / FrameRate));
            }
            if (cam == 1)
            {
                //Frame_lbl.Text = "Frame: " + (webcam_frm_cnt++).ToString();
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message.ToString());
    }
}
Is there a setting somewhere that I am missing?
This video stream flickering seems to happen in other programs too; however, it fixes itself when you fiddle with the video settings (NTSC/PAL).
Edit: So I need to be able to put the video stream into NTSC/M mode. Is this possible with EmguCV? If so, how do I do it?
Edit 2: All documents that I have read point towards it being completely and utterly impossible to change the video type and that there is no documentation on this topic. I would love to be proved wrong :)
Thanks in advance.
This sourceforge link tells you how to set the video mode.
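I have not verified what the linked page suggests, but with the same EmguCV 2.x API used in the question, the property-based route would look roughly like the sketch below. This is an assumption on my part: CV_CAP_PROP_MODE is the generic "mode" property on the CAP_PROP enum, and whether the driver maps any of its values to NTSC/M is entirely device-specific.
// Sketch, not verified: ask the driver for a different analog video standard.
// The numeric value that means NTSC/M (if any) depends on the capture device
// and its driver; 0 below is only a placeholder.
double requestedMode = 0; // placeholder, device-specific
_capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_MODE, requestedMode);

// Read the property back to see whether the driver accepted the request.
double actualMode = _capture.GetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_MODE);
If the property is ignored, the fallback is usually the driver's own settings dialog, which is where the NTSC/PAL fiddling mentioned above happens.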
Hey,
I'm trying to capture my screen and send/communicate the stream via MR-WebRTC. Communication between two PCs, or between a PC and a HoloLens, already worked for me with webcams, so I thought the next step could be streaming my screen. So I took the UWP application I already had, which worked with my webcam, and tried to make things work.
Setup:
The UWP app is based on the example UWP app from MR-WebRTC.
For capturing I'm using the instructions from Microsoft about screen capturing via GraphicsCapturePicker.
So now I'm stuck in the following situation:
I get a frame from the screen capture, but its type is Direct3D11CaptureFrame. You can see it below in the code snippet.
MR-WebRTC takes a frame of type I420AVideoFrame (also in a code snippet).
How can I "connect" them? (See the conversion sketch below.)
I420AVideoFrame wants a frame in the I420A format (YUV 4:2:0).
Configuring the framePool I can set the DirectXPixelFormat, but it has no YUV 4:2:0 option.
I found this post on SO saying that it is possible.
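For illustration only (none of this is from the original post): a CPU-side conversion from a tightly packed BGRA buffer to I420 is mostly the BT.601 colour transform plus 2x2 chroma subsampling. The accepted answer further down avoids this entirely by handing WebRTC an Argb32VideoFrame instead, so treat this as a sketch that assumes an even width and height and a stride of width * 4.
// Sketch: convert a tightly packed BGRA buffer to I420 (planes in Y, U, V order).
// No chroma averaging; each 2x2 block takes its top-left pixel for U/V.
static byte Clamp(double v) => (byte)Math.Max(0, Math.Min(255, v));

static byte[] BgraToI420(byte[] bgra, int width, int height)
{
    byte[] i420 = new byte[width * height * 3 / 2];
    int yIndex = 0;
    int uIndex = width * height;
    int vIndex = uIndex + (width * height) / 4;

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int p = (y * width + x) * 4;                     // B8G8R8A8 layout
            byte b = bgra[p], g = bgra[p + 1], r = bgra[p + 2];

            // BT.601 studio-swing luma
            i420[yIndex++] = Clamp(0.257 * r + 0.504 * g + 0.098 * b + 16);

            if ((y % 2 == 0) && (x % 2 == 0))
            {
                i420[uIndex++] = Clamp(-0.148 * r - 0.291 * g + 0.439 * b + 128);
                i420[vIndex++] = Clamp(0.439 * r - 0.368 * g - 0.071 * b + 128);
            }
        }
    }
    return i420;
}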
Code snippet, frame from Direct3D:
_framePool = Direct3D11CaptureFramePool.Create(
    _canvasDevice,                             // D3D device
    DirectXPixelFormat.B8G8R8A8UIntNormalized, // Pixel format
    3,                                         // Number of frames
    _item.Size);                               // Size of the buffers

_session = _framePool.CreateCaptureSession(_item);
_session.StartCapture();

_framePool.FrameArrived += (s, a) =>
{
    using (var frame = _framePool.TryGetNextFrame())
    {
        // Here I would take the frame and call the MR-WebRTC method LocalI420AFrameReady
    }
};
Code snippet, frame from WebRTC:
// This is the way with the webcam; LocalI420AFrameReady was subscribed to
// the event I420AVideoFrameReady and got the frame from there
_webcamSource = await DeviceVideoTrackSource.CreateAsync();
_webcamSource.I420AVideoFrameReady += LocalI420AFrameReady;

// enqueueing the newly captured video frames into the bridge,
// which will later deliver them when the Media Foundation
// playback pipeline requests them.
private void LocalI420AFrameReady(I420AVideoFrame frame)
{
    lock (_localVideoLock)
    {
        if (!_localVideoPlaying)
        {
            _localVideoPlaying = true;

            // Capture the resolution into local variables usable from the lambda below
            uint width = frame.width;
            uint height = frame.height;

            // Defer UI-related work to the main UI thread
            RunOnMainThread(() =>
            {
                // Bridge the local video track with the local media player UI
                int framerate = 30; // assumed, for lack of an actual value
                _localVideoSource = CreateI420VideoStreamSource(width, height, framerate);
                var localVideoPlayer = new MediaPlayer();
                localVideoPlayer.Source = MediaSource.CreateFromMediaStreamSource(_localVideoSource);
                localVideoPlayerElement.SetMediaPlayer(localVideoPlayer);
                localVideoPlayer.Play();
            });
        }
    }

    // Enqueue the incoming frame into the video bridge; the media player will
    // later dequeue it as soon as it's ready.
    _localVideoBridge.HandleIncomingVideoFrame(frame);
}
I found a solution for my problem by creating an issue on the GitHub repo. The answer was provided by KarthikRichie:
You have to use the ExternalVideoTrackSource.
You can convert from the Direct3D11CaptureFrame to an Argb32VideoFrame.
// Setting up external video track source
_screenshareSource = ExternalVideoTrackSource.CreateFromArgb32Callback(FrameCallback);

struct WebRTCFrameData
{
    public IntPtr Data;
    public uint Height;
    public uint Width;
    public int Stride;
}

public void FrameCallback(in FrameRequest frameRequest)
{
    try
    {
        if (FramePool != null)
        {
            using (Direct3D11CaptureFrame _currentFrame = FramePool.TryGetNextFrame())
            {
                if (_currentFrame != null)
                {
                    WebRTCFrameData webRTCFrameData = ProcessBitmap(_currentFrame.Surface).Result;
                    frameRequest.CompleteRequest(new Argb32VideoFrame()
                    {
                        data = webRTCFrameData.Data,
                        height = webRTCFrameData.Height,
                        width = webRTCFrameData.Width,
                        stride = webRTCFrameData.Stride
                    });
                }
            }
        }
    }
    catch (Exception ex)
    {
    }
}
private async Task<WebRTCFrameData> ProcessBitmap(IDirect3DSurface surface)
{
    SoftwareBitmap softwareBitmap = await SoftwareBitmap.CreateCopyFromSurfaceAsync(surface, Windows.Graphics.Imaging.BitmapAlphaMode.Straight);

    byte[] imageBytes = new byte[4 * softwareBitmap.PixelWidth * softwareBitmap.PixelHeight];
    softwareBitmap.CopyToBuffer(imageBytes.AsBuffer());

    WebRTCFrameData argb32VideoFrame = new WebRTCFrameData();
    argb32VideoFrame.Data = GetByteIntPtr(imageBytes);
    argb32VideoFrame.Height = (uint)softwareBitmap.PixelHeight;
    argb32VideoFrame.Width = (uint)softwareBitmap.PixelWidth;

    var test = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read);
    int count = test.GetPlaneCount();
    var pl = test.GetPlaneDescription(count - 1);
    argb32VideoFrame.Stride = pl.Stride;

    return argb32VideoFrame;
}
private IntPtr GetByteIntPtr(byte[] byteArr)
{
    IntPtr intPtr2 = System.Runtime.InteropServices.Marshal.UnsafeAddrOfPinnedArrayElement(byteArr, 0);
    return intPtr2;
}
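One caveat with the helper above (my observation, not part of the original answer): UnsafeAddrOfPinnedArrayElement does not pin the array itself, so the GC is free to move it while WebRTC still holds the pointer. A more defensive variant pins the buffer explicitly, roughly like this:
// Sketch only: pin the managed buffer so the pointer handed to WebRTC cannot be
// invalidated by a GC compaction. The handle must be freed once the frame has
// been consumed (e.g. after CompleteRequest).
private IntPtr PinBuffer(byte[] byteArr, out System.Runtime.InteropServices.GCHandle handle)
{
    handle = System.Runtime.InteropServices.GCHandle.Alloc(byteArr, System.Runtime.InteropServices.GCHandleType.Pinned);
    return handle.AddrOfPinnedObject();
}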
I am processing frames received from a Kinect v2 (Color and IR) in UWP. The program runs on a remote machine (XBOX One S). The main goal is to get the frames and write them to disk at 30 fps for both Color and IR, to process them further later.
I am using the following code to check the frame rate:
public MainPage()
{
    this.InitialiseFrameReader(); // initialises MediaCapture for IR and Color
}

const int COLOR_SOURCE = 0;
const int IR_SOURCE = 1;

private async void InitialiseFrameReader()
{
    await CleanupMediaCaptureAsync();

    var allGroups = await MediaFrameSourceGroup.FindAllAsync();
    if (allGroups.Count == 0)
    {
        return;
    }

    _groupSelectionIndex = (_groupSelectionIndex + 1) % allGroups.Count;
    var selectedGroup = allGroups[_groupSelectionIndex];
    var kinectGroup = selectedGroup;

    try
    {
        await InitializeMediaCaptureAsync(kinectGroup);
    }
    catch (Exception exception)
    {
        _logger.Log($"MediaCapture initialization error: {exception.Message}");
        await CleanupMediaCaptureAsync();
        return;
    }

    // Set up frame readers, register event handlers and start streaming.
    var startedKinds = new HashSet<MediaFrameSourceKind>();
    foreach (MediaFrameSource source in _mediaCapture.FrameSources.Values.Where(x => x.Info.SourceKind == MediaFrameSourceKind.Color || x.Info.SourceKind == MediaFrameSourceKind.Infrared))
    {
        MediaFrameSourceKind kind = source.Info.SourceKind;
        MediaFrameSource frameSource = null;

        int frameindex = COLOR_SOURCE;
        if (kind == MediaFrameSourceKind.Infrared)
        {
            frameindex = IR_SOURCE;
        }

        // Ignore this source if we already have a source of this kind.
        if (startedKinds.Contains(kind))
        {
            continue;
        }

        MediaFrameSourceInfo frameInfo = kinectGroup.SourceInfos[frameindex];
        if (_mediaCapture.FrameSources.TryGetValue(frameInfo.Id, out frameSource))
        {
            // Create a frameReader based on the source stream
            MediaFrameReader frameReader = await _mediaCapture.CreateFrameReaderAsync(frameSource);
            frameReader.FrameArrived += FrameReader_FrameArrived;
            _sourceReaders.Add(frameReader);

            MediaFrameReaderStartStatus status = await frameReader.StartAsync();
            if (status == MediaFrameReaderStartStatus.Success)
            {
                startedKinds.Add(kind);
            }
        }
    }
}
private async Task InitializeMediaCaptureAsync(MediaFrameSourceGroup sourceGroup)
{
    if (_mediaCapture != null)
    {
        return;
    }

    // Initialize mediacapture with the source group.
    _mediaCapture = new MediaCapture();
    var settings = new MediaCaptureInitializationSettings
    {
        SourceGroup = sourceGroup,
        SharingMode = MediaCaptureSharingMode.SharedReadOnly,
        StreamingCaptureMode = StreamingCaptureMode.Video,
        MemoryPreference = MediaCaptureMemoryPreference.Cpu
    };
    await _mediaCapture.InitializeAsync(settings);
}
private void FrameReader_FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        if (frame != null)
        {
            //Settings.cameraframeQueue.Enqueue(null, frame.SourceKind.ToString(), frame.SystemRelativeTime.Value); //Add to Queue to process frame
            Debug.WriteLine(frame.SourceKind.ToString() + " : " + frame.SystemRelativeTime.ToString());
        }
    }
}
I am trying to debug the application to check the frame rate so I have removed further processing.
I am not sure whether I am calculating it incorrectly or something else is wrong.
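For reference, the post does not show how the fps tables below were produced; one way to get them from the logged SystemRelativeTime values is to bucket the timestamps by whole second and tally the per-second counts. This is only a sketch, not the code that produced the numbers:
// Sketch: turn logged frame timestamps into an "fps -> occurrences" histogram.
// Requires: using System; using System.Linq; using System.Collections.Generic;
Dictionary<int, int> FpsHistogram(IEnumerable<TimeSpan> timestamps)
{
    var framesPerSecond = timestamps
        .GroupBy(t => (long)t.TotalSeconds)   // frames that arrived within the same second
        .Select(g => g.Count());

    var histogram = new Dictionary<int, int>();
    foreach (int fps in framesPerSecond)
    {
        histogram[fps] = histogram.TryGetValue(fps, out int n) ? n + 1 : 1;
    }
    return histogram;
}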
For example, System Relative Time from 04:37:06 to 04:37:48 gives:
IR, fps (occurrences): 31 (1), 30 (36), 29 (18), 28 (4)
Color, fps (occurrences): 30 (38), 29 (18), 28 (3)
I want the frame rate to be constant (30 fps) and aligned, so that IR and Color produce the same number of frames over the same time span.
This is without any additional code. As soon as I add a processing queue or any other work, the fps decreases and ranges from 15 to 30.
Can anyone please help me with this?
Thank you.
UPDATE:
After some testing and working around, I have noticed that the PC produces 30 fps, but the XBOX One (the remote device) produces very low fps in debug mode. It does improve when running in release mode, but the memory allocated to UWP apps on the console is quite low.
XBOX One has a maximum of 1 GB of memory available for apps (5 GB for games): https://learn.microsoft.com/en-us/windows/uwp/xbox-apps/system-resource-allocation
On the PC the fps stays at 30, since memory is not restricted in the same way. The restricted memory is what causes the frame rate to drop on the console; the fps improved when running in release mode or when the app was published to the Microsoft Store.
Emgu.CV.Capture() does not work
public Form1()
{
    InitializeComponent();
    grabber = new Emgu.CV.Capture();
    grabber.QueryFrame();
    Application.Idle += new EventHandler(FrameGrabber);
}

void FrameGrabber(object sender, EventArgs e)
{
    currentFrame = grabber.QueryFrame();
    if (currentFrame != null)
    {
        currentFrameCopy = currentFrame.Copy();
        imageBoxFrameGrabber.Image = currentFrame;
    }
}
I cannot get a picture. Can you tell me what I'm doing wrong?
When you start up your camera capture you need to actually tell it what camera to use.
This line:
grabber = new Emgu.CV.Capture();
requires you to tell it which camera to use; I would suggest changing it to this:
grabber = new Emgu.CV.Capture(0);
In theory it should open the default camera, but it is worth being specific.
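Beyond picking the camera index explicitly, it is worth checking that the capture actually delivers frames before wiring up the Idle handler. This check is my own addition, not part of the original answer; it reuses the fields from the question:
// Sketch: verify the camera opened and is delivering frames before relying on it.
grabber = new Emgu.CV.Capture(0);
var testFrame = grabber.QueryFrame();
if (testFrame == null)
{
    // No frame: wrong camera index, the device is in use by another
    // application, or the driver did not initialise.
    MessageBox.Show("Could not read a frame from camera 0.");
}
else
{
    Application.Idle += new EventHandler(FrameGrabber);
}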
I am coding a Windows Forms application in C# using OpenCV. I want to capture video from the webcam and show it in a window, and I want to do it using P/Invoke. I have the C++ code below, but I don't know how to do the same in C#.
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    Mat edges;
    namedWindow("edges", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, COLOR_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7, 7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
I found a link on GitHub which detects a face from an image (Face Detect). I want to capture video from the webcam in exactly the same way. Any reference link would be helpful.
#region cameracapture
if (comboBox1.Text == "Capture From Camera")
{
    try
    {
        _capture = null;
        _capture = new Capture(0);
        _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FPS, 30);
        _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, 240);
        _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, 320);
        Time_Label.Text = "Time: ";
        Codec_lbl.Text = "Codec: ";
        Frame_lbl.Text = "Frame: ";
        webcam_frm_cnt = 0;
        cam = 1;
        Video_seek.Value = 0;
        Application.Idle += ProcessFrame;
        button1.Text = "Stop";
        comboBox1.Enabled = false;
    }
    catch (NullReferenceException excpt)
    {
        MessageBox.Show(excpt.Message);
    }
}
#endregion cameracapture
Source: CodeProject
I just started Kinect programming and I am quite happy to have been able to display RGB and IR images at the same time.
Using the screenshot button I am able to save each frame whenever I want (same procedure as in the SDK samples).
Now, if I want to continuously save those frames, how can I go about doing that?
I am new to C# and Kinect programming in general, so can anyone help me?
Thanks;
just try:
private unsafe void saveFrame(Object reference)
{
    MultiSourceFrame mSF = (MultiSourceFrame)reference;

    using (var frame = mSF.DepthFrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            using (Microsoft.Kinect.KinectBuffer depthBuffer = frame.LockImageBuffer())
            {
                if ((frame.FrameDescription.Width * frame.FrameDescription.Height) == (depthBuffer.Size / frame.FrameDescription.BytesPerPixel))
                {
                    ushort* frameData = (ushort*)depthBuffer.UnderlyingBuffer;
                    byte[] rawDataConverted = new byte[(int)(depthBuffer.Size / 2)];

                    for (int i = 0; i < (int)(depthBuffer.Size / 2); ++i)
                    {
                        // Keep the depth value only if it is inside the reliable range;
                        // note that the cast truncates the 16-bit depth to its low byte.
                        ushort depth = frameData[i];
                        rawDataConverted[i] = (byte)(depth >= frame.DepthMinReliableDistance && depth <= frame.DepthMaxReliableDistance ? (depth) : 0);
                    }

                    // The file name is the current time (hh-mm-ss), so frames captured
                    // within the same second will overwrite each other.
                    String date = string.Format("{0:hh-mm-ss}", DateTime.Now);
                    String filePath = System.IO.Directory.GetCurrentDirectory() + "/test/" + date + ".raw";
                    File.WriteAllBytes(filePath, rawDataConverted);
                    rawDataConverted = null;
                }
            }
        }
    }
}
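To make the saving continuous rather than tied to a button, the method above can be driven from a MultiSourceFrameReader's frame-arrived event. A minimal sketch, assuming the usual Kinect v2 setup; the _sensor and _reader fields and StartContinuousSaving are illustrative names, not from the original answer:
// Sketch: drive saveFrame from the MultiSourceFrameReader so every arriving
// frame is written to disk, instead of only on a screenshot button click.
KinectSensor _sensor = KinectSensor.GetDefault();
MultiSourceFrameReader _reader;

void StartContinuousSaving()
{
    _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth);
    _reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
    _sensor.Open();
}

void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    MultiSourceFrame frame = e.FrameReference.AcquireFrame();
    if (frame != null)
    {
        saveFrame(frame); // the method from the answer above
    }
}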
You can also take a look here:
Saving raw depth-data