I'm using XAudio2 with SlimDX and I've managed to get it playing a short (~8 second) wav file on loop. However, as it approaches the end of the first loop, the audio begins to stutter, and the stuttering continues into the next loop, getting worse and worse as time goes on.
I turned on the debug profile and in the output window I get these errors:
XAUDIO2: WARNING: Spent 5.63ms in the OnVoiceProcessingPassStart callback
XAUDIO2: WARNING: Spent 5.60ms in the OnVoiceProcessingPassStart callback
XAUDIO2: WARNING: Spent 5.59ms in the OnVoiceProcessingPassStart callback
XAUDIO2: WARNING: Spent 5.69ms in the OnVoiceProcessingPassStart callback
These warnings coincide with when the stuttering occurs. I am doing nothing in these callbacks (I haven't even subscribed to the events), and yet it's slowing down. I've added my code below for reference:
Wave class for holding the data stream and the buffer:
public class Wave
{
public WaveStream Data { get; private set; }
public AudioBuffer Buffer { get; private set; }
public Wave(string path, bool repeating)
{
Data = new WaveStream(path);
Buffer = new AudioBuffer();
Buffer.AudioBytes = (int)Data.Length;
Buffer.AudioData = Data;
if (repeating)
{
Buffer.Flags = BufferFlags.EndOfStream;
}
else
{
Buffer.Flags = BufferFlags.None;
}
Buffer.PlayBegin = 0;
Buffer.PlayLength = 0;
Buffer.LoopBegin = 0;
Buffer.LoopCount = 100;
Buffer.LoopLength = 0;
}
}
Sound class for holding the XAudio engine and the voices, and to cover adding/removing voices:
public class Sound
{
private XAudio2 audio;
private MasteringVoice master;
private List<SourceVoice> sources;
public Sound()
{
audio = new XAudio2(XAudio2Flags.DebugEngine, ProcessorSpecifier.AnyProcessor);
master = new MasteringVoice(audio);
sources = new List<SourceVoice>();
}
public void AddSound(Wave wave)
{
SlimDX.Multimedia.WaveFormat format = wave.Data.Format;
SourceVoice source = new SourceVoice(audio, format);
source.Start();
source.SubmitSourceBuffer(wave.Buffer);
sources.Add(source);
}
}
And to run it, I use:
Wave wave = new Wave("Music2/untitled.wav", true);
Sound sound = new Sound();
sound.AddSound(wave);
Rather embarrassingly, this one's my own fault. I had a D3D buffer being recreated and destroyed every frame that I'd forgotten to change to a dynamic buffer. This was causing my memory usage to balloon to several gigabytes, which meant there probably wasn't enough left to allocate for the music.
Once I removed the memory leak, the music sounded fine.
I'm not sure the XAudio2 warning is very well worded, though...
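For anyone curious what that fix looks like, here is the rough shape of it as a sketch (written from memory against SlimDX's Direct3D 11 wrapper; the buffer size and vertex data are placeholders, not my actual code):
// Before (the leak): a brand-new Buffer created and destroyed every frame.
// After: create one dynamic buffer once at startup...
var desc = new BufferDescription
{
    SizeInBytes = sizeInBytes,
    Usage = ResourceUsage.Dynamic,
    BindFlags = BindFlags.VertexBuffer,
    CpuAccessFlags = CpuAccessFlags.Write
};
var vertexBuffer = new SlimDX.Direct3D11.Buffer(device, desc);
// ...then refill it each frame with Map/Unmap instead of recreating it.
DataBox box = context.MapSubresource(vertexBuffer, MapMode.WriteDiscard, SlimDX.Direct3D11.MapFlags.None);
box.Data.WriteRange(vertices);
context.UnmapSubresource(vertexBuffer, 0);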
This scenario is common in real-time video processing, and I need timestamps to synchronize with other devices.
I have tried cv::VideoCapture, but it cannot extract the timestamps from the video stream.
So I have two questions here:
Does the video stream provided by a USB camera actually contain timestamp information?
If it does, what should I do to extract it? A C# solution is best, while C++ is OK.
Addition:
Using these two properties doesn't work:
msecCounter = (long) cap.get(CAP_PROP_POS_MSEC);
frameNumber = (long) cap.get(CAP_PROP_POS_FRAMES);
It always gives the following result:
VIDEOIO ERROR: V4L2: getting property #1 is not supported
msecCounter = 0
frameNumber = -1
OpenCV's VideoCapture class is a very high-level interface for retrieving frames from a camera, so it "hides" a lot of the details that are necessary to connect to the camera, retrieve frames from it, and decode those frames into a useful color space like BGR. This is nice because you don't have to worry about the details of grabbing frames, but the downside is that you don't have direct access to other data you might want, like the frame number or frame timestamp. That doesn't mean it's impossible to get the data you want, though!
Here's a sample frame grabbing loop that will get you what you want, loosely based on the example code from here. This is in C++.
#include "opencv2/opencv.hpp"
using namespace cv;
void ProcessFrame(Mat& frame, long msecCounter, long frameNumber);
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
// TODO: change the width, height, and capture FPS to your desired
// settings.
cap.set(CAP_PROP_FRAME_WIDTH, 1920);
cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
cap.set(CAP_PROP_FPS, 30);
Mat frame;
long msecCounter = 0;
long frameNumber = 0;
for(;;)
{
// Instead of cap >> frame; we'll do something different.
//
// VideoCapture::grab() tells OpenCV to grab a frame from
// the camera, but to not worry about all the color conversion
// and processing to convert that frame into BGR.
//
// This means there's less processing overhead, so the time
// stamp will be more accurate because we are fetching it
// immediately after.
//
// grab() should also wait for the next frame to be available
// based on the capture FPS that is set, so it's okay to loop
// continuously over it.
if(cap.grab())
{
msecCounter = (long) cap.get(CAP_PROP_POS_MSEC);
frameNumber = (long) cap.get(CAP_PROP_POS_FRAMES);
// VideoCapture::retrieve color converts the image and places
// it in the Mat that you provide.
if(cap.retrieve(frame))
{
// Pass the frame and parameters to your processing
// method.
ProcessFrame(frame, msecCounter, frameNumber);
}
}
// TODO: Handle your loop termination condition here
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
void ProcessFrame(Mat& frame, long msecCounter, long frameNumber)
{
// TODO: Make a copy of frame if you are going to process it
// asynchronously or put it in a buffer or queue and then return
// control from this function. This is because the reference Mat
// being passed in is "owned" by the processing loop, and on each
// iteration it will be destructed, so any references to it will be
// invalid. Hence, if you do any work async, you need to copy frame.
//
// If all your processing happens synchronously in this function,
// you don't need to make a copy first because the loop is waiting
// for this function to return.
// TODO: Your processing logic goes here.
}
If you're using C# and Emgu CV it will look a bit different. I haven't tested this code, but it should work or be very close to the solution.
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
static class Program
{
[STAThread]
static void Main()
{
VideoCapture cap = new VideoCapture(0);
if(!cap.IsOpened)
{
return;
}
cap.SetCaptureProperty(CapProp.FrameWidth, 1920);
cap.SetCaptureProperty(CapProp.FrameHeight, 1080);
cap.SetCaptureProperty(CapProp.Fps, 30);
Mat frame = new Mat();
long msecCounter = 0;
long frameNumber = 0;
for(;;)
{
if(cap.Grab())
{
msecCounter = (long) cap.GetCaptureProperty(CapProp.PosMsec);
frameNumber = (long) cap.GetCaptureProperty(CapProp.PosFrames);
if(cap.Retrieve(frame))
{
ProcessFrame(frame, msecCounter, frameNumber);
}
}
// TODO: Determine when to quit the processing loop
}
}
private static void ProcessFrame(Mat frame, long msecCounter, long frameNumber)
{
// Again, copy frame here if you're going to queue the frame or
// do any async processing on it.
// TODO: Your processing code goes here.
}
}
Emgu's VideoCapture implementation also allows for asynchronous Grab operations to be done for you, and notifications when a grabbed frame is ready to be used with Retrieve. That looks like this:
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
static class Program
{
private static Mat s_frame;
private static VideoCapture s_cap;
private static object s_retrieveLock = new object();
[STAThread]
static void Main()
{
s_cap = new VideoCapture(0);
if(!s_cap.IsOpened)
{
return;
}
s_frame = new Mat();
s_cap.SetCaptureProperty(CapProp.FrameWidth, 1920);
s_cap.SetCaptureProperty(CapProp.FrameHeight, 1080);
s_cap.SetCaptureProperty(CapProp.Fps, 30);
s_cap.ImageGrabbed += FrameIsReady;
s_cap.Start();
// TODO: Wait here until you're done with the capture process,
// the same way you'd determine when to exit the for loop in the
// above example.
s_cap.Stop();
s_cap.ImageGrabbed -= FrameIsReady;
}
private static void FrameIsReady(object sender, EventArgs e)
{
// This function is being called from VideoCapture's thread,
// so if you rework this code to run with a UI, be very careful
// about updating Controls here because that needs to be Invoke'd
// back to the UI thread.
// I used a lock here to be extra careful and protect against
// re-entrancy, but this may not be necessary if Emgu's
// VideoCapture thread blocks for completion of this event
// handler.
lock(s_retrieveLock)
{
long msecCounter = (long) s_cap.GetCaptureProperty(CapProp.PosMsec);
long frameNumber = (long) s_cap.GetCaptureProperty(CapProp.PosFrames);
if(s_cap.Retrieve(s_frame))
{
ProcessFrame(s_frame, msecCounter, frameNumber);
}
}
}
private static void ProcessFrame(Mat frame, long msecCounter, long frameNumber)
{
// Again, copy frame here if you're going to queue the frame or
// do any async processing on it.
// TODO: Your processing code goes here.
}
}
I am writing an application in Unity which will be required to capture an image from a camera every frame (at ~60fps), and send the resultant data to another service running locally.
The issue is, I am aware that capturing the rendered data from the camera can cause massive frame rate drops (as explained in this article) when using the GetPixels() method. The article explains that "GetPixels() blocks for ReadPixels() to complete" and "ReadPixels() blocks while flushing the GPU", which is why the GPU and CPU have to sync up, resulting in lag.
I have produced a sample project with a script attached which simply outputs frames to files as PNGs, to replicate the functionality of the program I wish to create. I have done my best to implement what is described in the article, namely allowing the GPU to render a frame and then waiting a few frames before calling GetPixels(), so as not to force the GPU and CPU to sync up. However, I really haven't made any progress with it. The project still plays at about 10-15fps.
How can I achieve a realtime capture of 60 frames per second in Unity?
using System;
using System.Collections;
using System.IO;
using UnityEngine;
namespace Assets
{
public class MyClass: MonoBehaviour
{
private const float reportInterval = 0.5f;
private int screenshotCount = 0;
private const float maxElapsedSecond = 20;
private string screenshotsDirectory = "UnityHeadlessRenderingScreenshots";
public Camera camOV;
public RenderTexture currentRT;
private int frameCount = 0;
private Texture2D resultantImage;
public void Start()
{
camOV.forceIntoRenderTexture = true;
if (Directory.Exists(screenshotsDirectory))
{
Directory.Delete(screenshotsDirectory, true);
}
if (!Application.isEditor)
{
Directory.CreateDirectory(screenshotsDirectory);
camOV.targetTexture = currentRT;
}
}
// Update is called once per frame
public void Update()
{
//Taking Screenshots
frameCount += 1;
if (frameCount == 1)
{
TakeScreenShot();
}
else if (frameCount == 3)
{
ReadPixelsOut("SS_"+screenshotCount+".png");
}
if (frameCount >= 3)
{
frameCount = 0;
}
}
public void TakeScreenShot()
{
screenshotCount += 1;
RenderTexture.active = camOV.targetTexture;
camOV.Render();
resultantImage = new Texture2D(camOV.targetTexture.width, camOV.targetTexture.height, TextureFormat.RGB24, false);
resultantImage.ReadPixels(new Rect(0, 0, camOV.targetTexture.width, camOV.targetTexture.height), 0, 0);
resultantImage.Apply();
}
private void ReadPixelsOut(string filename)
{
if (resultantImage != null)
{
resultantImage.GetPixels();
RenderTexture.active = currentRT;
byte[] bytes = resultantImage.EncodeToPNG();
// save on disk
var path = screenshotsDirectory + "/" + filename;
File.WriteAllBytes(path, bytes);
Destroy(resultantImage);
}
}
}
}
The article implies that it is possible, but I haven't managed to get it to work.
Many thanks in advance for your help.
I am not sure if the OP still needs the answer, but in case someone in the future hits the same problem, let me share what I found.
https://github.com/unity3d-jp/FrameCapturer
This is a plugin designed for rendering animation videos in the Unity editor, but it can also work in a standalone build. In my case, I took parts of it and made my app stream Motion JPEG. I did it at 30fps and never tried 60fps.
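As a side note for future readers: on Unity 2018.1+ (where SystemInfo.supportsAsyncGPUReadback is true), the engine itself offers a non-blocking readback that avoids the CPU/GPU sync the question describes. A minimal sketch, separate from the plugin above:
using UnityEngine;
using UnityEngine.Rendering;
public class AsyncCaptureExample : MonoBehaviour
{
    public RenderTexture source; // the camera's target texture
    void Update()
    {
        // Queue a GPU-to-CPU copy; the callback fires a few frames later,
        // so the main thread never blocks waiting for the GPU to flush.
        AsyncGPUReadback.Request(source, 0, TextureFormat.RGBA32, OnReadback);
    }
    void OnReadback(AsyncGPUReadbackRequest request)
    {
        if (request.hasError)
            return;
        var pixels = request.GetData<byte>(); // raw RGBA32 bytes
        // TODO: encode/send `pixels` to your service here.
    }
}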
I'm writing a simple optical measurement application using Xamarin for Android and the OpenCV C# bindings library.
In an effort to separate the frame grabber from the processing, I've created some blocking collections to pass raw and then processed imagery between different threads. I have an issue where, over a period of about 30 seconds, the GUI goes from beautifully smooth processed video (about 15 seconds) to choppy video (about 10 seconds), and then a crash.
The code below shows the definition of the collections. OnCameraFrame (bottom of the code) shoves each new frame into the camframes collection. In OnCreate, I run a task called camProcessor that takes each frame, does many things, and stuffs it into the outframes collection. OnCameraFrame then takes that processed frame and shows it in the GUI. For the purposes of this post and testing, I've completely commented out all my processing, so this issue exists simply from passing raw data through the collections.
One other note is that my collections seem to be running very fast. At no point do I ever have more than 1 frame in there, so it's not an overflow issue (I think).
Can anyone point to why this strategy isn't working well?
BlockingCollection<Mat> camframes = new BlockingCollection<Mat>(10);
BlockingCollection<Mat> outframes = new BlockingCollection<Mat>(10);
public CameraBridgeViewBase mOpenCvCameraView { get; private set; }
protected override void OnCreate(Bundle savedInstanceState)
{
//LayoutStuff
mOpenCvCameraView = FindViewById<CameraBridgeViewBase>(Resource.Id.squish_cam);
Task.Run(() => camProcessor());
}
public void camProcessor()
{
while (!camframes.IsCompleted)
{
Mat frame = new Mat();
try
{
frame = camframes.Take();
}
catch (InvalidOperationException) { }
Mat frameT = frame.T();
Core.Flip(frame.T(), frameT, 1);
Imgproc.Resize(frameT, frameT, frame.Size());
outframes.Add(frameT);
}
}
public Mat OnCameraFrame(CameraBridgeViewBase.ICvCameraViewFrame inputFrame)
{
mRgba = inputFrame.Rgba();
Mat frame = new Mat();
Task.Run(() => camframes.Add(mRgba));
try
{
frame = outframes.Take();
}
catch (InvalidOperationException) { }
return frame;
}
After looking into the Android SDK monitor.bat output, I discovered this was a memory leak. It turns out this is common with the Java OpenCV wrapper, and is a result of OpenCV's Mat heap being much larger than C# expects it to be, so it's not getting garbage collected.
The solution was to append these at every frame grab:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
GC.WaitForPendingFinalizers();
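Concretely, in the camera callback from the question it ends up looking like this (a sketch; GCSettings lives in the System.Runtime namespace):
public Mat OnCameraFrame(CameraBridgeViewBase.ICvCameraViewFrame inputFrame)
{
    mRgba = inputFrame.Rgba();
    Mat frame = new Mat();
    Task.Run(() => camframes.Add(mRgba));
    try
    {
        frame = outframes.Take();
    }
    catch (InvalidOperationException) { }
    // Ask the runtime to compact the large-object heap on the next
    // collection, then collect and wait, so the native memory behind
    // finished Mats is actually released.
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
    GC.Collect();
    GC.WaitForPendingFinalizers();
    return frame;
}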
I capture images from a webcam, do some heavy processing on them, and then show the result. To keep the framerate high, I want to have the processing of different frames run in parallel.
So, I have a 'Producer', which captures the images and adds these to the 'inQueue'; also it takes an image from the 'outQueue' and displays it:
public class Producer
{
Capture capture;
Queue<Image<Bgr, Byte>> inQueue;
Queue<Image<Bgr, Byte>> outQueue;
Object lockObject;
Emgu.CV.UI.ImageBox screen;
public int frameCounter = 0;
public Producer(Emgu.CV.UI.ImageBox screen, Capture capture, Queue<Image<Bgr, Byte>> inQueue, Queue<Image<Bgr, Byte>> outQueue, Object lockObject)
{
this.screen = screen;
this.capture = capture;
this.inQueue = inQueue;
this.outQueue = outQueue;
this.lockObject = lockObject;
}
public void produce()
{
while (true)
{
lock (lockObject)
{
inQueue.Enqueue(capture.QueryFrame());
if (inQueue.Count == 1)
{
Monitor.PulseAll(lockObject);
}
if (outQueue.Count > 0)
{
screen.Image = outQueue.Dequeue();
}
}
frameCounter++;
}
}
}
There are different 'Consumers' who take an image from the inQueue, do some processing, and add them to the outQueue:
public class Consumer
{
Queue<Image<Bgr, Byte>> inQueue;
Queue<Image<Bgr, Byte>> outQueue;
Object lockObject;
string name;
Image<Bgr, Byte> image;
public Consumer(Queue<Image<Bgr, Byte>> inQueue, Queue<Image<Bgr, Byte>> outQueue, Object lockObject, string name)
{
this.inQueue = inQueue;
this.outQueue = outQueue;
this.lockObject = lockObject;
this.name = name;
}
public void consume()
{
while (true)
{
lock (lockObject)
{
if (inQueue.Count == 0)
{
Monitor.Wait(lockObject);
continue;
}
image = inQueue.Dequeue();
}
// Do some heavy processing with the image
lock (lockObject)
{
outQueue.Enqueue(image);
}
}
}
}
Rest of the important code is this section:
private void Form1_Load(object sender, EventArgs e)
{
Consumer[] c = new Consumer[consumerCount];
Thread[] t = new Thread[consumerCount];
Object lockObj = new object();
Queue<Image<Bgr, Byte>> inQueue = new Queue<Image<Bgr, Byte>>();
Queue<Image<Bgr, Byte>> outQueue = new Queue<Image<Bgr, Byte>>();
p = new Producer(screen1, capture, inQueue, outQueue, lockObj);
for (int i = 0; i < consumerCount; i++)
{
c[i] = new Consumer(inQueue, outQueue, lockObj, "c_" + Convert.ToString(i));
}
for (int i = 0; i < consumerCount; i++)
{
t[i] = new Thread(c[i].consume);
t[i].Start();
}
Thread pt = new Thread(p.produce);
pt.Start();
}
The parallelisation actually works fine: I get a linear speed increase with each added thread (up to a certain point, of course). The problem is that I get artifacts in the output, even when running only one thread. The artifacts look like part of the picture is not in the right place.
Example of the artifact (this is without any processing to keep it clear, but the effect is the same)
Any ideas what causes this?
Thanks
Disclaimer: This post isn't supposed to fully describe an answer, but instead give some hints on why the artifact is being shown.
A quick analysis shows that the artifact is, in fact, a partial, vertically mirrored snippet of a frame. I copied it, mirrored it, and placed it back over the image, adding an awful marker to show its placement:
Two things immediately come to attention:
The artifact is roughly positioned in the 'correct' place it would occupy, except that its position is also vertically mirrored;
The image is slightly different, indicating that it may belong to a different frame.
It's been a while since I played around with raw capture and ran into a similar issue, but I remember that depending on how the driver is implemented (or set up - this particular issue happened when setting a specific imaging device for interlaced capture) it may fill its framebuffer alternating between 'top-down' and 'bottom-up' scans - as soon as the frame is full, the 'cursor' reverts direction.
It seems to me that you're running into a race condition/buffer underrun situation, where the transfer from the framebuffer to your application is happening before the full frame is transferred by the device.
In that case, you'd receive a partial image, and the area still not refreshed would show a bit of the previously transferred frame.
If I'd have to bet, I'd say that the artifact may appear on sequential order, not on the same position but 'fluctuating' on a specific direction (up or down), but always as a mirrored bit.
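If that's what is happening here, one cheap experiment (a sketch under that assumption, not a guaranteed fix) is to deep-copy each frame as soon as it's grabbed, so the queue never holds a buffer the driver may still be writing into:
// In Producer.produce(): Copy() allocates a fresh Image<Bgr, Byte>,
// detached from the capture driver's internal buffer.
inQueue.Enqueue(capture.QueryFrame().Copy());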
Well, I think the problem is here. This section of code does not guarantee that a single thread moves an image between the two queues as one atomic step: an image dequeued from inQueue is not necessarily enqueued into outQueue in the same order, because another consumer can get in between the two locks.
while (true)
{
lock (lockObject)
{
if (inQueue.Count == 0)
{
Monitor.Wait(lockObject);
continue;
}
image = inQueue.Dequeue();
}
// Do some heavy processing with the image
lock (lockObject)
{
outQueue.Enqueue(image);
}
}
Similar to @OnoSendai, I'm not trying to solve the exact problem as stated. I would have to write an app and I just don't have the time. But the two things that I would change right away would be to use the ConcurrentQueue class so that you have thread-safety, and to use the Task library functions in order to create parallel tasks on different processor cores. These are found in the System.Collections.Concurrent and System.Threading.Tasks namespaces.
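A minimal sketch of that first suggestion, reusing the question's image type (ProcessImage is a stand-in for the heavy processing, not from the original code):
using System.Collections.Concurrent;
var inQueue = new ConcurrentQueue<Image<Bgr, Byte>>();
var outQueue = new ConcurrentQueue<Image<Bgr, Byte>>();
// Producer: Enqueue is thread-safe, no shared lock required.
inQueue.Enqueue(capture.QueryFrame());
// Consumer: TryDequeue returns false instead of blocking when empty.
Image<Bgr, Byte> image;
if (inQueue.TryDequeue(out image))
{
    outQueue.Enqueue(ProcessImage(image));
}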
Also, vertically flipping a chunk like that looks like more than an artifact to me. If it also happens when executing in a single thread as you mentioned, then I would definitely re-focus on the "heavy processing" part of the equation.
Good luck! Take care.
You may have two problems:
1) Parallelism doesn't ensure that images are added to the out queue in the right order. I imagine that displaying image 8 before images 6 and 7 can produce some artifacts. In the consumer thread, you have to wait until the previous consumer has posted its image to the out queue before posting the next image. Tasks can help greatly with that because of their inherent synchronisation mechanism (see the sketch after this list).
2) You may also have problems in the rendering code.
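To make point 1 concrete, here is a minimal sketch of the Task-based ordering idea (Process is a hypothetical stand-in for the heavy processing): the tasks themselves are queued in capture order, so results come out in capture order even when they finish out of order.
using System.Collections.Concurrent;
using System.Threading.Tasks;
var pending = new BlockingCollection<Task<Image<Bgr, Byte>>>(boundedCapacity: 8);
// Capture side: processing starts immediately on the thread pool,
// but the Task objects are queued in capture order.
pending.Add(Task.Run(() => Process(capture.QueryFrame())));
// Display side: Result blocks until this particular frame is finished,
// so frames are shown strictly in capture order.
foreach (var task in pending.GetConsumingEnumerable())
{
    screen.Image = task.Result;
}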
I need to play single sound repeatedly in my app, for example, a gunshot, using XAudio2.
This is the part of the code I wrote for that purpose:
public sealed class X2SoundPlayer : IDisposable
{
private readonly WaveStream _stream;
private readonly AudioBuffer _buffer;
private readonly SourceVoice _voice;
public X2SoundPlayer(XAudio2 device, string pcmFile)
{
var fileStream = File.OpenRead(pcmFile);
_stream = new WaveStream(fileStream);
fileStream.Close();
_buffer = new AudioBuffer
{
AudioData = _stream,
AudioBytes = (int) _stream.Length,
Flags = BufferFlags.EndOfStream
};
_voice = new SourceVoice(device, _stream.Format);
}
public void Play()
{
_voice.SubmitSourceBuffer(_buffer);
_voice.Start();
}
public void Dispose()
{
_stream.Close();
_stream.Dispose();
_buffer.Dispose();
_voice.Dispose();
}
}
The code above is actually based on a SlimDX sample.
What it does now is, when I call Play() repeatedly, the sound plays like:
sound -> sound -> sound
So it just fills the buffer and plays it.
But, I need to be able to play the same sound while the current one is playing, so effectively these two or more should mix and play at the same time.
Is there something here that I've missed, or is it not possible with my current solution (perhaps SubmixVoices could help)?
I'm trying to find something related in the docs, but I've had no success, and there are not many examples online I could reference.
Thanks.
Although it may be a better option to use XACT for this purpose because it supports sound cues (exactly what I needed), I did manage to get it working this way.
I've changed the code so it always creates a new SourceVoice object from the stream and plays it.
// ------ code piece
/// <summary>
/// Gets the available voice.
/// </summary>
/// <returns>A new SourceVoice object is always returned.</returns>
private SourceVoice GetAvailableVoice()
{
return new SourceVoice(_player.GetDevice(), _stream.Format);
}
/// <summary>
/// Plays this sound asynchronously.
/// </summary>
public void Play()
{
// get the next available voice
var voice = GetAvailableVoice();
if (voice != null)
{
// submit new buffer and start playing.
voice.FlushSourceBuffers();
voice.SubmitSourceBuffer(_buffer);
voice.Start();
}
}
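One caveat worth noting: as written, every Play() creates a voice that is never disposed, so long sessions will accumulate them. A minimal sketch of one way to reap finished voices (my addition, not part of the original fix; avoid destroying a voice from inside an XAudio2 callback):
private readonly List<SourceVoice> _activeVoices = new List<SourceVoice>();
public void Play()
{
    // get the next available voice
    var voice = GetAvailableVoice();
    if (voice != null)
    {
        voice.SubmitSourceBuffer(_buffer);
        voice.Start();
        _activeVoices.Add(voice);
    }
    // Reap voices whose buffers have all finished playing.
    _activeVoices.RemoveAll(v =>
    {
        if (v.State.BuffersQueued == 0)
        {
            v.Dispose();
            return true;
        }
        return false;
    });
}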