Good evening guys,
I have an issue with multithreading. I'm collecting multiple sets of audio samples using a Parallel.ForEach, and I want the collection to run simultaneously. I'm using a sort of producer/consumer pattern, but the producer section, which is the audio sample collection, is sort of hanging.
In each of the parallel threads:
A blocking collection is created to collect audio samples
A progress bar is created to monitor mic input
Lastly, a Record function for recording/collecting audio input
I created a blocking collection array, one per process, and I'm using NAudio's WaveInEvent for recording from the mic.
The challenges I'm facing are:
The program does not resume when I minimize the window
Sometimes the program hangs; other times it takes a while before hanging. Overall the responsiveness is not good at all (jerky).
Apart from all this, the program is working fine.
What can I do for better performance?
Please check my code below. Thanks.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Data;
using System.Data.OleDb;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;
using Microsoft.Win32;
using NAudio.Wave;
public partial class frmAudioDetector : Form
{
//static BlockingCollection<AudioSamples> realtimeSource;
BlockingCollection<AudioSamples>[] realtimeSource;
static WaveInEvent waveSource;
static readonly int sampleRate = 5512;
private void frmAudioDetector_Load(object sender, EventArgs e)
{
try
{
var tokenSource = new CancellationTokenSource();
TabControl.TabPageCollection pages = tabControl1.TabPages;
//Tabs pages.Count is 8
List<int> integerList = Enumerable.Range(0, pages.Count).ToList();
//Parallel.foreach for simultaneous threads at the same time
Parallel.ForEach<int>(integerList, i =>
{
realtimeSource[i] = new BlockingCollection<AudioSamples>();
var firstProgressBar = (from t in pages[i].Controls.OfType<ProgressBar>()
select t).FirstOrDefault();
var firstEmptyComboBox = (from c in pages[i].Controls.OfType<ComboBox>()
select c).FirstOrDefault();
int deviceNum = firstEmptyComboBox.SelectedIndex;
//create a separate task for each tab for recording
_ = Task.Factory.StartNew(() => RecordMicNAudio(deviceNum, firstProgressBar, i));
});
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
//Record audio or store audio samples
void RecordMicNAudio(int deviceNum, ProgressBar progressBar, int t)
{
waveSource = new WaveInEvent();
waveSource.DeviceNumber = deviceNum;
waveSource.WaveFormat = new NAudio.Wave.WaveFormat(rate: sampleRate, bits: 16, channels: 1);
waveSource.DataAvailable += (_, e) =>
{
// using short because 16 bits per sample is used as input wave format
short[] samples = new short[e.BytesRecorded / 2];
Buffer.BlockCopy(e.Buffer, 0, samples, 0, e.BytesRecorded);
// converting to [-1, +1] range
float[] floats = Array.ConvertAll(samples, (sample => (float)sample / short.MaxValue));
//collect realtime audio samples
realtimeSource[t].Add(new AudioSamples(floats, string.Empty, sampleRate));
//Display volume meter in progress bar below
float maxValue = 32767;
int peakValue = 0;
int bytesPerSample = 2;
for (int index = 0; index < e.BytesRecorded; index += bytesPerSample)
{
int value = BitConverter.ToInt16(e.Buffer, index);
peakValue = Math.Max(peakValue, value);
}
var fraction = peakValue / maxValue;
int barCount = 35;
if (progressBar.InvokeRequired)
{
Action action = () => progressBar.Value = (int)(barCount * fraction);
this.BeginInvoke(action);
}
else progressBar.Value = (int)(barCount * fraction);
};
waveSource.RecordingStopped += (_, _) => Debug.WriteLine("Sound Stopped! Cannot capture sound from device...");
waveSource.BufferMilliseconds = 1000;
waveSource.StartRecording();
}
}
Accessing pages[i].Controls from a non-UI thread (e.g. a thread-pool thread coming from Parallel.ForEach) seems wrong.
The NAudio object already uses event-driven programming (you provide a DataAvailable handler and a RecordingStopped handler), so there's no need to do that setup work in parallel, or on any thread other than the main UI thread.
I would not invoke the volume indicator (progress bar) update directly from the DataAvailable handler. Rather, I'd update the control on a Timer tick and just update a shared variable in the DataAvailable handler. This is event-driven programming -- no extra threads or tasks are required, as far as I can see, apart from the ones already used by the wave source's IO threads.
For example: use a variable with a simple lock. Access to this data must be governed by a lock because the DataAvailable handler is invoked on an IO thread to store the current volume, while the value is read on the UI thread by the Timer Tick handler. You can update at a modest rate, certainly no faster than your screen refresh rate; 4 or 5 times per second is likely frequent enough. Your BufferMilliseconds is already 1000 ms (one second), so you may only be getting sample buffers once per second anyway.
Form-level fields:
object sharedAccess = new object();
float sharedVolume;

WaveInEvent DataAvailable handler:
lock (sharedAccess)
{
    sharedVolume = ...; // e.g. the peak value computed from e.Buffer
}

Timer Tick handler:
float volume = 0;
lock (sharedAccess)
{
    volume = sharedVolume;
}
progressBar.Value = /* something based on... */ (int)(volume * progressBar.Maximum);
Goals:
do as little work as possible in all event handlers on the UI thread.
do as little work as possible in the IO threads (WaveInEvent.DataAvailable handler).
minimize synchronization between threads when it must occur (try to block the other thread for the least amount of time possible).
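Putting those goals together, here is a minimal sketch of what the wiring could look like, shown for a single device/tab. It assumes a WinForms Timer called volumeTimer and a ProgressBar called progressBar1 dropped on the form -- both names are just illustrative, not from the question.

// Form-level fields (names are illustrative)
private readonly object sharedAccess = new object();
private float sharedVolume;
private WaveInEvent waveSource;

private void frmAudioDetector_Load(object sender, EventArgs e)
{
    waveSource = new WaveInEvent
    {
        DeviceNumber = 0,
        WaveFormat = new NAudio.Wave.WaveFormat(rate: 5512, bits: 16, channels: 1),
        BufferMilliseconds = 1000
    };

    waveSource.DataAvailable += (_, args) =>
    {
        // IO thread: find the peak sample and store it; no UI work here.
        short peak = 0;
        for (int i = 0; i < args.BytesRecorded; i += 2)
        {
            short sample = BitConverter.ToInt16(args.Buffer, i);
            if (sample > peak) peak = sample;
        }
        lock (sharedAccess)
        {
            sharedVolume = peak / 32768f;
        }
    };
    waveSource.StartRecording();

    // UI thread: a WinForms Timer ticks on the UI thread, so it can touch
    // the ProgressBar directly. A few updates per second is plenty.
    volumeTimer.Interval = 250;
    volumeTimer.Tick += (_, __) =>
    {
        float volume;
        lock (sharedAccess)
        {
            volume = sharedVolume;
        }
        progressBar1.Value = (int)(progressBar1.Maximum * volume);
    };
    volumeTimer.Start();
}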
With the volume meter out of the way, I'm similarly suspicious about how BlockingCollection coordinates access when calling realtimeSource[t].Add(new AudioSamples(floats, string.Empty, sampleRate)); Who is reading from this collection? There is contention here between the IO thread on which the bytes are arriving and whatever thread may be consuming these samples.
If you are just recording the samples and don't need to use them until recording is complete, then you don't need any kind of special synchronization or blocking collection to accumulate each buffer into a collection. You can just add the buffers into a traditional List - provided you don't access it from any other threads until recording is completed. There may be overhead in managing access to that blocking collection that you don't need to incur.
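As a rough sketch of that simpler approach, assuming the same 16-bit mono format and that nothing reads the list until RecordingStopped has fired:

// Form-level field: only the DataAvailable handler writes to this list,
// and nothing reads it until recording has stopped.
private readonly List<float[]> recordedBuffers = new List<float[]>();

// In the same setup code that creates waveSource:
waveSource.DataAvailable += (_, args) =>
{
    float[] floats = new float[args.BytesRecorded / 2];
    for (int i = 0; i < floats.Length; i++)
    {
        floats[i] = BitConverter.ToInt16(args.Buffer, i * 2) / 32768f;
    }
    recordedBuffers.Add(floats);
};

waveSource.RecordingStopped += (_, __) =>
{
    // Safe to read here: no further DataAvailable callbacks will arrive.
    float[] allSamples = recordedBuffers.SelectMany(b => b).ToArray();
};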
I get a little nervous when I see all the format conversion you've done:
from incoming buffer to array of shorts
from array of shorts to array of floats
from array of floats to AudioSamples object
from incoming buffer to Int16 during the volume computation (why not re-use your array of shorts?)
It seems like you could cut out a buffer copy here by going directly from the incoming format to the array of floats, and performing your volume computation on the array of floats, or on a pinned pointer to the original buffer data in unsafe mode.
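For instance, a sketch of doing it in a single pass, re-using realtimeSource, AudioSamples, t and sampleRate from the question's code and the sharedAccess/sharedVolume fields from the sketch above:

waveSource.DataAvailable += (_, args) =>
{
    int sampleCount = args.BytesRecorded / 2;
    float[] floats = new float[sampleCount];
    float peak = 0f;

    for (int i = 0; i < sampleCount; i++)
    {
        // One read per sample: bytes -> short -> normalized float.
        float f = BitConverter.ToInt16(args.Buffer, i * 2) / 32768f;
        floats[i] = f;

        // Re-use the same value for the volume meter instead of scanning the buffer again.
        if (f > peak) peak = f;
    }

    realtimeSource[t].Add(new AudioSamples(floats, string.Empty, sampleRate));
    lock (sharedAccess) { sharedVolume = peak; }
};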
Related
private void RunEveryTenFrames(Color32[] pixels, int width, int height)
{
var thread = new Thread(() =>
{
Perform super = new HeavyOperation();
if (super != null)
{
Debug.Log("Result: " + super);
ResultHandler.handle(super);
}
});
thread.Start();
}
I'm running this function every 10 frames in Unity. Is this a bad idea? Also, when I try to add thread.Abort() inside the thread, it says the thread is not defined and gives a "cannot use local variable before it is declared" error.
Is it a good idea to create a new thread every 10 frames in Unity?
No. Ten frames is too short an interval to be repeatedly creating a new Thread.
Creating a new Thread causes overhead each time. That's not bad when done once in a while, but it is when done every 10 frames. Remember, this is not every 10 seconds; it is every 10 frames.
Use the ThreadPool. With ThreadPool.QueueUserWorkItem you re-use threads that already exist in the system instead of creating new ones each time.
Your new RunEveryTenFrames function with ThreadPool should look something like this:
private void RunEveryTenFrames(Color32[] pixels, int width, int height)
{
//Prepare parameter to send to the ThreadPool
Data data = new Data();
data.pixels = pixels;
data.width = width;
data.height = height;
ThreadPool.QueueUserWorkItem(new WaitCallback(ExtractFile), data);
}
private void ExtractFile(object a)
{
//Retrieve the parameters
Data data = (Data)a;
Perform super = new HeavyOperation();
if (super != null)
{
Debug.Log("Result: " + super);
ResultHandler.handle(super);
}
}
public struct Data
{
public Color32[] pixels;
public int width;
public int height;
}
If you ever need to call into Unity's API from this thread, see my other post on how to do that.
This scenario is common in real-time video processing, and I need timestamps to synchronize with other devices.
I have tried cv::VideoCapture, but it cannot extract the timestamps from the video stream.
So I have two questions here:
Does the video stream provided by a USB camera actually contain timestamp information?
If it does, what should I do to extract it? A C# solution is best, but C++ is OK.
Addition:
Using these two properties doesn't work:
msecCounter = (long) cap.get(CAP_PROP_POS_MSEC);
frameNumber = (long) cap.get(CAP_PROP_POS_FRAMES);
It always gives the following result:
VIDEOIO ERROR: V4L2: getting property #1 is not supported
msecCounter = 0
frameNumber = -1
OpenCV's VideoCapture class is a very high level interface for retrieving frames from a camera, so it "hides" a lot of the details that are necessary to connect to the camera, retrieve frames from it, and decode those frames into a useful color space like BGR. This is nice because you don't have to worry about the details of grabbing frames, but the downside is that you don't have direct access to other data you might want, like the frame number or frame timestamp. That doesn't mean it's impossible to get the data you want, though!
Here's a sample frame grabbing loop that will get you what you want, loosely based on the example code from here. This is in C++.
#include "opencv2/opencv.hpp"
using namespace cv;
// Forward declaration so main() can call the processing function defined below.
void ProcessFrame(Mat& frame, long msecCounter, long frameNumber);
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
// TODO: change the width, height, and capture FPS to your desired
// settings.
cap.set(CAP_PROP_FRAME_WIDTH, 1920);
cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
cap.set(CAP_PROP_FPS, 30);
Mat frame;
long msecCounter = 0;
long frameNumber = 0;
for(;;)
{
// Instead of cap >> frame; we'll do something different.
//
// VideoCapture::grab() tells OpenCV to grab a frame from
// the camera, but to not worry about all the color conversion
// and processing to convert that frame into BGR.
//
// This means there's less processing overhead, so the time
// stamp will be more accurate because we are fetching it
// immediately after.
//
// grab() should also wait for the next frame to be available
// based on the capture FPS that is set, so it's okay to loop
// continuously over it.
if(cap.grab())
{
msecCounter = (long) cap.get(CAP_PROP_POS_MSEC);
frameNumber = (long) cap.get(CAP_PROP_POS_FRAMES);
// VideoCapture::retrieve color converts the image and places
// it in the Mat that you provide.
if(cap.retrieve(frame))
{
// Pass the frame and parameters to your processing
// method.
ProcessFrame(frame, msecCounter, frameNumber);
}
}
// TODO: Handle your loop termination condition here
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
void ProcessFrame(Mat& frame, long msecCounter, long frameNumber)
{
// TODO: Make a copy of frame if you are going to process it
// asynchronously or put it in a buffer or queue and then return
// control from this function. This is because the reference Mat
// being passed in is "owned" by the processing loop, and on each
// iteration it will be destructed, so any references to it will be
// invalid. Hence, if you do any work async, you need to copy frame.
//
// If all your processing happens synchronously in this function,
// you don't need to make a copy first because the loop is waiting
// for this function to return.
// TODO: Your processing logic goes here.
}
If you're using C# and Emgu CV it will look a bit different. I haven't tested this code, but it should work or be very close to the solution.
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
static class Program
{
[STAThread]
static void Main()
{
VideoCapture cap = new VideoCapture(0);
if(!cap.IsOpened)
{
return;
}
cap.SetCaptureProperty(CapProp.FrameWidth, 1920);
cap.SetCaptureProperty(CapProp.FrameHeight, 1080);
cap.SetCaptureProperty(CapProp.Fps, 30);
Mat frame = new Mat();
long msecCounter = 0;
long frameNumber = 0;
for(;;)
{
if(cap.Grab())
{
msecCounter = (long) cap.GetCaptureProperty(CapProp.PosMsec);
frameNumber = (long) cap.GetCaptureProperty(CapProp.PosFrames);
if(cap.Retrieve(frame))
{
ProcessFrame(frame, msecCounter, frameNumber);
}
}
// TODO: Determine when to quit the processing loop
}
}
private static void ProcessFrame(Mat frame, long msecCounter, long frameNumber)
{
// Again, copy frame here if you're going to queue the frame or
// do any async processing on it.
// TODO: Your processing code goes here.
}
}
Emgu's VideoCapture implementation also allows for asynchronous Grab operations to be done for you, and notifications when a grabbed frame is ready to be used with Retrieve. That looks like this:
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
static class Program
{
private static Mat s_frame;
private static VideoCapture s_cap;
private static object s_retrieveLock = new object();
[STAThread]
static void Main()
{
s_cap = new VideoCapture(0);
if(!s_cap.IsOpened)
{
return;
}
s_frame = new Mat();
s_cap.SetCaptureProperty(CapProp.FrameWidth, 1920);
s_cap.SetCaptureProperty(CapProp.FrameHeight, 1080);
s_cap.SetCaptureProperty(CapProp.Fps, 30);
s_cap.ImageGrabbed += FrameIsReady;
s_cap.Start();
// TODO: Wait here until you're done with the capture process,
// the same way you'd determine when to exit the for loop in the
// above example.
s_cap.Stop();
s_cap.ImageGrabbed -= FrameIsReady;
}
private static void FrameIsReady(object sender, EventArgs e)
{
// This function is being called from VideoCapture's thread,
// so if you rework this code to run with a UI, be very careful
// about updating Controls here because that needs to be Invoke'd
// back to the UI thread.
// I used a lock here to be extra careful and protect against
// re-entrancy, but this may not be necessary if Emgu's
// VideoCapture thread blocks for completion of this event
// handler.
lock(s_retrieveLock)
{
long msecCounter = (long) s_cap.GetCaptureProperty(CapProp.PosMsec);
long frameNumber = (long) s_cap.GetCaptureProperty(CapProp.PosFrames);
if(s_cap.Retrieve(s_frame))
{
ProcessFrame(s_frame, msecCounter, frameNumber);
}
}
}
private static void ProcessFrame(Mat frame, long msecCounter, long frameNumber)
{
// Again, copy frame here if you're going to queue the frame or
// do any async processing on it.
// TODO: Your processing code goes here.
}
}
I am developing a C# WPF automatic number plate recognition (ANPR) application using an OCR.
The flow is: I am getting pictures from an MJPEG video stream, and these images should be passed to the OCR to get the plate number and other details.
The problem is: the video stream produces about 30 frames/second, the CPU can't handle that much processing, and it takes around 1 second to process 1 frame. Also, when many frames pile up in the queue, CPU usage sits at about 70% (Intel i7, 4th gen).
Can anyone suggest a solution or a better implementation?
//This is the queue where it will hold the frames
// produced from the video streaming(video_Newfram1)
private readonly Queue<byte[]> _anpr1Produces = new Queue<byte[]>();
//I am using AForg.Video to read the MJPEG Streaming
//this event will be triggered for every frame
private void video_NewFrame1(object sender, NewFrameEventArgs eventArgs)
{
var frameDataAnpr = new Bitmap(eventArgs.Frame);
AnprCam1.Source = GetBitmapimage(frameDataAnpr);
//add current fram to the queue
_anpr1Produces.Enqueue(imgByteAnpr);
//this worker is the consumer that will
//take the frames from the queue to the OCR processing
if (!_workerAnpr1.IsBusy)
{
_workerAnpr1.RunWorkerAsync(imgByteAnpr);
}
}
//This is the consumer, it will take the frames from the queue to the OCR
private void WorkerAnpr1_DoWork(object sender, DoWorkEventArgs e)
{
while (true)
{
if (_anpr1Produces.Count <= 0) continue;
BgWorker1(_anpr1Produces.Dequeue());
}
}
//This method will process the frames that sent from the consumer
private void BgWorker1(byte[] imageByteAnpr)
{
var anpr = new cmAnpr("default");
var objgxImage = new gxImage("default");
if (imageByteAnpr != null)
{
objgxImage.LoadFromMem(imageByteAnpr, 1);
if (anpr.FindFirst(objgxImage) && anpr.GetConfidence() >= Configs.ConfidanceLevel)
{
var vehicleNumber = anpr.GetText();
var vehicleType = anpr.GetType().ToString();
if (vehicleType == "0") return;
var imagename = string.Format("{0:yyyy_MMM_dd_HHmmssfff}", currentDateTime) + "-1-" +
vehicleNumber + ".png";
//this task will run async to do the rest of the process which is saving the vehicle image, getting vehicle color, storing to the database ... etc
var tsk = ProcessVehicle("1", vehicleType, vehicleNumber, imageByteAnpr, imagename, currentDateTime, anpr, _anpr1Produces);
}
else
{
GC.Collect();
}
}
}
What you should do is this:
First, figure out whether a frame is worth processing or not. If you're using a compressed video stream, you can usually read the frame's compressed size very quickly; a compressed frame essentially stores the difference between the current frame and the previous one.
When it's small, not much has changed (i.e. no car drove by).
That's a low-tech way to do motion detection, without even having to decode a frame, and it should be extremely fast.
That way, you can probably decide to skip 80% of the frames in a couple of milliseconds.
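A sketch of that low-tech check, assuming you can get at each frame's compressed bytes before decoding; the threshold is purely illustrative and would need tuning for your stream:

private const int MinInterestingFrameSize = 4096; // illustrative threshold, tune for your stream

// In a difference-coded stream, a small compressed frame means little changed
// since the previous frame, so it is probably not worth decoding or OCR'ing.
private bool FrameLooksInteresting(byte[] compressedFrame)
{
    return compressedFrame.Length > MinInterestingFrameSize;
}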
Once in a while you'll get frames that need processing. Make sure that you can buffer enough frames so that you can keep recording while you're doing your slow processing.
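For the buffering side, one option (just a sketch; it needs System.Collections.Concurrent) is a bounded BlockingCollection, so the capture callback never blocks and old frames are simply dropped when the consumer falls behind:

// Bounded queue: at most ~60 frames (about 2 seconds at 30 fps) are kept in memory.
private readonly BlockingCollection<byte[]> _frames = new BlockingCollection<byte[]>(boundedCapacity: 60);

// Producer side, called from the video_NewFrame1 handler:
private void EnqueueFrame(byte[] frameBytes)
{
    // TryAdd returns immediately; if the queue is full the frame is dropped
    // instead of stalling the capture thread.
    _frames.TryAdd(frameBytes);
}

// Consumer side: one long-running task doing the OCR.
private void ConsumeFrames()
{
    foreach (byte[] frameBytes in _frames.GetConsumingEnumerable())
    {
        BgWorker1(frameBytes); // the existing OCR method from the question
    }
}

The consumer would be started once, e.g. with Task.Factory.StartNew(ConsumeFrames, TaskCreationOptions.LongRunning).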
The next thing to do is find regions of interest and focus on those first. You could do that by simply looking at areas where the color changed, or by trying to find rectangular shapes.
Finally, one second of processing is SLOW if you need to process 30 fps. You need to make things faster, or you'll have to build up a gigantic buffer and hope that you'll eventually catch up when the road gets quieter.
Make sure to make proper use of multiple cores if they are available, but in the end, knowing which pieces of the image are NOT relevant is the key to faster performance here.
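If the OCR library tolerates it, one way to use those cores (again just a sketch, assuming each call creates its own cmAnpr/gxImage instances as in the question, since the engine may not be thread-safe) is to run several consumers against the same bounded queue from the snippet above:

// One consumer task per core, capped at 4 here so the UI still has headroom.
var consumers = Enumerable.Range(0, Math.Min(Environment.ProcessorCount, 4))
    .Select(_ => Task.Factory.StartNew(ConsumeFrames, TaskCreationOptions.LongRunning))
    .ToArray();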
We have the following situation: a canvas on which some number of figures is rendered. It may be 1 or many more (a thousand, for example), and we need to animate their translation to another location (on a button click) using a storyboard:
internal void someStoryBoard(figure someFigure, double coordMoveToValue)
{
string sbName = "StoryBoard_" + figure.ID;
string regName = "figure_" + figure.ID;
try
{
cnvsGame.Resources.Remove(sbName);
cnvsGame.UnregisterName(regName);
}
catch{ }
cnvsGame.RegisterName(regName, someFigure.Geometry);
var moveFigureYAnimation = new PointAnimation();
moveFigureYAnimation.From = new Point(someFigure.Geometry.Center.X, someFigure.Geometry.Center.Y);
moveFigureYAnimation.To = new Point(someFigure.Geometry.Center.X, coordMoveToValue);
moveFigureYAnimation.Duration = TimeSpan.FromSeconds(0.5);
var sbFigureMove = new Storyboard();
Storyboard.SetTargetName(sbFigureMove, regName);
Storyboard.SetTargetProperty(sbFigureMove, new PropertyPath(Geometry.CenterProperty));
sbFigureMove.Children.Add(moveFigureYAnimation);
cnvsGame.Resources.Add(sbName, sbFigureMove);
sbFigureMove.Begin();
}
Figures are stored in a list. We call this storyboard method in a for loop:
for(int i = 0; i<listOfFigures.Count; i++)
{
someStoryBoard(listOfFigures[i], someCoord);
}
But here's the problem: if we have a small number of figures, the code completes quickly. But if the number is big, there is a delay after the button is clicked before the figures begin to move.
So here's the question: is it possible to call the someStoryBoard method asynchronously? Is the following behaviour possible: when someStoryBoard is called, it begins to move its figure instantly, without waiting for the whole for loop to complete?
You can add actions to the Dispatcher queue by calling Dispatcher.InvokeAsync. You can also specify a dispatcher priority, depending on your requirements.
Please note that moving thousands of items can't be reliably fast, so you may need to rethink the drawing logic. If even starting the animations is slow, it's highly likely that the animation itself won't be fast enough either.
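A minimal sketch of that, using DispatcherPriority.Background as one reasonable choice:

for (int i = 0; i < listOfFigures.Count; i++)
{
    int index = i; // capture the loop variable for the closure
    Dispatcher.InvokeAsync(
        () => someStoryBoard(listOfFigures[index], someCoord),
        System.Windows.Threading.DispatcherPriority.Background);
}

The click handler then returns quickly, and each figure's animation starts as its queued action is processed, interleaved with rendering, instead of everything waiting for one long loop.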
You can try using the async/await modifiers:
internal async Task someStoryBoard(figure someFigure, double coordMoveToValue)
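On its own, marking the method async changes nothing; the idea would be to yield back to the dispatcher between figures so the storyboards that have already been started can render. Something along these lines (a sketch; the click handler name is made up, and the rest of someStoryBoard stays as in the question):

internal async Task someStoryBoard(figure someFigure, double coordMoveToValue)
{
    // ... existing setup and sbFigureMove.Begin() as before ...

    // Let the dispatcher process rendering (and input) before the caller
    // moves on to the next figure.
    await System.Windows.Threading.Dispatcher.Yield(System.Windows.Threading.DispatcherPriority.Background);
}

private async void btnMove_Click(object sender, RoutedEventArgs e)
{
    for (int i = 0; i < listOfFigures.Count; i++)
    {
        await someStoryBoard(listOfFigures[i], someCoord);
    }
}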
I have an application where performance-sensitive drawings occur using a WriteableBitmap. An event is called with CompositionTarget.Rendering to actually update the back buffer of the WriteableBitmap. From the MSDN documentation, that means the event is fired once per frame, right before the control is rendered.
The issue that I am having is that the WriteableBitmap's Lock() function takes an extremely long time, especially at larger bitmap sizes. I have previously read that AddDirtyRegion() has a bug that causes the entire bitmap to invalidate, leading to poor performance. However, that doesn't seem to be the case here. From a good bit of low-level checking, it seems that Lock() opens the bitmap's backbuffer for writing on the render thread, which means every time my event handler is called, it has to thread block until the render thread is ready for it. This leads to a noticeable lag when updating the graphics of the bitmap.
I have already tried adding a timeout to the event handler, using TryLock(), so that it won't block for such a long time and cause the performance degradation. This, however, causes a similar effect in that it appears to lag, because larger numbers of bitmap updates get lumped together.
Here is the relevant code from the event handler to show what exactly I am doing. The UpdatePixels() function was written to avoid using the potentially bugged AddDirtyRect():
void updateBitmap(object sender, EventArgs e)
{
if (!form.ResizingWindow)
{
// Lock and unlock are important... Make sure to keep them outside of the loop for performance reasons.
if (canvasUpdates.Count > 0)
{
//bool locked = scaledDrawingMap.TryLock(bitmapLockDuration);
scaledDrawingMap.Lock();
//if (locked)
//{
unsafe
{
int* pixData = (int*)scaledDrawingMap.BackBuffer;
foreach (Int32Rect i in canvasUpdates)
{
// The graphics object isn't directly shown, so this isn't actually necessary. We do a sort of manual copy from the drawingMap, which acts similarly
// to a back buffer.
Int32Rect temp = GetValidDirtyRegion(i);
UpdatePixels(temp, pixData);
}
scaledDrawingMap.Unlock();
canvasUpdates.Clear();
}
//}
}
}
}
private unsafe void UpdatePixels(Int32Rect temp, int* pixData)
{
//int* pixData = (int*)scaledDrawingMap.BackBuffer;
// Directly copy the backbuffer into a new buffer, to use WritePixels().
var stride = temp.Width * scaledDrawingMap.Format.BitsPerPixel / 8;
int[] relevantPixData = new int[stride * temp.Height];
int srcIdx = 0;
int pWidth = scaledDrawingMap.PixelWidth;
int yLess = temp.Y + temp.Height;
int xLess = temp.X + temp.Width;
for (int y = temp.Y; y < yLess; y++)
{
for (int x = temp.X; x < xLess; x++)
{
relevantPixData[srcIdx++] = pixData[y * pWidth + x];
}
}
scaledDrawingMap.WritePixels(temp, relevantPixData, stride, 0);
}
I can't seem to figure out how to avoid the issue of thread blocking with the WriteableBitmap, and I can't see any obvious faults in the code I have written. Any help or pointers would be much appreciated.
Looks like you are not actually using the BackBuffer to write - only to read.
WritePixels writes to the "front" buffer and does not require a lock.
I don't know if you have some other reason to lock it (other threads doing something), but for the code that's here I don't see why you would need to.
I guess I was wrong about not needing a lock to read from the BackBuffer (*pixData) - I thought it was only needed for writes - but I am positive you do not need to call Lock for WritePixels.
As far as I can tell, you are doing:
Lock the back buffer
Copy something from it to an array
Call WritePixels using this new array
Unlock the back buffer.
How about switching 3 and 4?
WritePixels may internally cause the rendering thread (which has higher priority on the message queue) to take a lock on its behalf, which is probably a factor in the delay you are seeing.
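In terms of the handler from the question, that reordering could look roughly like this; CopyRegion stands in for the copying loop currently inside UpdatePixels and is not an existing method:

scaledDrawingMap.Lock();
var pending = new List<Tuple<Int32Rect, int[]>>();
unsafe
{
    int* pixData = (int*)scaledDrawingMap.BackBuffer;
    foreach (Int32Rect i in canvasUpdates)
    {
        Int32Rect temp = GetValidDirtyRegion(i);
        // Copy each dirty region out of the back buffer while holding the lock...
        pending.Add(Tuple.Create(temp, CopyRegion(temp, pixData)));
    }
}
// ...then release the lock before calling WritePixels, so the render thread
// is not held up while the front buffer is updated.
scaledDrawingMap.Unlock();

foreach (var update in pending)
{
    int stride = update.Item1.Width * scaledDrawingMap.Format.BitsPerPixel / 8;
    scaledDrawingMap.WritePixels(update.Item1, update.Item2, stride, 0);
}
canvasUpdates.Clear();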