Safely clear a List (with concurrency) - C#

Good day!
I have a List<byte> _soundBuffer that collects the audio signal from the microphone.
void _waveInStream_DataAvailable(object sender, WaveInEventArgs e)
{
    lock (_lockObject)
    {
        for (int i = 0; i < e.BytesRecorded; i++)
        {
            _soundBuffer.Add(e.Buffer[i]);
        }
    }
}
And if the user records for a long time, the buffer gets very big (about 2 MB per minute).
So I created a timer:
_timerSoundCount = new System.Timers.Timer();
_timerSoundCount.Interval = 10000; // check every 10 seconds
_timerSoundCount.Enabled = true;
_timerSoundCount.Elapsed += _timerSoundCount_Elapsed;
_timerSoundCount.Start();
And:
void _timerSoundCount_Elapsed(object sender, ElapsedEventArgs e)
{
    if (_soundBuffer.Count > 2 * 1024 * 1024)
    {
        var energy = GetSignalEnergy(_soundBuffer);
        if (energy < 1) // if the energy of the signal is small - clear the buffer
        {
            lock (_lockObject)
                _soundBuffer.Clear();
        }
    }
    else if (_soundBuffer.Count >= 5 * 1024 * 1024)
    {
        // ... the same operation
    }
    else if (_soundBuffer.Count >= 10 * 1024 * 1024)
    {
        _soundBuffer.Clear(); // very big buffer
    }
}
Every 10 seconds I check the buffer size. If it is too big, I just clear the buffer, because I can detect speech/silence and clear the buffer in that code.
So, the point is: if I execute _soundBuffer.Clear() in the timer while _waveInStream_DataAvailable is adding new bytes to the buffer at the same time, could that corrupt the write?
Can it deadlock?
If so, can you help me safely clear the buffer?
Thank you!

If the two actions are being undertaken from the same thread, there is no chance of a deadlock occurring.
If there are multiple threads writing to or reading from the list at the same time, then lock should be used (https://msdn.microsoft.com/en-us/library/c5kehkcz.aspx) to prevent multiple threads from accessing the object simultaneously. See "use the same lock object at two different code blocks?" for a simple example.
Alternatively, you can use a concurrent collection from the System.Collections.Concurrent namespace (https://msdn.microsoft.com/en-us/library/system.collections.concurrent(v=vs.110).aspx). A ConcurrentQueue may be appropriate if the data is not accessed randomly. You can also implement your own concurrent collection, although this is much more complex.
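Applied to the code in the question, the key point is that _soundBuffer.Count, GetSignalEnergy and Clear are all reads or writes of the same list, so they need to run under the same _lockObject that the DataAvailable handler uses. A minimal sketch, using the same fields and helper as in the question and keeping one threshold as an example:

void _timerSoundCount_Elapsed(object sender, ElapsedEventArgs e)
{
    lock (_lockObject) // the same lock object used in _waveInStream_DataAvailable
    {
        // Count, GetSignalEnergy and Clear all touch the list, so they stay inside
        // the lock; DataAvailable simply waits a moment while this runs.
        if (_soundBuffer.Count > 2 * 1024 * 1024 && GetSignalEnergy(_soundBuffer) < 1)
        {
            _soundBuffer.Clear();
        }
    }
}

Because both pieces of code take the same single lock and neither takes another lock while holding it, this cannot deadlock; the worst case is a short wait in the handler while the timer clears the buffer.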

Related

C# queue and dequeue at the same time

I am working on a project that requires reading a QR code and queuing it; as it is queued, another task dequeues the data and sends it to a machine via socket messaging. This is all done in C#.
My question is how I would keep queuing while another task is dequeuing. I have the QR code working, I can queue and dequeue, and the socket messaging portion is working, but I don't know how to run the queue and dequeue at the same time.
I have looked into threading, specifically multithreading. I am more confused than before I started reading.
Any help will be greatly appreciated.
EDIT: Based on your comments and a little research, I started writing some code. No matter what I do, the thread only runs once; it is supposed to keep running.
public partial class Form1 : Form
{
    private BlockingCollection<string> _queue = new BlockingCollection<string>(30);
    //private string item = "A";
    //private int count = 0;
    private Thread _th1;

    public Form1()
    {
        InitializeComponent();
        _th1 = new Thread(thread_example);
        _th1.Start();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
    }

    private void thread_example()
    {
        if (_queue.Count > 0)
        {
            _queue.Take();
            Console.WriteLine("Removed 1 Item from queue!");
        }
        else
        {
            Console.WriteLine("Queue Empty!");
        }
        Thread.Sleep(500);
    }

    private void btnProduce_Click(object sender, EventArgs e)
    {
        _queue.Add("test_string");
        Console.WriteLine("Added 1 item to the queue");
    }
}
I would highly recommend using BlockingCollection. The problem you have is the classic producer-consumer problem,
and BlockingCollection provides an implementation that handles it for you.
Specifically, you also have to think about what happens when the dequeuing thread gets slow and can't keep up with the scanning thread, say due to network slowness at that time.
BlockingCollection will block the queuing thread to keep the whole process in sync, based on the bounded capacity specified when constructing the BlockingCollection.
You can also get either FIFO or LIFO behavior by using ConcurrentQueue or ConcurrentStack as the underlying storage. BlockingCollection just provides the bounded-ness and blocking behavior on top of the underlying synchronized collection.
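Applied to the posted Form1 code, the reason the thread "only runs once" is that thread_example has no loop. A minimal sketch of a consuming loop over the same _queue field, assuming you call _queue.CompleteAdding() when shutting down:

private void thread_example()
{
    // GetConsumingEnumerable blocks until an item is available and ends the loop
    // once CompleteAdding() has been called and the remaining items are drained.
    foreach (string item in _queue.GetConsumingEnumerable())
    {
        Console.WriteLine("Removed 1 item from the queue: " + item);
        // send the item to the machine via the socket here
    }
    Console.WriteLine("Queue completed.");
}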

Use of global var in multithreaded camera frame ready events

I am writing an application that depends on fast image manipulation. It might sound strange, but I'm doing this in C# and not in C++. So far this has not been a limitation: I can process an image in real time, and even though I do quite complex things with the image, I do it in under 30 ms.
I changed the program to make sure that the image stream would never queue up, simply by checking a boolean flag that tells me whether the current frame is still being processed. Normally frames don't overlap, but in some cases they do, for example when I run the app in VS2010 debug mode, or when the PC is also doing other heavy tasks and has fewer CPU resources to spare.
In such cases I would like to skip processing the new frame, so that frames don't queue up. It is better to keep working with the last known data that is still being processed; waiting for that result is then the fastest way to get an answer.
So I started with something like:
private void Camera_FrameReady(object Sender, ImageEvent e)
{
    if (!IsImageReady) return; // global var
    IsImageReady = false;
    //... do stuff with the image
    IsImageReady = true;
}
This didn't work out as I had hoped, and I think it has to do with the threading nature of events in C#. So then I tried to resolve it by de-registering and re-registering Camera_FrameReady, but the camera takes too much time to restart, so that didn't work out either.
Strangely, it now seems to work with the code below, but I'm not sure why.
private void Camera_FrameReady(object Sender, ImageEvent e)
{
    Update_Image(e);
}

private void Update_Image(ImageEvent e)
{
    if (!IsImageReady) return; // global var
    IsImageReady = false;
    //... do stuff with the image
    IsImageReady = true;
}
This makes me wonder how C# gets compiled. Does it work such that whenever Camera_FrameReady is called it has a "world view" of the current global values? Or are global variables only updated after the processing of an event?
The first thing that came to my mind is that the Camera_FrameReady event blocks the acquisition thread. But that doesn't explain why the second solution works.
So if you want to process the images in parallel with the acquisition thread, you should create a new thread for the processing.
For example: when there is a new image, check if the processing thread is busy. If the processing thread is busy, you shouldn't wait or queue (like you wanted) but just skip this image. If the processing thread is waiting for work, store the image in a "global" variable so the processing thread can access it, and then signal the processing thread.
I made an example for you (pseudo):

// the thread for the processing
private Thread _processingThread;
// a signal to check if the worker thread is busy with an image
private ManualResetEvent _workerThreadIsBusy = new ManualResetEvent(false);
// request for terminating
private ManualResetEvent _terminating = new ManualResetEvent(false);
// confirm terminated
private ManualResetEvent _terminated = new ManualResetEvent(false);
// store the current image
private Image _myImage;

// event callback for new images
private void Camera_FrameReady(object Sender, ImageEvent e)
{
    // is the worker thread already processing an image? return.. (skip this image)
    if (_workerThreadIsBusy.WaitOne(0))
        return; // skip frame.

    // create a 'global' ref so the worker thread can access it.
    /* BE CAREFUL HERE. You might experience trouble with the instance of this image.
     * You are creating another reference to the SAME instance of the image
     * to process on another thread. When the camera is reusing this
     * image (for speed), the image might get screwed up. In that case,
     * you have to create a copy!
     * (Personally I would reuse the image, which makes the image not available outside the event callback.) */
    _myImage = e.Image;
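    // Hedged alternative if the camera does recycle its buffer (assuming the image type
    // supports Clone(), e.g. System.Drawing.Image):
    // _myImage = (Image)e.Image.Clone();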
    // signal the worker thread, so it starts processing the current image.
    _workerThreadIsBusy.Set();
}

private void ImageProcessingThread()
{
    var waitHandles = new WaitHandle[] { _terminating, _workerThreadIsBusy };
    var run = true;
    while (run)
    {
        switch (EventWaitHandle.WaitAny(waitHandles))
        {
            case 0:
                // terminating.
                run = false;
                break;
            case 1:
                // process _myImage
                ProcessImage(_myImage);
                _workerThreadIsBusy.Reset();
                break;
        }
    }
    _terminated.Set();
}

private void ProcessImage(Image _myImage)
{
    // whatever...
}

// constructor
public MyCameraProcessor()
{
    // define the thread.
    _processingThread = new Thread(ImageProcessingThread);
    _processingThread.Start();
}

public void Dispose()
{
    _terminating.Set();
    _terminated.WaitOne();
}
Your code is not thread-safe:
if (!IsImageReady) return; // global var
IsImageReady = false;
//... do stuff with the image
IsImageReady=true;
Two threads can read IsImageReady at the same time, see that it is true, and both then set it to false. You might also get problems if the processor reads IsImageReady from a cache and not from memory. You can avoid this kind of problem with the Interlocked class, which reads and changes the value in one atomic operation. It also ensures that caching doesn't cause problems.
private int IsImageReady = 0;

private void Camera_FrameReady(object Sender, ImageEvent e)
{
    // Atomically set the flag to 1 and get its previous value.
    int wasImageReady = Interlocked.Exchange(ref IsImageReady, 1);
    if (wasImageReady == 1) return; // another frame is still being processed
    // do something
    IsImageReady = 0;
}
Although I am not sure that this is your only problem; you might have others as well. To be sure, you have to debug your code properly, which is very difficult when multithreading is involved. My CodeProject article "Debugging multithreaded code in real time" describes how you can do it.

Parallel operations with an audio stream in C#

I record sound in C# (WPF), and when data from the sound card is available it calls this event:
void myWaveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    for (int index = 0; index < e.BytesRecorded; index += 2) // here I convert the stream into floating-point samples
    {
        short sample = (short)((e.Buffer[index + 1] << 8) | e.Buffer[index + 0]);
        samples32Queue.Enqueue(sample / 32768f);
    }
    //*** Do some processing with the data inside the queue
}
As you can see, I push every sample from the recorded buffer to a queue that is declared like this:
Queue<float> samples32Queue = new Queue<float>();
Inside the event, after the for loop, I want to do some processing on the queue. I worry that while I am processing the data, new samples will arrive from the sound card and my processing will get lost.
What is the right approach here?
Should the processing that I call from the event be a static or a non-static method?
Given the fact that you can buffer the samples and process them later, consider using BlockingCollection. BlockingCollection is an excellent solution for the producer-consumer pattern, which, to my understanding, is your case. On one side you have the myWaveIn_DataAvailable() method as a producer adding samples to the collection, and on the other end you have another consuming thread (it doesn't have to be another thread, though) collecting the samples and processing them. There are various ways the consumer can be implemented, and they are well documented on MSDN.
EDIT:
See the following example. I did not test it, but it should give you a starting point:
class ProducerConsumerExample
{
    BlockingCollection<float> samples32Collection;
    Thread consumer;

    public ProducerConsumerExample()
    {
        samples32Collection = new BlockingCollection<float>();
        consumer = new Thread(() => LaunchConsumer());
        consumer.Start(); // you don't have to launch the consumer here...
    }

    void Terminate() // Call this to terminate the consumer
    {
        consumer.Abort();
    }

    void myWaveIn_DataAvailable(object sender, WaveInEventArgs e)
    {
        for (int index = 0; index < e.BytesRecorded; index += 2) // here I convert the stream into floating-point samples
        {
            short sample = (short)((e.Buffer[index + 1] << 8) | e.Buffer[index + 0]);
            samples32Collection.Add(sample / 32768f);
        }
    }

    void LaunchConsumer()
    {
        while (true /* insert your abort condition here */)
        {
            try
            {
                var sample = samples32Collection.Take(); // this thread will wait here until the producer adds new item(s) to the collection
                Process(sample); // in the meanwhile, more samples can be added to the collection,
                                 // but they will not be processed until this thread is done with Process(sample)
            }
            catch (InvalidOperationException) { }
        }
    }
}
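For completeness, a hedged sketch of how the producer side is typically wired up. This assumes NAudio's WaveInEvent, which raises DataAvailable with the WaveInEventArgs seen in the question; adjust to whatever capture class you actually use, and note that the handler in the example above is private, so either expose it or subscribe from inside the class:

var processor = new ProducerConsumerExample();
var waveIn = new NAudio.Wave.WaveInEvent
{
    WaveFormat = new NAudio.Wave.WaveFormat(44100, 16, 1) // 44.1 kHz, 16-bit, mono
};
waveIn.DataAvailable += processor.myWaveIn_DataAvailable; // the producer from the example above
waveIn.StartRecording();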

OutOfMemoryException: threads using too much virtual memory, about 12 GB

Actually I have to create lots of threads to send a pcap file using the UDP protocol; when a thread has completely sent the pcap file it then sleeps for some time. When I make each thread sleep for 420 seconds, virtual memory gets full after creating more than 3100 threads and the program throws an OutOfMemoryException.
I searched the internet about this problem and found that a thread takes only about 1 MB to create and the pcap file is just 60 KB, yet my 3100 threads are consuming more than 12 GB (1.06 * 3100 < 12 GB). On the other hand, physical memory usage never goes above 200 MB. I have to create more than 5000 threads at the same time.
What am I doing wrong? Can anyone help me?
Thanks
My code:
public static void send_pcap_file_with_single_port()
{
    string callID = Call_ID;
    try
    {
        //CREATING CONNECTION HERE
        using (FileStream stream = new FileStream("g711a.pcap", FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
        {
            for (Pos = 0; Pos < (StreamBytes - ChunkSize); Pos += ChunkSize)
            {
                //creating RTP_header here
                stream.Read(RTP_payload, 0, ChunkSize);
                //combining both the byte arrays
                System.Buffer.BlockCopy(RTP_header, 0, Bytes_to_send, 0, RTP_header.Length);
                System.Buffer.BlockCopy(RTP_payload, 0, Bytes_to_send, 16, RTP_payload.Length);
                RTPpacket_queue.Enqueue(Bytes_to_send);
                //RTP_handler.Send(Bytes_to_send, Bytes_to_send.Length, remote_EP);
            }
            //done processing here
            stream.Close();
            stream.Dispose();
            RTP_count++;
            GC.Collect();
        }
        System.Threading.Thread.Sleep(420000);
    }
    catch (Exception e)
    {
        //using (StreamWriter sw = new StreamWriter(stream_logFile))
        //{
        //    sw.WriteLine(e.ToString());
        //}
        //send_BYE_message_toSIPp(client, "BYE", 5060, 2, callID);
        Console.WriteLine(e.ToString());
    }
}
Creating the threads here:
Thread RTP_sender = new Thread(new ThreadStart(send_pcap_file_with_single_port));
RTP_sender.Start();
In simple terms, you exhaust your garbage collector by creating objects that end up on the long-term pile (objects that survive longer than a few seconds). The fix would be to free each thread and recreate it only when it is actually needed.
In any case, an i5 has 2 cores by default; if you have more threads than cores, several of them run on the same CPU. Running 3000+ threads means roughly 1500 per core, which is not a problem by itself unless they all try to write to the same place (in which case they start blocking like hell).
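One way to stop dedicating a whole thread to each 420-second pause is async/await, sketched below under the assumption that the pause is the only reason the threads stay alive. SendPcapOnce and SendAllAsync are hypothetical names, not from the question; SendPcapOnce stands in for the body of send_pcap_file_with_single_port minus the Thread.Sleep. The await on Task.Delay releases the thread back to the pool while the pause runs, so no 1 MB stack is held for seven minutes.

static async Task SendWithPauseAsync()
{
    SendPcapOnce();                              // the actual UDP send
    await Task.Delay(TimeSpan.FromSeconds(420)); // pause without holding a thread
}

static async Task SendAllAsync()
{
    var tasks = new List<Task>();
    for (int i = 0; i < 5000; i++)
        tasks.Add(SendWithPauseAsync());
    await Task.WhenAll(tasks); // thousands of pending sends, only a few pool threads in use
}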
To demonstrate that you do not need 5000 permanent threads to accomplish something like this, I have created a sample program.
The program does not do much, really, but what it does do is create 5000 objects, each of which creates a background thread only when it needs to do its work. There isn't any actual work being done other than simply sleeping for a random interval, but still.
Just run the program, leave it running for a while and keep an eye on its memory use. You will see it is very manageable while still actually doing work on 5000 objects.
You will probably need to be creative in applying this approach to your situation, but you could do something along the lines of what I am doing.
namespace Test
{
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Diagnostics;
    using System.Threading;

    public static class MainClass
    {
        public static Random sleeper = new Random();

        public static void Main(string[] args)
        {
            Stopwatch timer = new Stopwatch();
            List<WorkerClass> workload = new List<WorkerClass>();

            // Create a workload of 5000 objects
            for (int i = 0; i < 5000; i++) {
                workload.Add(new WorkerClass());
            }

            int fires = 0;

            // Start processing the workload
            while (true) {
                // We'll measure the time it took to go through the entire workload
                // to illustrate that it does not take all that long.
                timer.Restart();
                foreach (WorkerClass w in workload) {
                    // For each of the worker objects in the entire workload
                    // we decrease its internal counter by 1.
                    // Because after the loop is done we sleep for 1 second,
                    // that amounts to reducing the counter by 1 every second.
                    w.counter--;
                    if (w.counter == 0) {
                        fires++;
                        // Once the counter hits 0, do the work.
                        w.DoWork();
                    }
                }
                timer.Stop();
                Console.WriteLine("Processing the entire workload of {0} objects took {1} milliseconds, {2} workers actually fired.", workload.Count, timer.ElapsedMilliseconds, fires);
                fires = 0;
                Thread.Sleep(1000);
            }
        }
    }

    public class WorkerClass
    {
        public int counter = 0;

        public WorkerClass()
        {
            // When the worker is created, set its internal counter
            // to a random value between 5 and 9.
            // This is to mimic sleeping it for a random interval.
            // Also see the primary loop in MainClass.Main
            this.counter = MainClass.sleeper.Next(5, 10);
        }

        public void DoWork()
        {
            // Whenever we do the work, we'll create a background worker thread
            // that actually does the work.
            BackgroundWorker work = new BackgroundWorker();
            work.RunWorkerCompleted += (object sender, RunWorkerCompletedEventArgs e) => {
                // This simulates going back to sleep for a random interval; see
                // the main loop in MainClass.Main
                this.counter = MainClass.sleeper.Next(5, 10);
            };
            work.DoWork += (object sender, DoWorkEventArgs e) => {
                // Simulate working by sleeping a random interval
                Thread.Sleep(MainClass.sleeper.Next(2000, 5000));
            };
            // And now we actually do the work.
            work.RunWorkerAsync();
        }
    }
}

Stopwatch in C# behaves differently in different threads?

I'm currently using a Stopwatch as a global timer. I have the main thread running, another thread, and an event method.
The main thread launches the other thread, and the event method is triggered by events. Both methods call the stopwatch and get its time. The thing is, the times are not consistent:
from main thread:
START REC AT 9282
STOp REC AT 19290
from another thread:
audio 1
audio 304
audio 354
audio 404
audio 444
audio 494
audio 544
audio 594
from event method:
video 4
video 5
video 29
video 61
video 97
video 129
video 161
I don't get why, if I start my recording at 9282, the other two functions that call the stopwatch have timers that start at zero. Is this a thread-related issue? How can I fix this? Thanks
UPDATE:
When I save my frames I changed it to:
long a = relogio.ElapsedMilliseconds;
I print out this value and it is OK, as expected. But when I print the values stored in the lists, they come out as starting from the beginning. Strange, huh?
SORRY FOR ALL THE TROUBLE, I PRINTED IT WITHOUT THE STARTING TIME, THAT'S WHY THEY ALL SEEMED TO START FROM ZERO! MANY THANKS AND SORRY!
Main thread:
private void Start_Recording_Click(object sender, RoutedEventArgs e)
{
    rec_starting_time = relogio.ElapsedMilliseconds;
    Console.WriteLine("START REC AT " + rec_starting_time);
    write_stream.enableRecording();
    Thread a = new Thread(scheduleAudioVideoFramePicks);
    a.Start();
scheduleAudioVideoFramePicks - this thread just counts the time, so I know when to stop:
//while....
if (rec_starting_time + time_Actual > rec_starting_time + recording_time * 1000) // 1000 - 1s = 1000ms
{
    totalRecordingTimeElapsed = true;
    write_stream.disableRecording();
    Console.WriteLine("STOp REC AT " + relogio.ElapsedMilliseconds);
}
//end while

lock (list_audio)
{
    int b = 0;
    // print time of frames gathered
    foreach (AudioFrame item in list_audio)
    {
        Console.WriteLine("audio " + (item.getTime() - rec_starting_time));
    }
    lock (list_video)
    {
    }
    foreach (VideoFrame item in list_video)
    {
        Console.WriteLine("video " + (item.getTime() - rec_starting_time));
    }
}
The other thread, where I get the time:
if (write_stream.isRecording())
{
    list_audio.Enqueue(new AudioFrame(relogio.ElapsedMilliseconds, audioBuffer));
}
Event method:
if (write_stream.isRecording())
{
    list_video.Add(new VideoFrame(relogio.ElapsedMilliseconds, this.colorPixels));
}
I don't know if this is relevant, but I start my stopwatch like this:
public MainWindow()
{
    InitializeComponent();
    //some code
    this.relogio = new Stopwatch();
    relogio.Start();
}
Stopwatch is not thread-safe, particularly for 32-bit programs.
It uses the Windows API call QueryPerformanceCounter() to update a private long field. On 32-bit systems you could get a "torn read" when one thread reads the long value while another thread is updating it.
To fix that, you'd have to put a lock around access to the Stopwatch.
Also note that on some older systems there were bugs where inconsistent values could be returned from different threads calling QueryPerformanceCounter(). From the documentation:
On a multiprocessor computer, it should not matter which processor is called. However, you can get different results on different processors due to bugs in the basic input/output system (BIOS) or the hardware abstraction layer (HAL). To specify processor affinity for a thread, use the SetThreadAffinityMask function.
I have never encountered this bug myself, and I don't think it's very common.
What results do you get with the following test program? The times should be mostly increasing in value, but you are liable to get one or two out of order just because their threads get rescheduled after they have read a value and before they add it to the queue.
namespace Demo
{
    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;
    using System.Threading;
    using System.Threading.Tasks;

    class Program
    {
        Stopwatch sw = Stopwatch.StartNew();
        object locker = new object();
        ConcurrentQueue<long> queue = new ConcurrentQueue<long>();
        Barrier barrier = new Barrier(9);

        void run()
        {
            Console.WriteLine("Starting");

            for (int i = 0; i < 8; ++i)
                Task.Run(() => test());

            barrier.SignalAndWait(); // Make sure all threads start "simultaneously".
            Thread.Sleep(2000);      // Plenty of time for all the threads to finish.
            Console.WriteLine("Stopped");

            foreach (var elapsed in queue)
                Console.WriteLine(elapsed);

            Console.ReadLine();
        }

        void test()
        {
            barrier.SignalAndWait(); // Make sure all threads start "simultaneously".

            for (int i = 0; i < 10; ++i)
                queue.Enqueue(elapsed());
        }

        long elapsed()
        {
            lock (locker)
            {
                return sw.ElapsedTicks;
            }
        }

        static void Main()
        {
            new Program().run();
        }
    }
}
Having said all that, the most obvious answer is that in fact you aren't sharing a single Stopwatch between the threads, but have accidentally started a new one for each thread...
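For reference, a minimal sketch of what sharing one Stopwatch looks like. GlobalClock is a made-up name for illustration; relogio, AudioFrame and VideoFrame are from the question. The stopwatch is created and started exactly once, and every thread and event handler reads that single instance, so all timestamps share one origin:

// One stopwatch for the whole application, started once.
public static class GlobalClock
{
    public static readonly Stopwatch Relogio = Stopwatch.StartNew();
}

// Every producer reads the same instance.
list_audio.Enqueue(new AudioFrame(GlobalClock.Relogio.ElapsedMilliseconds, audioBuffer));
list_video.Add(new VideoFrame(GlobalClock.Relogio.ElapsedMilliseconds, this.colorPixels));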
