How to manage a changing List while performing enumeration? - C#

I have a list of objects (musical notes) that is enumerated on a separate thread as the notes are played. I am doing this so that I can keep the UI thread responsive.
While a note is playing (as part of an enumeration), how can I allow for the fact that a new note may have been added to the List (without the obvious collection-modified exception)?
I know I could copy the list to a temporary list and enumerate that, but I actually want the list to grow as the user selects more notes (and this will happen while the first note is playing, etc.).
Pseudo-logic as it stands:
onClick()
{
    Queue.Add(theClickedNote);
    Queue.Play(); // <-- on another thread
}

Play()
{
    if (Playing == true) { return; }
    foreach (Note theNote in Queue)
    {
        theNote.Play();
        Queue.Remove(theNote); // modifies the collection mid-enumeration
    }
}
As you can see above, each click event adds a note to the queue and then invokes a Play method on it.
The queue enumerates the notes and plays each one in turn before removing it.
I hope I have explained what I am trying to do clearly.

Something like this can be done with ConcurrentQueue<T> in .NET 4:
ConcurrentQueue<Note> Queue = new ConcurrentQueue<Note>();

void onClick()
{
    Queue.Enqueue(theClickedNote);
    // start Play on another thread if necessary
}

void Play()
{
    if (Playing) return;
    Note note;
    while (Queue.TryDequeue(out note))
    {
        note.Play();
    }
}
ConcurrentQueue is thread-safe, so no locking needs to be implemented.
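If you want the "start Play on another thread if necessary" part spelled out, here is a hedged sketch; the playing flag and the use of Task.Run (.NET 4.5; ThreadPool.QueueUserWorkItem works on .NET 4) are assumptions about your app, not part of the answer above. Interlocked avoids the race where two clicks both see the flag as idle:

ConcurrentQueue<Note> Queue = new ConcurrentQueue<Note>();
int playing; // 0 = idle, 1 = a Play loop is running

void onClick()
{
    Queue.Enqueue(theClickedNote);
    // only start a new player if one isn't already running
    if (Interlocked.CompareExchange(ref playing, 1, 0) == 0)
    {
        Task.Run(() =>
        {
            try { Play(); }
            finally { Interlocked.Exchange(ref playing, 0); }
        });
    }
}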

Instead of using a List you should use a real Queue.
Then your code will look like this:
Queue<Note> queue = new Queue<Note>();

void onClick()
{
    queue.Enqueue(note);
}

void Play()
{
    if (Playing == true) { return; }
    while (queue.Count > 0) // Peek() throws on an empty queue, so check Count instead
    {
        var note = queue.Dequeue();
        note.Play();
    }
}
This code isn't thread-safe, so you should add locks around the queue, but this is the general idea.
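For completeness, a minimal locked version might look like this (the lock object is an assumption; playing outside the lock keeps clicks from blocking while a note sounds):

readonly object queueLock = new object();
Queue<Note> queue = new Queue<Note>();

void onClick()
{
    lock (queueLock) { queue.Enqueue(note); }
}

void Play()
{
    if (Playing == true) { return; }
    while (true)
    {
        Note note;
        lock (queueLock)
        {
            if (queue.Count == 0) return;
            note = queue.Dequeue();
        }
        note.Play(); // play outside the lock so onClick isn't blocked
    }
}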

As suggested by mike z, use the ConcurrentQueue added in .NET 4.0.
Along with the other concurrent collections, this queue allows you to add and remove items from multiple threads, and to work with a snapshot of the underlying collection by calling the GetEnumerator method and iterating with it.
Notice you still might need to deal with situations such as the queue being empty; this can be solved via BlockingCollection, whose Take method will block the thread as long as the collection is empty.
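For example, a sketch of that approach (the Note type and Play method are carried over from the question; GetConsumingEnumerable blocks while the collection is empty and completes only after CompleteAdding is called):

BlockingCollection<Note> notes = new BlockingCollection<Note>(new ConcurrentQueue<Note>());

// producer (UI thread)
void onClick()
{
    notes.Add(theClickedNote);
}

// consumer (background thread): sleeps while the collection is empty
void PlayLoop()
{
    foreach (Note note in notes.GetConsumingEnumerable())
        note.Play();
}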

Related

How should multiple threads access a list when each list element must be locked?

I have a list of "Module" classes, List<Module> modules. These modules each contain their own public object to use as a lock when accessing data. Let's say I have a couple threads which perform processing on these modules at random times. Currently I have each thread perform the processing on the modules in order, like so:
foreach (Module module in modules)
{
    lock (module.Locker)
    {
        // Do stuff
    }
}
This has worked fine so far, but I have the feeling there's a lot of unnecessary waiting. For instance, if two threads start one right after another, but the first is performing heavy processing and the second one isn't, the second one will have to wait on every module while the first one is doing its processing.
This is the question then: Is there a "proper" or "most efficient" way to lock on elements in a list? I was going to do this:
foreach (Module module in modules.Randomize())
{
    lock (module.Locker)
    {
        // Do stuff
    }
}
Where "Randomize()" is just an extension method that returns the elements of the list in a random order. However, I was wondering if there's an even better way than random?
Assuming that the work inside the lock is substantial and heavily contended, the following introduces the additional overhead of creating a new List<T> and removing items from it:
public void ProcessModules(List<Module> modules)
{
    List<Module> myModules = new List<Module>(modules); // take a copy of the list
    int index = myModules.Count - 1;
    while (myModules.Count > 0)
    {
        if (index < 0)
        {
            index = myModules.Count - 1; // wrap around and retry the still-locked modules
        }
        Module module = myModules[index];
        if (!Monitor.TryEnter(module.Locker))
        {
            index--;
            continue;
        }
        try
        {
            // Do processing on the module
        }
        finally
        {
            Monitor.Exit(module.Locker);
            myModules.RemoveAt(index);
            index--;
        }
    }
}
What this method does is take a copy of the modules passed in, then try to acquire each lock; if a lock can't be acquired (because another thread owns it), it skips that module and moves on. After finishing the list, it comes back around to see whether the other thread has released the lock; if not, it skips again and moves on. This cycle continues until all the modules in the list have been processed.
This way, we never wait on a contended lock; we just keep processing the modules that aren't locked by another thread.
lock expands to Monitor.Enter; you can use Monitor.TryEnter to check whether the lock is already held and, if so, skip that element and try another.
There will be contention if multiple threads process the same ordered list of items, so the Randomize idea seems a good one (unless reordering is expensive compared to the processing itself, or the list can change while processing, etc.).
A totally different possibility is to prepare per-thread queues (from the list) in such a way that there is no cross-waiting (or waiting is minimized). Combined with Monitor.TryEnter this should be an ultimate solution. Unfortunately, I have no clue how to prepare such queues, nor how to skip a queue item during processing; I'll leave that to you =P.
Here is a snippet of what I mean:
foreach (var item in list)
    if (!item.Processed && Monitor.TryEnter(item.Locker))
        try
        {
            ... // do job
            item.Processed = true;
        }
        finally
        {
            Monitor.Exit(item.Locker);
        }
Not sure I entirely follow; however, from what I can tell your goal is to periodically do stuff to each module, and you want to use multiple threads because that stuff is time-consuming. If this is the case, I would have a single thread periodically check all modules and have that thread use the TPL to spread the workload, like so:
Parallel.ForEach(modules, module =>
{
    lock (module.Locker)
    {
        // Do stuff
    }
});
As an aside, the guidance on locks is that the object that you lock on should be private, so I'd probably change to doing something like this:
Parallel.ForEach(modules, module => module.DoStuff());

// In the module implementation
private readonly object _lock = new object();

public void DoStuff()
{
    lock (this._lock)
    {
        // Do stuff here
    }
}
I.e. each module should be thread-safe and responsible for its own locking.

BlockingCollection Max Size

I understand that a BlockingCollection backed by a ConcurrentQueue can have a BoundedCapacity of, say, 100.
However, I'm unsure what that means in practice.
I'm trying to build a concurrent cache which can dequeue, and can dequeue/enqueue in one operation if the queue grows too large (i.e. lose messages when the cache overflows). Is there a way to use BoundedCapacity for this, or is it better to do it manually or create a new collection?
Basically I have one reading thread and several writing threads. I would like the data in the queue to be the "freshest" from all the writers.
A bounded capacity of N means that if the queue already contains N items, any thread attempting to add another item will block until a different thread removes an item.
What you seem to want is a different concept: you want the most recently added item to be the first item dequeued by the consuming thread.
You can achieve that by using a ConcurrentStack rather than a ConcurrentQueue as the underlying store.
You would use the BlockingCollection<T>(IProducerConsumerCollection<T>) constructor and pass in a ConcurrentStack<T>.
For example:
var blockingCollection = new BlockingCollection<int>(new ConcurrentStack<int>());
By using ConcurrentStack, you ensure that each item that the consuming thread dequeues will be the freshest item in the queue at that time.
Also note that if you specify an upper bound for the blocking collection, you can use BlockingCollection.TryAdd() which will return false if the collection was full at the time you called it.
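A quick sketch of that overflow handling (the bound of 100 is taken from the question; dropping the new message on overflow is one possible policy):

var collection = new BlockingCollection<int>(new ConcurrentStack<int>(), 100);

void Write(int message)
{
    if (!collection.TryAdd(message))
    {
        // The collection was full: this message is dropped (cache overflow).
        // Alternatively, Take() one item and retry to keep only fresh data.
    }
}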
It sounds to me like you're trying to build something like an MRU (most recently used) cache. BlockingCollection is not the best way to do that.
I would suggest instead that you use a LinkedList. It's not thread-safe, so you'll have to provide your own synchronization, but that's not too tough. Your enqueue method looks like this:
LinkedList<MyType> TheQueue = new LinkedList<MyType>();
object listLock = new object();

void Enqueue(MyType item)
{
    lock (listLock)
    {
        TheQueue.AddFirst(item);
        while (TheQueue.Count > MaxQueueSize)
        {
            // Queue overflow. Reduce to max size.
            TheQueue.RemoveLast();
        }
    }
}
And dequeue is even easier:
MyType Dequeue()
{
    lock (listLock)
    {
        if (TheQueue.Count == 0) return null;
        MyType item = TheQueue.Last.Value; // RemoveLast() returns void, so read the value first
        TheQueue.RemoveLast();
        return item;
    }
}
It's a little more involved if you want the consumers to do non-busy waits on the queue. You can do it with Monitor.Wait and Monitor.Pulse. See the example on the Monitor.Pulse page for an example.
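A hedged sketch of that non-busy wait, reusing the names above (this is the standard Wait/Pulse protocol from the Monitor documentation, not code from the answer):

void Enqueue(MyType item)
{
    lock (listLock)
    {
        TheQueue.AddFirst(item);
        while (TheQueue.Count > MaxQueueSize)
            TheQueue.RemoveLast();
        Monitor.Pulse(listLock); // wake one waiting consumer
    }
}

MyType Dequeue()
{
    lock (listLock)
    {
        while (TheQueue.Count == 0)
            Monitor.Wait(listLock); // releases the lock while waiting
        MyType item = TheQueue.Last.Value;
        TheQueue.RemoveLast();
        return item;
    }
}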
Update:
It occurs to me that you could do the same thing with a circular buffer (an array). Just maintain head and tail pointers. You insert at head and remove at tail. If you go to insert, and head == tail, then you need to increment tail, which effectively removes the previous tail item.
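A minimal sketch of that circular buffer (a hypothetical class, not from the answer; synchronization is omitted, so wrap the calls in a lock as above):

class CircularBuffer<T>
{
    private readonly T[] buffer;
    private int head, tail, count;

    public CircularBuffer(int capacity) { buffer = new T[capacity]; }

    public void Insert(T item)
    {
        buffer[head] = item;
        head = (head + 1) % buffer.Length;
        if (count == buffer.Length)
            tail = (tail + 1) % buffer.Length; // full: overwrite, dropping the oldest item
        else
            count++;
    }

    public bool TryRemove(out T item)
    {
        if (count == 0) { item = default(T); return false; }
        item = buffer[tail];
        tail = (tail + 1) % buffer.Length;
        count--;
        return true;
    }
}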
If you want a custom BlockingCollection that holds the N most recent elements, and drops the oldest elements when it's full, you could create one quite easily based on a Channel<T>. The Channels are intended to be used in asynchronous scenarios, but making them block the consumer is trivial and should not cause any unwanted side-effects (like deadlocks), even if used in an environment with a SynchronizationContext installed.
// requires: using System.Threading.Channels;
public class MostRecentBlockingCollection<T>
{
    private readonly Channel<T> _channel;

    public MostRecentBlockingCollection(int capacity)
    {
        _channel = Channel.CreateBounded<T>(new BoundedChannelOptions(capacity)
        {
            FullMode = BoundedChannelFullMode.DropOldest,
        });
    }

    public bool IsCompleted => _channel.Reader.Completion.IsCompleted;

    public void Add(T item)
        => _channel.Writer.WriteAsync(item).AsTask().GetAwaiter().GetResult();

    public T Take()
        => _channel.Reader.ReadAsync().AsTask().GetAwaiter().GetResult();

    public void CompleteAdding() => _channel.Writer.Complete();

    public IEnumerable<T> GetConsumingEnumerable()
    {
        while (_channel.Reader.WaitToReadAsync().AsTask().GetAwaiter().GetResult())
            while (_channel.Reader.TryRead(out var item))
                yield return item;
    }
}
The MostRecentBlockingCollection class blocks only the consumer. The producer can always add items in the collection, causing (potentially) some previously added elements to be dropped.
Adding cancellation support should be straightforward, since the Channel<T> API already supports it. Adding support for timeout is less trivial, but shouldn't be very difficult to do.
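For example, hypothetical usage of the class above:

var collection = new MostRecentBlockingCollection<int>(capacity: 3);
for (int i = 1; i <= 5; i++)
    collection.Add(i); // 1 and 2 are dropped by the DropOldest policy
collection.CompleteAdding();
foreach (int item in collection.GetConsumingEnumerable())
    Console.WriteLine(item); // prints 3, 4, 5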

Is this use of the generic List thread safe

I have a System.Collections.Generic.List<T> to which I only ever add items in a timer callback. The timer is restarted only after the operation completes.
I have a System.Collections.Concurrent.ConcurrentQueue<T> which stores indices of added items in the list above. This store operation is also always performed in the same timer callback described above.
Is a read operation that iterates the queue and accesses the corresponding items in the list thread safe?
Sample code:
private List<Object> items;
private ConcurrentQueue<int> queue;
private Timer timer;

private void callback(object state)
{
    int index = items.Count;
    items.Add(new object());
    if (true) // some condition here
        queue.Enqueue(index);
    timer.Change(TimeSpan.FromMilliseconds(500), TimeSpan.FromMilliseconds(-1));
}
// This can be called from any thread
public IEnumerable<object> AccessItems()
{
    foreach (var index in queue)
    {
        yield return items[index];
    }
}
My understanding:
Even if the list is resized when it is being indexed, I am only accessing an item that already exists, so it does not matter whether it is read from the old array or the new array. Hence this should be thread-safe.
Is a read operation that iterates the queue and accesses the corresponding items in the list thread safe?
Is it documented as being thread safe?
If no, then it is foolish to treat it as thread safe, even if it is in this implementation by accident. Thread safety should be by design.
Sharing memory across threads is a bad idea in the first place; if you don't do it then you don't have to ask whether the operation is thread safe.
If you have to do it then use a collection designed for shared memory access.
If you can't do that then use a lock. Locks are cheap if uncontended.
If you have a performance problem because your locks are contended all the time then fix that problem by changing your threading architecture rather than trying to do dangerous and foolish things like low-lock code. No one writes low-lock code correctly except for a handful of experts. (I am not one of them; I don't write low-lock code either.)
Even if the list is resized when it is being indexed, I am only accessing an item that already exists, so it does not matter whether it is read from the old array or the new array.
That's the wrong way to think about it. The right way to think about it is:
If the list is resized then the list's internal data structures are being mutated. It is possible that the internal data structure is mutated into an inconsistent form halfway through the mutation, which will be made consistent by the time the mutation is finished. Therefore my reader can see this inconsistent state from another thread, which makes the behaviour of my entire program unpredictable. It could crash, it could go into an infinite loop, it could corrupt other data structures; I don't know, because I'm running code that assumes a consistent state in a world with inconsistent state.
Big edit
The ConcurrentQueue is only safe with regard to the Enqueue(T) and T Dequeue() operations.
You're doing a foreach on it and that doesn't get synchronized at the required level.
The biggest problem in your particular case is that enumerating the queue (which is a collection in its own right) might throw the well-known "Collection has been modified" exception. Why is that the biggest problem? Because you are adding things to the queue after you've added the corresponding objects to the list (the List also badly needs synchronization, but that and the biggest problem get solved with just one "bullet"). While enumerating a collection, it is not easy to swallow the fact that another thread is modifying it (even if, at a microscopic level, the modification is safe; ConcurrentQueue does just that).
Therefore you absolutely need to synchronize access to the queues (and the central List while you're at it) using another means of synchronization (and by that I mean you can also forget about ConcurrentQueue and use a simple Queue or even a List, since you never Dequeue things).
So just do something like:
public void Writer(object toWrite) {
    this.rwLock.EnterWriteLock();
    try {
        int tailIndex = this.list.Count;
        this.list.Add(toWrite);
        if (..condition1..)
            this.queue1.Enqueue(tailIndex);
        if (..condition2..)
            this.queue2.Enqueue(tailIndex);
        if (..condition3..)
            this.queue3.Enqueue(tailIndex);
        ..etc..
    } finally {
        this.rwLock.ExitWriteLock();
    }
}
and in the AccessItems:
public IEnumerable<object> AccessItems(int queueIndex) {
    Queue<object> whichQueue = null;
    switch (queueIndex) {
        case 1: whichQueue = this.queue1; break;
        case 2: whichQueue = this.queue2; break;
        case 3: whichQueue = this.queue3; break;
        ..etc..
        default: throw new NotSupportedException("Invalid queue disambiguating params");
    }
    List<object> results = new List<object>();
    this.rwLock.EnterReadLock();
    try {
        foreach (var index in whichQueue)
            results.Add(this.list[index]);
    } finally {
        this.rwLock.ExitReadLock();
    }
    return results;
}
And, based on my entire understanding of the cases in which your app accesses the List and the various Queues, it should be 100% safe.
End of big edit
First of all: What is this thing you call Thread-Safe ? by Eric Lippert
In your particular case, I guess the answer is no.
It is not that inconsistencies might arise in the global context (the actual list).
Instead, it is possible that the actual readers (who might very well "collide" with the unique writer) end up with inconsistencies in themselves (their very own stacks, meaning local variables of all methods, parameters, and their logically isolated portion of the heap).
The possibility of such per-thread inconsistencies (the Nth thread wants to learn the number of elements in the List and finds the value 39404999, although in reality you only added 3 values) is enough to declare that, generally speaking, the architecture is not thread-safe, even though you don't actually change the globally accessible List; you are simply reading it in a flawed manner.
I suggest you use the ReaderWriterLockSlim class.
I think you will find it fits your needs:
private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
private List<Object> items;
private ConcurrentQueue<int> queue;
private Timer timer;

private void callback(object state)
{
    this.rwLock.EnterWriteLock();
    try {
        // in this place, right here, there can be only ONE writer,
        // and while the writer is between EnterWriteLock and ExitWriteLock
        // there can exist no readers in the following method (between EnterReadLock
        // and ExitReadLock)
        // we read the count, add the item to the List,
        // AND do the enqueue "atomically" (as loose a term as thread-safe)
        int index = items.Count; // read inside the lock so the index can't go stale
        items.Add(new object());
        if (true) // some condition here
            queue.Enqueue(index);
    } finally {
        this.rwLock.ExitWriteLock();
    }
    timer.Change(TimeSpan.FromMilliseconds(500), TimeSpan.FromMilliseconds(-1));
}
// This can be called from any thread
public IEnumerable<object> AccessItems()
{
    List<object> results = new List<object>();
    this.rwLock.EnterReadLock();
    try {
        // in this place there can exist a thousand readers
        // (doing these actions right here, between EnterReadLock and ExitReadLock)
        // all at the same time, but NO writers
        foreach (var index in queue)
        {
            results.Add(this.items[index]); // results is a local, not a field
        }
    } finally {
        this.rwLock.ExitReadLock();
    }
    return results; // or use foreach/yield return if you like that more :)
}
No because you are reading and writing to/from the same object concurrently. This is not documented to be safe so you can't be sure it is safe. Don't do it.
The fact that it happens to be unsafe as of .NET 4.0 means nothing, by the way. Even if it looked safe in Reflector, that could change at any time. You can't rely on the current version to predict future versions.
Don't try to get away with tricks like this. Why not just do it in an obviously safe way?
As a side note: Two timer callbacks can execute at the same time, so your code is doubly broken (multiple writers). Don't try to pull off tricks with threads.
It is thread-safish. The foreach statement uses the ConcurrentQueue.GetEnumerator() method. Which promises:
The enumeration represents a moment-in-time snapshot of the contents of the queue. It does not reflect any updates to the collection after GetEnumerator was called. The enumerator is safe to use concurrently with reads from and writes to the queue.
Which is another way of saying that your program isn't going to blow up randomly with an inscrutable exception message like the kind you'll get when you use the Queue class. Beware of the consequences though, implicit in this guarantee is that you may well be looking at a stale version of the queue. Your loop will not be able to see any elements that were added by another thread after your loop started executing. That kind of magic doesn't exist and is impossible to implement in a consistent way. Whether or not that makes your program misbehave is something you will have to think about and can't be guessed from the question. It is pretty rare that you can completely ignore it.
Your usage of the List<> is however utterly unsafe.

Make a linked list thread safe

I know this has been asked before (and I will keep researching), but I need to know how to make a particular linked list function in a thread safe manner. My current issue is that I have one thread that loops through all elements in a linked list, and another may add more elements to the end of this list. Sometimes it happens that the one thread tries to add another element to the list while the first is busy iterating through it (which causes an exception).
I was thinking of just adding a variable (boolean flag) to say that the list is currently busy being iterated through, but then how do I check it and wait with the second thread (it is ok if it waits, as the first thread runs pretty quickly). The only way I can think of doing this is through the use of a while loop constantly checking this busy flag. I realized this was a very dumb idea as it would cause the CPU to work hard while doing nothing useful. And now I am here to ask for a better insight. I have read about locks and so on, but it does not seem to be relevant in my case, but perhaps I am wrong?
In the meanwhile I'll keep searching the internet and post back if I find a solution.
EDIT:
Let me know if I should post some code to clear things up, but I'll try and explain it more clearly.
So I have a class with a linked list in it that contains elements requiring processing. I have one thread that iterates through this list via a function call (let's call it "processElements"). I have a second thread that adds elements to process in a non-deterministic manner. However, sometimes it happens that it calls this addElement function while processElements is running. This means an element is being added to the linked list while it is being iterated through by the first thread, which is not allowed and causes an exception. Hope this clears it up.
I need the thread that adds new elements to yield until the processElements method is done executing.
To anyone stumbling on this problem. The accepted answer will give you a quick, an easy solution, but check out Brian Gideon's answer below for a more comprehensive answer, which will definitely give you more insight!
The exception is likely the result of the collection being changed in the middle of an iteration via IEnumerator. There are a few techniques you can use to maintain thread safety. I will present them in order of difficulty.
Lock Everything
This is by far the easiest and most trivial method for getting access to the data structure thread-safe. This pattern works well when the number of read and write operations are equally matched.
LinkedList<object> collection = new LinkedList<object>();

void Write()
{
    lock (collection)
    {
        collection.AddLast(GetSomeObject());
    }
}

void Read()
{
    lock (collection)
    {
        foreach (object item in collection)
        {
            DoSomething(item);
        }
    }
}
Copy-Read Pattern
This is a slightly more complex pattern. You will notice that a copy of the data structure is made prior to reading it. This pattern works well when the number of read operations is small compared to the number of writes and the penalty of the copy is relatively small.
LinkedList<object> collection = new LinkedList<object>();

void Write()
{
    lock (collection)
    {
        collection.AddLast(GetSomeObject());
    }
}

void Read()
{
    LinkedList<object> copy;
    lock (collection)
    {
        copy = new LinkedList<object>(collection);
    }
    foreach (object item in copy)
    {
        DoSomething(item);
    }
}
Copy-Modify-Swap Pattern
And finally we have the most complex and error-prone pattern. I actually do not recommend using this pattern unless you really know what you are doing. Any deviation from what I have below could lead to problems. It is easy to mess this one up. In fact, I have inadvertently screwed it up myself in the past. You will notice that a copy of the data structure is made prior to all modifications. The copy is then modified, and finally the original reference is swapped out with the new instance. Basically we are always treating collection as if it were immutable. This pattern works well when the number of write operations is small compared to the number of reads and the penalty of the copy is relatively small.
object lockobj = new object();
volatile LinkedList<object> collection = new LinkedList<object>();

void Write()
{
    lock (lockobj)
    {
        var copy = new LinkedList<object>(collection);
        copy.AddLast(GetSomeObject());
        collection = copy;
    }
}

void Read()
{
    LinkedList<object> local = collection;
    foreach (object item in local)
    {
        DoSomething(item);
    }
}
Update:
So I posed two questions in the comment section:
Why lock(lockobj) instead of lock(collection) on the write side?
Why local = collection on the read side?
Concerning the first question consider how the C# compiler will expand the lock.
void Write()
{
    bool acquired = false;
    object temp = lockobj;
    try
    {
        Monitor.Enter(temp, ref acquired);
        var copy = new LinkedList<object>(collection);
        copy.AddLast(GetSomeObject());
        collection = copy;
    }
    finally
    {
        if (acquired) Monitor.Exit(temp);
    }
}
Now hopefully it is easier to see what can go wrong if we used collection as the lock expression.
Thread A executes object temp = collection.
Thread B executes collection = copy.
Thread C executes object temp = collection.
Thread A acquires the lock with the original reference.
Thread C acquires the lock with the new reference.
Clearly this would be disastrous! Writes would get lost since the critical section is entered more than once.
Now the second question was a little tricky. You do not necessarily have to do this with the code I posted above. But, that is because I used the collection only once. Now consider the following code.
void Read()
{
    object x = collection.Last;
    // The collection may get swapped out right here.
    object y = collection.Last;
    if (x != y)
    {
        Console.WriteLine("It could happen!");
    }
}
The problem here is that collection could get swapped out at any time. This would be an incredibly difficult bug to find. This is why I always extract a local reference on the read side when doing this pattern. That ensures we are using the same collection on each read operation.
Again, because problems like these are so subtle I do not recommend using this pattern unless you really need to.
Here’s a quick example of how to use locks to synchronize your access to the list:
private readonly IList<string> elements = new List<string>();

public void ProcessElements()
{
    lock (this.elements)
    {
        foreach (string element in this.elements)
            ProcessElement(element);
    }
}

public void AddElement(string newElement)
{
    lock (this.elements)
    {
        this.elements.Add(newElement);
    }
}
A lock(o) statement means that the executing thread should acquire a mutual-exclusion lock on the object o, execute the statement block, and finally release the lock on o. If another thread attempts to acquire a lock on o concurrently (either for the same code block or for any other), it will block (wait) until the lock is released.
Thus, the crucial point is that you use the same object in all the lock statements that you want to synchronize with each other. The actual object you use may be arbitrary, as long as it is consistent. In the example above, we've declared our collection to be readonly, so we can safely use it as our lock. However, if this were not the case, you should lock on another object:
private IList<string> elements = new List<string>();
private readonly object syncLock = new object();

public void ProcessElements()
{
    lock (this.syncLock)
    {
        foreach (string element in this.elements)
            ProcessElement(element);
    }
}

public void AddElement(string newElement)
{
    lock (this.syncLock)
    {
        this.elements.Add(newElement);
    }
}

How to add a ListItem into a list while iterating (C#)?

I have a small application that uses a BackgroundWorker to process an IEnumerable<T> list at all times.
The code basically like this:
while (true)
{
    foreach (T item in list)
    {
        // Process each item and report progress
        // Add an object to a child list (List<T1> item.Result)
    }
    Thread.Sleep(500);
}
Now I have a button and a textbox which add items directly to the list.
The problem is that after I click the button, the BackgroundWorker continues to process its current item, but stops after finishing that item. It does not continue.
How can I safely add an item to the list without affecting the BackgroundWorker? Besides that, the background worker also adds objects to the items. What should the solution be?
Thank you
Have the background worker iterate over a copy of the original list, not the list itself.
while (true)
{
    foreach (T item in new List<T>(list))
    {
        ....
    }
    Thread.Sleep(500);
}
If you attempt to modify a collection while enumerating over it, the enumerator will throw an exception. From the docs:
An enumerator remains valid as long as the collection remains unchanged. If changes are made to the collection, such as adding, modifying, or deleting elements, the enumerator is irrecoverably invalidated and the next call to MoveNext or Reset throws an InvalidOperationException. If the collection is modified between MoveNext and Current, Current returns the element that it is set to, even if the enumerator is already invalidated.
You should learn the basics about multi-threaded programming first, so, here it goes.
Try something along this:
// shared queue
ConcurrentQueue<T> queue = new ConcurrentQueue<T>();
// shared wait handle, initially unsignaled (the constructor requires this flag)
AutoResetEvent autoEvent = new AutoResetEvent(false);
The queue is better here than the list because it allows you to add and remove elements without worrying about the index of the current element: you just Enqueue() items on one end and Dequeue() them from the other. Use a class from the System.Collections.Concurrent namespace, which handles thread-safe access for you automatically (and, for complicated reasons you might want to read up on later, is faster than a simple lock()).
Now, the foreground thread:
// schedule the work
queue.Enqueue(itemOfWork);
// and wake up our worker
autoEvent.Set();
The sprinkly-magical part here is the Set() invoked on our WaitHandle (yes, AutoResetEvent is an implementation of a WaitHandle). It wakes up a single thread that has been waiting for the synchronization event to fire, without using such ugly constructs as Thread.Sleep(). A call to Sleep() is almost always a sign of a mistake in multithreaded code!
Ok, for the last part - worker thread. Not many changes here:
while (true)
{
    // wait for the signal
    autoEvent.WaitOne();
    T item;
    // grab every pending work item; several may have been queued per signal
    while (queue.TryDequeue(out item))
    {
        // handle the item here
    }
}
You probably need to use the "lock" keyword to prevent simultaneously accessing the shared "list" variable from both places in the code.
http://msdn.microsoft.com/en-us/library/c5kehkcz(v=vs.71).aspx
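A minimal sketch of that, written as if inside a generic class (the lock object and method names are assumptions; it combines the lock with the copy-while-enumerating idea from the accepted answer):

private readonly object listLock = new object();
private List<T> list = new List<T>();

// BackgroundWorker loop
void ProcessLoop()
{
    while (true)
    {
        List<T> snapshot;
        lock (listLock) { snapshot = new List<T>(list); } // copy under the lock
        foreach (T item in snapshot)
        {
            // process item outside the lock
        }
        Thread.Sleep(500);
    }
}

// button click handler
void OnAddButtonClick(T newItem)
{
    lock (listLock) { list.Add(newItem); }
}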
