Atomically taking everything from a ConcurrentQueue - C#

I have multiple threads generating items and sticking them in a common ConcurrentQueue:
private ConcurrentQueue<GeneratedItem> queuedItems = new ConcurrentQueue<GeneratedItem>();
private void BunchOfThreads () {
    // ...
    queuedItems.Enqueue(new GeneratedItem(...));
    // ...
}
I have another, single consumer thread, but the way it needs to work in the context of this application is that, occasionally, it just needs to grab everything currently in the threads' queue, removing it from that queue, all in one shot. Something like:
private Queue<GeneratedItem> GetAllNewItems () {
    return queuedItems.TakeEverything(); // <-- not a real method
}
I think I looked through all the documentation (for the collection and its implemented interfaces) but I didn't seem to find anything like a "concurrently take all objects from queue", or even "concurrently swap contents with another queue".
I could do this no problem if I ditch the ConcurrentQueue and just protect a normal Queue with a lock, like this:
private Queue<GeneratedItem> queuedItems = new Queue<GeneratedItem>();
private void BunchOfThreads () {
    // ...
    lock (queuedItems) {
        queuedItems.Enqueue(new GeneratedItem(...));
    }
    // ...
}
private Queue<GeneratedItem> GetAllNewItems () {
    lock (queuedItems) {
        Queue<GeneratedItem> newItems = new Queue<GeneratedItem>(queuedItems);
        queuedItems.Clear();
        return newItems;
    }
}
But, I like the convenience of the ConcurrentQueue and also since I'm just learning C# I'm curious about the API; so my question is, is there a way to do this with one of the concurrent collections?
Is there perhaps some way to access whatever synchronization object ConcurrentQueue uses and lock it for myself for my own purposes so that everything plays nicely together? Then I can lock it, take everything, and release?

It depends on what you want to do. As per the comments in the source code:
//number of snapshot takers, GetEnumerator(), ToList() and ToArray() operations take snapshot.
These operations work by internally calling ToList(), which in turn relies on m_numSnapshotTakers and a spin mechanism:
/// <summary>
/// Copies the <see cref="ConcurrentQueue{T}"/> elements to a new <see
/// cref="T:System.Collections.Generic.List{T}"/>.
/// </summary>
/// <returns>A new <see cref="T:System.Collections.Generic.List{T}"/> containing a snapshot of
/// elements copied from the <see cref="ConcurrentQueue{T}"/>.</returns>
private List<T> ToList()
{
    // Increments the number of active snapshot takers. This increment must happen before the snapshot is
    // taken. At the same time, Decrement must happen after list copying is over. Only in this way, can it
    // eliminate race condition when Segment.TryRemove() checks whether m_numSnapshotTakers == 0.
    Interlocked.Increment(ref m_numSnapshotTakers);

    List<T> list = new List<T>();
    try
    {
        // store head and tail positions in buffer
        Segment head, tail;
        int headLow, tailHigh;
        GetHeadTailPositions(out head, out tail, out headLow, out tailHigh);

        if (head == tail)
        {
            head.AddToList(list, headLow, tailHigh);
        }
        else
        {
            head.AddToList(list, headLow, SEGMENT_SIZE - 1);
            Segment curr = head.Next;
            while (curr != tail)
            {
                curr.AddToList(list, 0, SEGMENT_SIZE - 1);
                curr = curr.Next;
            }
            // Add tail segment
            tail.AddToList(list, 0, tailHigh);
        }
    }
    finally
    {
        // This Decrement must happen after copying is over.
        Interlocked.Decrement(ref m_numSnapshotTakers);
    }
    return list;
}
If a snapshot is all you want, then you are in luck. However, there is seemingly no built-in way to get and remove all the items from a ConcurrentQueue in a thread-safe manner. You will need to bake your own synchronisation using lock or similar, or roll your own queue (which might not be all that difficult, looking at the source).
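For example, a minimal lock-based sketch of the swap approach (my illustration, not from the answer; GeneratedItem is the asker's type) - producers block only for the duration of the swap:

private readonly object swapLock = new object();
private Queue<GeneratedItem> queuedItems = new Queue<GeneratedItem>();

private void Enqueue(GeneratedItem item)
{
    lock (swapLock) { queuedItems.Enqueue(item); }
}

private Queue<GeneratedItem> TakeEverything()
{
    // swap the full queue for a fresh empty one; the caller then owns
    // the old queue and can read it without further locking
    lock (swapLock)
    {
        Queue<GeneratedItem> taken = queuedItems;
        queuedItems = new Queue<GeneratedItem>();
        return taken;
    }
}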

There is no such method, because it is ambiguous what TakeEverything should actually do:
Take item by item until Queue is empty and then return taken items.
Lock all access to the queue, take a snapshot of all items (thereby clearing the queue), unlock, and return the snapshot.
Consider the first scenario and imagine that other threads are writing to the queue at the time you are removing the items one by one - should the TakeEverything method include those in the result?
If yes then you can just write it as:
public List<GeneratedItem> TakeEverything()
{
    var list = new List<GeneratedItem>();
    while (queuedItems.TryDequeue(out var item))
    {
        list.Add(item);
    }
    return list;
}
If not, then I would still use ConcurrentQueue (because all the instance members - methods and properties - of an ordinary Queue are not thread-safe) and implement a custom lock around every read/write access, so you make sure you are not adding items while "taking everything" from the queue.

Related

Singleton object creation on multiple different threads using TPL

This is just a self-learning situation. I am pretty new to TPL and threading. Anyway, I am using a generic Singleton class and creating ~10K instances to check if my code is returning the same instance or creating a new instance every time. I am creating instances asynchronously using Task.Factory inside a for loop. To validate the creation of instances, I am returning a string with this info as a list of strings:
Iteration Counter
Name of instance
Hashcode of instance and
ThreadId
and displaying the list of strings in a listbox.
My Queries
On running, I found a few things:
the value of i inside the for loop is getting duplicated for the different instances
for those 10K iterations, I have only 8-9 threads created, instead of the expected 10K threads. I was expecting 10K threads to pop up, do their individual tasks and then disappear gracefully.
Can I use this in my projects, as class libraries, irrespective of the platforms - Web, Windows or Mobile?
Please do leave a note on my 10K threads thoughts :). Is it a good idea or a bad idea for multithreading?
My code
Singleton Class
public sealed class Singleton<T> where T : class
{
    static Singleton() { }
    private Singleton() { }
    public static T Instance { get; } = Activator.CreateInstance<T>();
}
Class: SingletonInThread
public class SingletonInThread
{
    /// <summary>
    /// Method responsible for creation of same instance, 10K times using Task.Factory.StartNew()
    /// </summary>
    /// <returns></returns>
    public async Task<IEnumerable<string>> LoopAsync()
    {
        List<Task<string>> list = new List<Task<string>>();
        for (int i = 0; i <= 9999; i++)
        {
            list.Add(Task.Factory.StartNew(() => CreateAndLogInstances(i)));
        }
        return await Task.WhenAll<string>(list);
    }

    /// <summary>
    /// Creates new instance of Logger and logs its creation with few details. Kind of Unit of Work.
    /// </summary>
    /// <param name="i"></param>
    /// <returns></returns>
    private string CreateAndLogInstances(int i)
    {
        var instance = Singleton<Logger>.Instance;
        return $"Instance{i}. Name of instance= {instance.ToString()} && Hashcode ={instance.GetHashCode()} && ThreadId= {Thread.CurrentThread.ManagedThreadId}";
    }
}
Frontend
On the UI side, on the button click event, populating the listbox:
private async void button1_Click(object sender, EventArgs e)
{
    IEnumerable<string> list = await new SingletonInThread().LoopAsync();
    foreach (var item in list)
    {
        listBox1.Items.Add(item);
    }
}
Also, I noticed that my UI gets blocked while populating the listbox with 10K items. Please do help me populate it in an asynchronous way. I know about BackgroundWorker, BeginInvoke and MethodInvoker. Is there anything other than those in TPL?
Output
(screenshot omitted)
Update
As suggested, if I use Parallel.For, then instead of 10K strings I am getting a random figure like 9,491 or 9,326, i.e. less than 10K. I don't know why.
Here's my updated code for the LoopAsync method using Parallel.For:
public IEnumerable<string> LoopAsync()
{
    List<string> list = new List<string>();
    Parallel.For(0, 9999, i =>
    {
        list.Add(CreateAndLogInstances(i));
    });
    return list;
}
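As a side note, two things likely contribute here: the upper bound of Parallel.For is exclusive, so Parallel.For(0, 9999, ...) runs only 9,999 iterations, and List<T>.Add is not thread-safe, so concurrent adds can silently lose items. A sketch avoiding both, assuming a thread-safe ConcurrentBag is acceptable (LoopParallel is a hypothetical name):

public IEnumerable<string> LoopParallel()
{
    var bag = new ConcurrentBag<string>();   // safe for concurrent Add calls
    Parallel.For(0, 10000, i =>              // upper bound is exclusive: i = 0..9999
    {
        bag.Add(CreateAndLogInstances(i));
    });
    return bag;
}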
the value of i inside the for loop is getting duplicated for the different instances
This doesn't have anything to do with threading/parallel/asynchrony or singleton instances. You're seeing this because closures capture variables, not values. So this code:
for (int i = 0; i <= 9999; i++)
{
    list.Add(Task.Factory.StartNew(() => CreateAndLogInstances(i)));
}
is passing the variable i to the closure () => CreateAndLogInstances(i), not the current value of i. To capture the current value and use that in your closure, you would need a separate variable per closure, as recommended in a comment:
for (int i = 0; i <= 9999; i++)
{
    var index = i;
    list.Add(Task.Factory.StartNew(() => CreateAndLogInstances(index)));
}
for those 10K iterations, I have only 8-9 threads created, instead of the expected 10K threads. I was expecting 10K threads to pop up, do their individual tasks and then disappear gracefully.
No, you would very much not want that to happen. Thread creation and destruction have a lot of overhead. StartNew and Parallel queue work to the thread pool, and the thread pool will grow quickly to a certain point and then grow slowly, on purpose. This is because on, e.g., an 8-core machine, there is no point in having 10K threads because they cannot all run anyway.
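To see this for yourself, a small sketch (my illustration) that counts the distinct pool threads actually used by 10K tasks:

var threadIds = new ConcurrentDictionary<int, byte>();
var tasks = new List<Task>();
for (int i = 0; i < 10000; i++)
{
    tasks.Add(Task.Run(() =>
        threadIds.TryAdd(Thread.CurrentThread.ManagedThreadId, 0)));
}
Task.WaitAll(tasks.ToArray());
// typically prints a number near the core count, not 10000
Console.WriteLine("Distinct threads used: " + threadIds.Count);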
Can I use this in my projects, as class libraries, irrespective of the platforms - Web, Windows or Mobile?
I never recommend using parallel processing on web applications, because your web host has already parallelized your requests. So doing additional parallel processing tends to burden your web server and potentially make it much less responsive to other requests.
Also, I noticed that my UI gets blocked while populating the listbox with 10K items. Please do help me populate it in an asynchronous way.
You normally want to avoid making 10k UI updates at practically the same time. Parallel processing doesn't help with a UI because all UI updates have to be done on the UI thread. Either put all the results in the list with a single call, or use something like control virtualization.
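For example, with a WinForms ListBox the results can go in with one call (a sketch; BeginUpdate/EndUpdate suppress repainting while items are added):

listBox1.BeginUpdate();
try
{
    // one AddRange call instead of 10,000 Items.Add calls
    listBox1.Items.AddRange(list.ToArray());
}
finally
{
    listBox1.EndUpdate();
}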
Adding the same object to a WinForms list box multiple times results in multiple lines in the list box, e.g.:
private void Form1_Load(object sender, EventArgs e)
{
    string foo = "Hello, world";
    listBox1.Items.Add(foo);
    listBox1.Items.Add(foo);
    listBox1.Items.Add(foo);
}
yields three lines proclaiming Hello, world. So, it isn't unexpected that you receive 10,000 lines in your example. But are they the same object, or are you creating multiple objects?
I created my own Logger class:
public class Logger
{
    static private Random rnd = new Random();
    public int Id { get; } = rnd.Next();

    public override string ToString()
    {
        return Id.ToString();
    }
}
Indeed, each output line has the same Id, thus indicating the same object instance was used in each case. You also output the call to GetHashCode(), which also is the same in each case, indicating a high probability that you are dealing with only one instance.

Is there such a synchronization tool as "single-item-sized async task buffer"?

Many times in UI development I handle events in such a way that when an event first comes - I immediately start processing, but if there is one processing operation in progress - I wait for it to complete before I process another event. If more than one event occurs before the operation completes - I only process the most recent one.
The way I typically do that is: my process method has a loop, and in my event handler I check a field that indicates whether I am currently processing something. If I am, I put my current event arguments in another field that is basically a one-item-sized buffer, and when the current processing pass completes, I check whether there is another event to process and I loop until I am done.
Now this seems a bit too repetitive and possibly not the most elegant way to do it, though it seems to otherwise work fine for me. I have two questions then:
Does what I need to do have a name?
Is there some reusable synchronization type out there that could do that for me?
I'm thinking of adding something to the set of async coordination primitives by Stephen Toub that I included in my toolkit.
So first, we'll handle the case that you described in which the method is always used from the UI thread, or some other synchronization context. The Run method can itself be async to handle all of the marshaling through the synchronization context for us.
If we're running we just set the next stored action. If we're not, then we indicate that we're now running, await the action, and then continue to await the next action until there is no next action. We ensure that whenever we're done we indicate that we're done running:
public class EventThrottler
{
    private Func<Task> next = null;
    private bool isRunning = false;

    public async void Run(Func<Task> action)
    {
        if (isRunning)
            next = action;
        else
        {
            isRunning = true;
            try
            {
                await action();
                while (next != null)
                {
                    var nextCopy = next;
                    next = null;
                    await nextCopy();
                }
            }
            finally
            {
                isRunning = false;
            }
        }
    }

    private static Lazy<EventThrottler> defaultInstance =
        new Lazy<EventThrottler>(() => new EventThrottler());
    public static EventThrottler Default
    {
        get { return defaultInstance.Value; }
    }
}
Because the class is, at least generally, going to be used exclusively from the UI thread, there will generally need to be only one, so I added a convenience property for a default instance; but since it may still make sense for there to be more than one in a program, I didn't make it a singleton.
Run accepts a Func<Task> with the idea that it would generally be an async lambda. It might look like:
public class Foo
{
    public void SomeEventHandler(object sender, EventArgs args)
    {
        EventThrottler.Default.Run(async () =>
        {
            await Task.Delay(1000);
            // do other stuff
        });
    }
}
Okay, so, just to be verbose, here is a version that handles the case where the event handlers are called from different threads. I know you said that you assume they're all called from the UI thread, but I generalized it a bit. This means locking over all access to instance fields of the type in a lock block, but not actually executing the function inside of a lock block. That last part is important, not just for performance and to ensure we're not blocking other callers from simply setting the next field, but also to avoid issues with that action itself calling Run, so that it doesn't need to deal with re-entrancy issues or potential deadlocks. This pattern, of doing work in a lock block and then responding based on conditions determined inside the lock, means setting local variables to indicate what should be done after the lock ends.
public class EventThrottlerMultiThreaded
{
    private object key = new object();
    private Func<Task> next = null;
    private bool isRunning = false;

    public void Run(Func<Task> action)
    {
        bool shouldStartRunning = false;
        lock (key)
        {
            if (isRunning)
                next = action;
            else
            {
                isRunning = true;
                shouldStartRunning = true;
            }
        }

        Action<Task> continuation = null;
        continuation = task =>
        {
            Func<Task> nextCopy = null;
            lock (key)
            {
                if (next != null)
                {
                    nextCopy = next;
                    next = null;
                }
                else
                {
                    isRunning = false;
                }
            }
            if (nextCopy != null)
                nextCopy().ContinueWith(continuation);
        };

        if (shouldStartRunning)
            action().ContinueWith(continuation);
    }
}
Does what I need to do have a name?
What you're describing sounds a bit like a trampoline combined with a collapsing queue. A trampoline is basically a loop that iteratively invokes thunk-returning functions. An example is the CurrentThreadScheduler in the Reactive Extensions. When an item is scheduled on a CurrentThreadScheduler, the work item is added to the scheduler's thread-local queue, after which one of the following things will happen:
If the trampoline is already running (i.e., the current thread is already processing the thread-local queue), then the Schedule() call returns immediately.
If the trampoline is not running (i.e., no work items are queued/running on the current thread), then the current thread begins processing the items in the thread-local queue until it is empty, at which point the call to Schedule() returns.
A collapsing queue accumulates items to be processed, with the added twist that if an equivalent item is already in the queue, then that item is simply replaced with the newer item (resulting in only the most recent of the equivalent items remaining in the queue, as opposed to both). The idea is to avoid processing stale/obsolete events. Consider a consumer of market data (e.g., stock ticks). If you receive several updates for a frequently traded security, then each update renders the earlier updates obsolete. There is likely no point in processing earlier ticks for the same security if a more recent tick has already arrived. Thus, a collapsing queue is appropriate.
In your scenario, you essentially have a trampoline processing a collapsing queue for which all incoming events are considered equivalent. This results in an effective maximum queue size of 1, as every item added to a non-empty queue will result in the existing item being evicted.
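As an aside (my sketch, not part of the original answer), a single-slot collapsing buffer can be as small as one field swapped atomically:

// a one-slot "collapsing queue": a newer item simply replaces the pending one
private Func<Task> pending;

public void Offer(Func<Task> work)
{
    Interlocked.Exchange(ref pending, work);          // overwrite whatever was waiting
}

public Func<Task> TryTake()
{
    return Interlocked.Exchange(ref pending, null);   // null when nothing is pending
}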
Is there some reusable synchronization type out there that could do that for me?
I do not know of an existing solution that would serve your needs, but you could certainly create a generalized trampoline or event loop capable of supporting pluggable scheduling strategies. The default strategy could use a standard queue, while other strategies might use a priority queue or a collapsing queue.
What you're describing sounds very similar to how TPL Dataflow's BroadcastBlock behaves: it always remembers only the last item that you sent to it. If you combine it with an ActionBlock that executes your action and has capacity only for the item currently being processed, you get what you want (the method needs a better name):
// returns send delegate
private static Action<T> CreateProcessor<T>(Action<T> executedAction)
{
    var broadcastBlock = new BroadcastBlock<T>(null);
    var actionBlock = new ActionBlock<T>(
        executedAction, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
    broadcastBlock.LinkTo(actionBlock);
    return item => broadcastBlock.Post(item);
}
Usage could be something like this:
var processor = CreateProcessor<int>(
    i =>
    {
        Console.WriteLine(i);
        Thread.Sleep(i);
    });

processor(100);
processor(1);
processor(2);
Output:
100
2

Exit a loop if another thread enters the method

I have a multi-threading issue.
I have a method that is called to refresh several items.
In this method, I iterate over a list of items and refresh one of their properties.
The list has a lot of elements and we have to do some math to compute that property.
The current code of this operation look like this:
public void AddItemsWithLayoutRefresh(IEnumerable<MyItem> items) {
    _control.Invoke(() => {
        AddItems(items);
        for (int i = 0; i < _guiItems.Count; i++) {
            // The goal is to have a condition here to "break" the loop and let the next call to RefreshLayout proceed
            _guiItems[i].Propriety = ComputePropriety(_guiItems[i]);
        }
    });
}
The problem is that I may have 4 calls, which are currently just blocking on the Invoke.
I have to finish the AddItems method, but everything that is in the for loop can be aborted without any issue if I know that it will be executed again just after.
But how to do this in a thread-safe way?
If I put a private bool _isNewRefreshHere;, set to true before entering the Invoke, then check it in the Invoke, I have no guarantee that two calls haven't already reached the Invoke BEFORE I check it in the for loop.
So how can I break out of my loop when a new call is made to my method?
Solution
Based on Andrej Mohar's answer, I did the following:
private long m_refreshQueryCount;

public void AddItemsWithLayoutRefresh(IEnumerable<MyItem> items) {
    Interlocked.Increment(ref m_refreshQueryCount);
    _control.Invoke(() => {
        Interlocked.Decrement(ref m_refreshQueryCount);
        AddItems(items);
        for (int i = 0; i < _guiItems.Count; i++) {
            if (Interlocked.Read(ref m_refreshQueryCount) > 0) {
                break;
            }
            _guiItems[i].Propriety = ComputePropriety(_guiItems[i]);
        }
    });
}
This seems to work very nicely.
If I were you, I'd try to make a thread-safe waiting counter. You can use Interlocked methods like Increment and Decrement. What these basically do is increment or decrement the value as an atomic operation, which is considered to be thread-safe. So you increment the variable before the Invoke call; this will let you know how many threads are in the waiting queue. You decrement the variable after the for loop finishes and before the end of the Invoke block. You can then check the number of waiting threads inside the for statement and break out of the loop if the number is greater than 1. This way you should know exactly how many threads are in the execution chain.
I would do it in the following way:
private readonly object _refresherLock = new object();
private bool _isNewRefreshHere = false;
private AutoResetEvent _refresher = new AutoResetEvent(true);

public void AddItemsWithLayoutRefresh(IEnumerable<MyItem> items)
{
    lock (_refresherLock)
    {
        if (_isNewRefreshHere)
        {
            return;
        }
        _isNewRefreshHere = true;
    }
    _refresher.WaitOne();
    _isNewRefreshHere = false;
    _control.Invoke(() =>
    {
        AddItems(items);
        for (int i = 0; i < _guiItems.Count && !_isNewRefreshHere; i++)
        {
            _guiItems[i].Propriety = ComputePropriety(_guiItems[i]);
        }
        _refresher.Set();
    });
}
That is:
You can always cancel the current update with a new one.
You cannot queue up more than one update at a time.
You are guaranteed to have no cross-threading conflicts.
You should test that code since I did not. :)

Share a List<T> between multiple threads

Am I right in saying that I only need to use lock to Add/Remove/Change the List, or do I also need to lock it when iterating over it?
So am I thread safe by doing this:
class ItemsList
{
    List<int> items = new List<int>();
    object listLock = new object();

    public void Add(int item)
    {
        lock (listLock)
        {
            items.Add(item);
        }
    }

    public void Remove(int item)
    {
        lock (listLock)
        {
            items.Remove(item);
        }
    }

    public void IncrementAll()
    {
        // note: no lock here - this iteration is what the question asks about.
        // (building a new list, since a foreach iteration variable cannot be assigned to)
        var incremented = new List<int>();
        foreach (var item in items)
        {
            incremented.Add(item + 1);
        }
        items = incremented;
    }
}
You should definitely lock when iterating over it too - if the list is changed while you're iterating over it, an exception will be thrown.
From the docs for List<T>.GetEnumerator:
The enumerator does not have exclusive access to the collection; therefore, enumerating through a collection is intrinsically not a thread-safe procedure. To guarantee thread safety during enumeration, you can lock the collection during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization.
Additionally, even a single read from a List<T> isn't thread-safe if you could be writing to it as well - even if it doesn't fail, there's no guarantee that you'll get the most recent value.
Basically, List<T> is only safe for multiple threads if it's not written to after the last point at which its state becomes visible to all threads.
If you want a thread-safe collection, and if you're using .NET 4 or higher, take a look at the System.Collections.Concurrent namespace.
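For illustration, a locked version of the question's IncrementAll might look like this (a sketch reusing the asker's fields):

public void IncrementAll()
{
    lock (listLock)
    {
        // hold the lock for the whole enumeration so writers cannot
        // invalidate the enumerator mid-loop
        var incremented = new List<int>();
        foreach (var item in items)
        {
            incremented.Add(item + 1);
        }
        items = incremented;
    }
}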
List<T> is not thread-safe generally. Having multiple readers will not cause any issues; however, you cannot write to the list while it is being read. So you would need to lock on both read and write, or use something like a System.Threading.ReaderWriterLock (which allows multiple readers but only one writer). If you are developing under .NET 4.0 or higher, you could use a BlockingCollection instead, which is a thread-safe collection.
No, that isn't safe. You will get a "collection was modified" kind of exception if another thread modifies it while you are reading it.
The most efficient way to fix this is to use a ReaderWriterLockSlim to control access, so that multiple threads can be reading it simultaneously, and it will only get locked when something tries to modify it.
You're not even thread safe with what you have if you never iterate it.
You need to define what types of operations you are doing with the data structure before we can discuss whether or not it will work as intended.
In the general case though, you do need to lock while reading. As it is, someone could add an item while you're in the middle of iterating and it would break all kinds of things. Even reading a single item could be broken if you added an item in the middle of the read.
Also note that this would, at best, make each operation logically atomic. If you're ever performing multiple operations and making assumptions about the state of the data structure then that won't be enough.
In many cases, to resolve this issue, you need to do your locking on the caller side, rather than just wrapping each operation in a lock.
You should probably use a ReaderWriterLockSlim so that multiple threads can read the collection, but only one can modify it.
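A sketch of that approach (my illustration, reusing the question's fields):

private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
private List<int> items = new List<int>();

public void Add(int item)
{
    rwLock.EnterWriteLock();            // exclusive: blocks readers and writers
    try { items.Add(item); }
    finally { rwLock.ExitWriteLock(); }
}

public int CountItems()
{
    rwLock.EnterReadLock();             // shared: many readers can run at once
    try { return items.Count; }
    finally { rwLock.ExitReadLock(); }
}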
On IncrementAll you will get an InvalidOperationException because of the changes made to the collection. You can see it in a unit test like this:
ItemsList il = new ItemsList();

Task ts = new Task(() =>
{
    for (int i = 0; i < 100000; i++)
    {
        il.Add(i);
        System.Threading.Thread.Sleep(100);
    }
});
ts.Start();

Task ts2 = new Task(() =>
{
    // DoSomeActivity
    il.IncrementAll();
});
ts2.Start();

Console.Read();
Iteration must be locked also!
You might want to take a look at ConcurrentQueue<T>.
This is basically a thread-safe queue (as far as I'm aware), which is rather handy. You can use it a bit like this:
public ConcurrentQueue<yourType> alarmQueue = new ConcurrentQueue<yourType>();
System.Timers.Timer timer;

public QueueManager()
{
    timer = new System.Timers.Timer(1000);
    timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed);
    timer.Enabled = true;
}

void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    DeQueueAlarm();
}

private void DeQueueAlarm()
{
    yourType yourtype;
    while (alarmQueue.TryDequeue(out yourtype))
    {
        // do stuff
    }
}
Edit: Just as John said, this is available in .NET 4 onwards. Read more here: http://msdn.microsoft.com/en-us/library/dd267265.aspx

Multi-threading problem when checking the list Count property

I have a List<Job> newJobs. Some threads add items to that list, and another thread removes items from it if it's not empty. I have a ManualResetEvent newJobEvent which is set when items are added to the list, and reset when items are removed from it:
Adding items to the list is performed in the following way:
lock (syncLock)
{
    newJobs.Add(job);
}
newJobEvent.Set();
Jobs removal is performed in the following way:
if (newJobs.Count == 0)
    newJobEvent.WaitOne();

lock (syncLock)
{
    job = newJobs.First();
    newJobs.Remove(job);
    /* do some processing */
}
newJobEvent.Reset();
When the line
job = newJobs.First()
is executed I sometimes get an exception that the list is empty. I guess that the check:
if (newJobs.Count == 0)
    newJobEvent.WaitOne();
should also be in the lock statement but I'm afraid of deadlocks on the line newJobEvent.WaitOne();
How can I solve it?
Many thanks and sorry for the long post!
You are right. Calling WaitOne inside a lock could lead to a deadlock, and the check to see if the list is empty needs to be done inside the lock, otherwise there could be a race with another thread trying to remove an item. Now, your code looks suspiciously like the producer-consumer pattern, which is usually implemented with a blocking queue. If you are using .NET 4.0 then you can take advantage of the BlockingCollection class.
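With BlockingCollection the producer/consumer pair shrinks to something like this (a sketch; Job stands for the asker's item type):

private BlockingCollection<Job> newJobs = new BlockingCollection<Job>();

// producer threads:
newJobs.Add(job);

// consumer thread - Take blocks until an item is available, no event needed:
Job next = newJobs.Take();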
However, let me go over a couple of ways you can do it yourself. The first uses a List and a ManualResetEvent to demonstrate how this could be done using the data structures in your question. Notice the use of a while loop in the Take method.
public class BlockingJobsCollection
{
    private List<Job> m_List = new List<Job>();
    private ManualResetEvent m_Signal = new ManualResetEvent(false);

    public void Add(Job item)
    {
        lock (m_List)
        {
            m_List.Add(item);
            m_Signal.Set();
        }
    }

    public Job Take()
    {
        while (true)
        {
            lock (m_List)
            {
                if (m_List.Count > 0)
                {
                    Job item = m_List.First();
                    m_List.Remove(item);
                    if (m_List.Count == 0)
                    {
                        m_Signal.Reset();
                    }
                    return item;
                }
            }
            m_Signal.WaitOne();
        }
    }
}
But this is not how I would do it. I would go with the simpler solution below, which uses Monitor.Wait and Monitor.Pulse. Monitor.Wait is useful because it can be called inside a lock. In fact, it is supposed to be done that way.
public class BlockingJobsCollection
{
    private Queue<Job> m_Queue = new Queue<Job>();

    public void Add(Job item)
    {
        lock (m_Queue)
        {
            m_Queue.Enqueue(item);
            Monitor.Pulse(m_Queue);
        }
    }

    public Job Take()
    {
        lock (m_Queue)
        {
            while (m_Queue.Count == 0)
            {
                Monitor.Wait(m_Queue);
            }
            return m_Queue.Dequeue();
        }
    }
}
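Usage of either class is then straightforward (my sketch):

var jobs = new BlockingJobsCollection();

// producer threads:
jobs.Add(new Job());

// consumer thread: blocks inside Take() until a job arrives
Job next = jobs.Take();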
Not answering your question, but if you are using .NET Framework 4, you can use the new ConcurrentQueue<T>, which does all the locking for you.
Regarding your question:
One scenario that I can think of causing such a problem is the following:
The insertion thread enters the lock, calls newJob.Add, leaves the lock.
Context switch to the removal thread. It checks for emptiness, sees an item, enters the locked area, removes the item, and resets the event - which hasn't even been set yet.
Context switch back to the insertion thread, the event is set.
Context switch back to the removal thread. It checks for emptiness, sees no items, waits for the event - which is already set - tries to get the first item... Bang!
Set and reset the event inside the lock and you should be fine.
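In other words, something like this (a sketch of the fix, using the question's fields; the empty check and WaitOne stay outside the lock as in the question):

// producer:
lock (syncLock)
{
    newJobs.Add(job);
    newJobEvent.Set();       // set while still holding the lock
}

// consumer:
lock (syncLock)
{
    job = newJobs.First();
    newJobs.Remove(job);
    if (newJobs.Count == 0)
        newJobEvent.Reset(); // reset while holding the lock, and only when empty
}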
I don't see why removal, in the case of zero objects, should wait for one to be added and then remove it. That seems to go against the logic.
