I have a System.Collections.Generic.List<T> to which I only ever add items in a timer callback. The timer is restarted only after the operation completes.
I have a System.Collections.Concurrent.ConcurrentQueue<T> which stores indices of added items in the list above. This store operation is also always performed in the same timer callback described above.
Is a read operation that iterates the queue and accesses the corresponding items in the list thread safe?
Sample code:
private List<object> items;
private ConcurrentQueue<int> queue;
private Timer timer;

private void callback(object state)
{
    int index = items.Count;
    items.Add(new object());
    if (true) // some condition here
        queue.Enqueue(index);
    timer.Change(TimeSpan.FromMilliseconds(500), TimeSpan.FromMilliseconds(-1));
}

// This can be called from any thread
public IEnumerable<object> AccessItems()
{
    foreach (var index in queue)
    {
        yield return items[index];
    }
}
My understanding:
Even if the list is resized when it is being indexed, I am only accessing an item that already exists, so it does not matter whether it is read from the old array or the new array. Hence this should be thread-safe.
Is a read operation that iterates the queue and accesses the corresponding items in the list thread safe?
Is it documented as being thread safe?
If no, then it is foolish to treat it as thread safe, even if it is in this implementation by accident. Thread safety should be by design.
Sharing memory across threads is a bad idea in the first place; if you don't do it then you don't have to ask whether the operation is thread safe.
If you have to do it then use a collection designed for shared memory access.
If you can't do that then use a lock. Locks are cheap if uncontended.
If you have a performance problem because your locks are contended all the time then fix that problem by changing your threading architecture rather than trying to do dangerous and foolish things like low-lock code. No one writes low-lock code correctly except for a handful of experts. (I am not one of them; I don't write low-lock code either.)
Even if the list is resized when it is being indexed, I am only accessing an item that already exists, so it does not matter whether it is read from the old array or the new array.
That's the wrong way to think about it. The right way to think about it is:
If the list is resized then the list's internal data structures are being mutated. It is possible that the internal data structure is mutated into an inconsistent form halfway through the mutation, that will be made consistent by the time the mutation is finished. Therefore my reader can see this inconsistent state from another thread, which makes the behaviour of my entire program unpredictable. It could crash, it could go into an infinite loop, it could corrupt other data structures, I don't know, because I'm running code that assumes a consistent state in a world with inconsistent state.
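For concreteness, here is a minimal sketch of the "just use a lock" option applied to the question's code; the sync field is an illustrative addition, not part of the original:

private readonly object sync = new object();
private List<object> items;
private ConcurrentQueue<int> queue;
private Timer timer;

private void callback(object state)
{
    lock (sync)
    {
        // All mutation of both collections happens under one lock.
        int index = items.Count;
        items.Add(new object());
        if (true) // some condition here
            queue.Enqueue(index);
    }
    timer.Change(TimeSpan.FromMilliseconds(500), TimeSpan.FromMilliseconds(-1));
}

// This can be called from any thread.
public IEnumerable<object> AccessItems()
{
    lock (sync)
    {
        // Snapshot under the same lock so readers never observe
        // the collections mid-mutation.
        var results = new List<object>();
        foreach (var index in queue)
            results.Add(items[index]);
        return results;
    }
}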
Big edit
The ConcurrentQueue is only safe with regard to its own operations, such as Enqueue(T) and TryDequeue(out T).
You're doing a foreach on it, and that doesn't get synchronized at the level your algorithm requires.
The biggest problem in your particular case is enumerating the queue (which is a collection in its own right) while it is being written: with a plain Queue that would throw the well-known "Collection has been modified" exception, and even though ConcurrentQueue makes the enumeration itself safe at a microscopic level, the snapshot you iterate is not synchronized with the List it indexes into. Why is that the biggest problem? Because you are adding things to the queue after you've added the corresponding objects to the list (there's also a great need for the List to be synchronized, but that and the biggest problem get solved with just one "bullet").
Therefore you absolutely need to synchronize access to the queues (and the central List while you're at it) using another means of synchronization (and by that I mean you can also forget about ConcurrentQueue and use a simple Queue or even a List, since you never Dequeue things).
So just do something like:
public void Writer(object toWrite) {
    this.rwLock.EnterWriteLock();
    try {
        int tailIndex = this.list.Count;
        this.list.Add(toWrite);
        if (..condition1..)
            this.queue1.Enqueue(tailIndex);
        if (..condition2..)
            this.queue2.Enqueue(tailIndex);
        if (..condition3..)
            this.queue3.Enqueue(tailIndex);
        ..etc..
    } finally {
        this.rwLock.ExitWriteLock();
    }
}
and in the AccessItems:
public IEnumerable<object> AccessItems(int queueIndex) {
    Queue<int> whichQueue = null;
    switch (queueIndex) {
        case 1: whichQueue = this.queue1; break;
        case 2: whichQueue = this.queue2; break;
        case 3: whichQueue = this.queue3; break;
        ..etc..
        default: throw new NotSupportedException("Invalid queue disambiguating params");
    }
    List<object> results = new List<object>();
    this.rwLock.EnterReadLock();
    try {
        foreach (var index in whichQueue)
            results.Add(this.list[index]);
    } finally {
        this.rwLock.ExitReadLock();
    }
    return results;
}
And, based on my entire understanding of the cases in which your app accesses the List and the various Queues, it should be 100% safe.
End of big edit
First of all: What is this thing you call Thread-Safe? by Eric Lippert
In your particular case, I guess the answer is no.
It is not that inconsistencies might arise in the global context (the actual list).
Instead, it is possible that the actual readers (who might very well "collide" with the unique writer) end up with inconsistencies in themselves: their very own stacks, meaning local variables of all methods, parameters, and also their logically isolated portion of the heap.
The possibility of such per-thread inconsistencies (the Nth thread wants to learn the number of elements in the list and finds that the value is 39404999 although in reality you only added 3 values) is enough to declare that, generally speaking, that architecture is not thread-safe, even though the readers don't actually change the globally accessible list; they simply read it in a flawed manner.
I suggest you use the ReaderWriterLockSlim class.
I think you will find it fits your needs:
private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
private List<object> items;
private ConcurrentQueue<int> queue;
private Timer timer;

private void callback(object state)
{
    this.rwLock.EnterWriteLock();
    try {
        // in this place, right here, there can be only ONE writer
        // and while the writer is between EnterWriteLock and ExitWriteLock
        // there can exist no readers in the following method (between EnterReadLock
        // and ExitReadLock)

        // we read the tail index and add the item to the List
        // AND do the enqueue "atomically" (as loose a term as thread-safe)
        int index = items.Count;
        items.Add(new object());
        if (true) // some condition here
            queue.Enqueue(index);
    } finally {
        this.rwLock.ExitWriteLock();
    }
    timer.Change(TimeSpan.FromMilliseconds(500), TimeSpan.FromMilliseconds(-1));
}
// This can be called from any thread
public IEnumerable<object> AccessItems()
{
    List<object> results = new List<object>();
    this.rwLock.EnterReadLock();
    try {
        // in this place there can exist a thousand readers
        // (doing these actions right here, between EnterReadLock and ExitReadLock)
        // all at the same time, but NO writers
        foreach (var index in queue)
        {
            results.Add(this.items[index]);
        }
    } finally {
        this.rwLock.ExitReadLock();
    }
    return results; // or foreach + yield return if you like that more :)
}
No, because you are reading from and writing to the same object concurrently. This is not documented to be safe, so you can't be sure it is safe. Don't do it.
The fact that it happens to be unsafe as of .NET 4.0 means nothing, by the way. Even if it were safe according to Reflector, that could change at any time. You can't rely on the current version to predict future versions.
Don't try to get away with tricks like this. Why not just do it in an obviously safe way?
As a side note: Two timer callbacks can execute at the same time, so your code is doubly broken (multiple writers). Don't try to pull off tricks with threads.
It is thread-safish. The foreach statement uses the ConcurrentQueue.GetEnumerator() method. Which promises:
The enumeration represents a moment-in-time snapshot of the contents of the queue. It does not reflect any updates to the collection after GetEnumerator was called. The enumerator is safe to use concurrently with reads from and writes to the queue.
Which is another way of saying that your program isn't going to blow up randomly with an inscrutable exception message like the kind you'll get when you use the Queue class. Beware of the consequences though, implicit in this guarantee is that you may well be looking at a stale version of the queue. Your loop will not be able to see any elements that were added by another thread after your loop started executing. That kind of magic doesn't exist and is impossible to implement in a consistent way. Whether or not that makes your program misbehave is something you will have to think about and can't be guessed from the question. It is pretty rare that you can completely ignore it.
Your usage of the List<> is however utterly unsafe.
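If the queue's snapshot semantics are acceptable, one hedged way to fix the List problem is to stop sharing the List with readers at all and enqueue the object references themselves instead of indices. This is a sketch of that assumption, not a drop-in replacement for the original design:

private List<object> items;            // still written only by the timer callback
private ConcurrentQueue<object> queue; // now holds the objects themselves, not indices
private Timer timer;

private void callback(object state)
{
    var item = new object();
    items.Add(item); // single writer, and no other thread reads the list
    if (true) // some condition here
        queue.Enqueue(item); // readers receive the reference via the queue
    timer.Change(TimeSpan.FromMilliseconds(500), TimeSpan.FromMilliseconds(-1));
}

// Safe from any thread: ToArray takes a moment-in-time snapshot of the
// queue and never touches the unsynchronized List.
public IEnumerable<object> AccessItems()
{
    return queue.ToArray();
}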
Related
I have a method which reads a text file that contains an int value per line. To make reading faster, I used Parallel.ForEach, but the behaviour I am seeing is unexpected. I have 800 lines in the file, but when I run this method, it returns a different count for the HashSet every time. From what I have read, Parallel.ForEach spawns multiple threads and returns the result when all threads have completed their work, but what my code does contradicts that. Or am I missing something important here?
Here is my method:
private HashSet<int> GetKeyItemsProcessed()
{
    HashSet<int> keyItems = new HashSet<int>();
    if (!File.Exists(TrackingFilePath))
        return keyItems;

    // normal foreach works fine
    //foreach (var keyItem in File.ReadAllLines(TrackingFilePath))
    //{
    //    keyItems.Add(int.Parse(keyItem));
    //}

    // this does not return the right number of HashSet rows
    Parallel.ForEach(File.ReadAllLines(TrackingFilePath).AsParallel(), keyItem =>
    {
        keyItems.Add(int.Parse(keyItem));
    });
    return keyItems;
}
HashSet.Add is NOT thread safe.
From MSDN:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
The unpredictability of multithreaded timing could be, and seems to be, causing issues.
You could wrap the access in a synchronization construct, which is sometimes faster than a concurrent collection but may not speed anything up in some cases. As others have mentioned, another option is to use a thread-safe collection like ConcurrentDictionary or ConcurrentQueue, though those may have additional memory overhead.
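For illustration, a minimal sketch of the locking option (the lock object and its name are additions, not part of the original code):

private static readonly object keyItemsLock = new object();

private HashSet<int> GetKeyItemsProcessed()
{
    HashSet<int> keyItems = new HashSet<int>();
    if (!File.Exists(TrackingFilePath))
        return keyItems;

    Parallel.ForEach(File.ReadAllLines(TrackingFilePath), keyItem =>
    {
        int value = int.Parse(keyItem); // parse outside the lock
        lock (keyItemsLock)
        {
            keyItems.Add(value); // HashSet<T>.Add is not thread safe, so serialize it
        }
    });
    return keyItems;
}

Only the Add itself is serialized here; whether the remaining parallelism pays for the locking overhead is exactly what the benchmarking advice below is about.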
Be sure to benchmark any results you get with regard to timing. The raw speed of single-threaded access can sometimes beat the overhead of threading; it may not be worth threading this code at all.
The final word, though, is that HashSet alone, without synchronization, is simply unacceptable for multithreaded operations.
I have a heavily threaded application with a ReadOnlyCollection as follows:
internal static ReadOnlyCollection<DistributorBackpressure44> DistributorBackpressure44Cache
{
    get
    {
        return _distributorBackpressure44;
    }
    set
    {
        _distributorBackpressure44 = value;
    }
}
I have one place in the app where this collection is replaced (always on a separate thread) and it looks like this:
CicApplication.DistributorBackpressure44Cache = new ReadOnlyCollection<DistributorBackpressure44>(someQueryResults.ToList());
I have many places in the code where this collection is accessed, usually via Linq queries, in many different threads. The code often looks something like this:
foreach (DistributorBackpressure44 distributorBackpressure44 in CicApplication.DistributorBackpressure44Cache.Where(row => row.Coater == coater && row.CoaterTime >= targetTime).ToList())
{
    ...
    ...
}
Am I right to assume that what I'm doing is thread-safe, without the need for any locking? What I'm not sure about is what happens with the query above if it runs at the exact same time the collection is being replaced on a different thread.
Reference assignments are atomic, so yes, it is thread safe, but only as long as you don't rely on the data being visible to readers the moment after it is written. Because of caching, a reader may see a stale reference for a while; you might want to mark the field volatile to prevent that.
See also reference assignment is atomic so why is Interlocked.Exchange(ref Object, Object) needed?.
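A minimal sketch of what that looks like here, assuming the backing field (not shown in the question) is static:

private static volatile ReadOnlyCollection<DistributorBackpressure44> _distributorBackpressure44;

internal static ReadOnlyCollection<DistributorBackpressure44> DistributorBackpressure44Cache
{
    get { return _distributorBackpressure44; }
    set { _distributorBackpressure44 = value; }
}

Each reader should also capture the reference once (for example, var cache = CicApplication.DistributorBackpressure44Cache;) and run its whole query against that local, so a concurrent swap cannot change the collection mid-query.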
It is pretty unlikely. At an entirely unpredictable moment in time, after the thread assigns the property, other threads are going to see the new collection. Before that, they'll read a stale value, which might be null or might be another collection with entirely different content.
The randomness is what will get you in trouble. Do note that this has nothing to do with whether or not the collection is ReadOnly. Maybe it is okay that the threads use a stale value, but that isn't very common. Most of all, you didn't mention it was okay, so you might not yet have considered the consequences. You will need to think this through. There isn't anything you can do about it yourself in the property getter and setter; the threads are going to have to negotiate between themselves. This is what makes threading hard and makes it very important that you thoroughly analyze the places in the code where mutable data is shared. Like this property.
It sounds like it won't be a problem in your case, but if the underlying collection that the ReadOnlyCollection is wrapping changes, you could run into problems.
For example, the following code chunk will throw an InvalidOperationException with the message "Collection was modified; enumeration operation may not execute" since the underlying List<int> has an item removed from it while it is being enumerated over on another thread.
var numbers = new List<int>() { 1, 2, 3, 4, 5 };
var readOnly = new ReadOnlyCollection<int>(numbers);

ThreadPool.QueueUserWorkItem(x =>
{
    foreach (int number in readOnly)
    {
        Console.WriteLine(number);
        Thread.Sleep(300);
    }
});

Thread.Sleep(150);
numbers.Remove(2);
I know this has been asked before (and I will keep researching), but I need to know how to make a particular linked list function in a thread safe manner. My current issue is that I have one thread that loops through all elements in a linked list, and another may add more elements to the end of this list. Sometimes it happens that the one thread tries to add another element to the list while the first is busy iterating through it (which causes an exception).
I was thinking of just adding a boolean flag to say that the list is currently being iterated through, but then how do I make the second thread check the flag and wait (it is OK if it waits, as the first thread runs pretty quickly)? The only way I can think of is a while loop that constantly checks the busy flag, but I realized that is a very dumb idea, as it would keep the CPU working hard while doing nothing useful. So now I am here to ask for better insight. I have read about locks and so on, but they do not seem relevant to my case, or perhaps I am wrong?
In the meanwhile I'll keep searching the internet and post back if I find a solution.
EDIT:
Let me know if I should post some code to clear things up, but I'll try and explain it more clearly.
So I have a class with a linked list in it that contains elements that require processing. I have one thread that iterates through this list through a function call (let's call it "processElements"). I have a second thread that adds elements to process in a non-deterministic manner. However, sometimes it tries to call this addElement function while processElements is running. This means that an element is being added to the linked list while it is being iterated through by the first thread, which is not allowed and causes an exception. Hope this clears it up.
I need the thread that adds new elements to yield until the processElements method is done executing.
To anyone stumbling on this problem: the accepted answer will give you a quick and easy solution, but check out Brian Gideon's answer below for a more comprehensive treatment, which will definitely give you more insight!
The exception is likely the result of having the collection changed in the middle of an iteration via IEnumerator. There are a few techniques you can use to maintain thread safety. I will present them in order of difficulty.
Lock Everything
This is by far the easiest and most trivial method for making access to the data structure thread-safe. This pattern works well when the number of read and write operations is roughly equal.
LinkedList<object> collection = new LinkedList<object>();

void Write()
{
    lock (collection)
    {
        collection.AddLast(GetSomeObject());
    }
}

void Read()
{
    lock (collection)
    {
        foreach (object item in collection)
        {
            DoSomething(item);
        }
    }
}
Copy-Read Pattern
This is a slightly more complex pattern. You will notice that a copy of the data structure is made prior to reading it. This pattern works well when read operations are few compared to writes and the penalty of the copy is relatively small.
LinkedList<object> collection = new LinkedList<object>();

void Write()
{
    lock (collection)
    {
        collection.AddLast(GetSomeObject());
    }
}

void Read()
{
    LinkedList<object> copy;
    lock (collection)
    {
        copy = new LinkedList<object>(collection);
    }
    foreach (object item in copy)
    {
        DoSomething(item);
    }
}
Copy-Modify-Swap Pattern
And finally we have the most complex and error-prone pattern. I actually do not recommend using this pattern unless you really know what you are doing. Any deviation from what I have below could lead to problems. It is easy to mess this one up. In fact, I have inadvertently screwed this one up myself in the past. You will notice that a copy of the data structure is made prior to all modifications. The copy is then modified, and finally the original reference is swapped out with the new instance. Basically we are always treating collection as if it were immutable. This pattern works well when write operations are few compared to reads and the penalty of the copy is relatively small.
object lockobj = new object();
volatile LinkedList<object> collection = new LinkedList<object>();

void Write()
{
    lock (lockobj)
    {
        var copy = new LinkedList<object>(collection);
        copy.AddLast(GetSomeObject());
        collection = copy;
    }
}

void Read()
{
    LinkedList<object> local = collection;
    foreach (object item in local)
    {
        DoSomething(item);
    }
}
Update:
So I posed two questions in the comment section:
Why lock(lockobj) instead of lock(collection) on the write side?
Why local = collection on the read side?
Concerning the first question consider how the C# compiler will expand the lock.
void Write()
{
    bool acquired = false;
    object temp = lockobj;
    try
    {
        Monitor.Enter(temp, ref acquired);
        var copy = new LinkedList<object>(collection);
        copy.AddLast(GetSomeObject());
        collection = copy;
    }
    finally
    {
        if (acquired) Monitor.Exit(temp);
    }
}
Now hopefully it is easier to see what can go wrong if we used collection as the lock expression.
1. Thread A executes object temp = collection.
2. Thread B executes collection = copy.
3. Thread C executes object temp = collection.
4. Thread A acquires the lock with the original reference.
5. Thread C acquires the lock with the new reference.
Clearly this would be disastrous! Writes would get lost because two threads could be inside the critical section at the same time.
Now the second question was a little tricky. You do not necessarily have to do this with the code I posted above. But, that is because I used the collection only once. Now consider the following code.
void Read()
{
    object x = collection.Last;
    // The collection may get swapped out right here.
    object y = collection.Last;
    if (x != y)
    {
        Console.WriteLine("It could happen!");
    }
}
The problem here is that collection could get swapped out at any time. This would be an incredibly difficult bug to find. This is why I always extract a local reference on the read side when doing this pattern. That ensures we are using the same collection for every read operation.
Again, because problems like these are so subtle I do not recommend using this pattern unless you really need to.
Here’s a quick example of how to use locks to synchronize your access to the list:
private readonly IList<string> elements = new List<string>();

public void ProcessElements()
{
    lock (this.elements)
    {
        foreach (string element in this.elements)
            ProcessElement(element);
    }
}

public void AddElement(string newElement)
{
    lock (this.elements)
    {
        this.elements.Add(newElement);
    }
}
A lock(o) statement means that the executing thread should acquire a mutual-exclusion lock on the object o, execute the statement block, and finally release the lock on o. If another thread attempts to acquire a lock on o concurrently (either for the same code block or for any other), then it will block (wait) until the lock is released.
Thus, the crucial point is that you use the same object for all the lock statements that you want to synchronize. The actual object you use may be arbitrary, as long as it is consistent. In the example above, we've declared our collection to be readonly, so we can safely use it as our lock. However, if this were not the case, you should lock on another object:
private IList<string> elements = new List<string>();
private readonly object syncLock = new object();

public void ProcessElements()
{
    lock (this.syncLock)
    {
        foreach (string element in this.elements)
            ProcessElement(element);
    }
}

public void AddElement(string newElement)
{
    lock (this.syncLock)
    {
        this.elements.Add(newElement);
    }
}
In my app I have a List of objects. I'm going to have a process (thread) running every few minutes that will update the values in this list. I'll have other processes (other threads) that will just read this data, and they may attempt to do so at the same time.
When the list is being updated, I don't want any other process to be able to read the data. However, I don't want the read-only processes to block each other when no updating is occurring. Finally, if a process is reading the data, the process that updates the data must wait until the process reading the data is finished.
What sort of locking should I implement to achieve this?
This is what you are looking for.
ReaderWriterLockSlim is a class that will handle exactly the scenario you have asked about.
You have two pairs of methods at your disposal:
EnterWriteLock and ExitWriteLock
EnterReadLock and ExitReadLock
The first pair will wait until all other locks, both read and write, are released, so it gives you exclusive access just like lock would.
The second pair is compatible with itself: you can have multiple read locks held at any given time.
Because there's no syntactic sugar like there is with the lock statement, make sure you never forget to exit the lock, whether because of an exception or anything else. So use it in a form like this (note that lock is a reserved word in C#, so the field is named rwLock here):
try
{
    rwLock.EnterWriteLock(); // or EnterReadLock
    // Your code here, which can possibly throw an exception.
}
finally
{
    rwLock.ExitWriteLock(); // or ExitReadLock
}
You don't make it clear whether the updates to the list will involve modification of existing objects, or adding/removing new ones - the answers in each case are different.
To handle modification of existing items in the list, each object should handle its own locking.
To allow modification of the list while others are iterating it, don't allow people direct access to the list; force them to work with a read-only copy of the list, like this:
public class Example<X>
{
    private readonly object padLock = new object();
    private readonly List<X> MasterList = new List<X>();

    public IEnumerable<X> GetReadOnlySnapshot()
    {
        lock (padLock)
        {
            // Copy inside the lock so readers get fixed content;
            // ReadOnlyCollection alone would only wrap the live list.
            return new ReadOnlyCollection<X>(new List<X>(MasterList));
        }
    }
}
Using a ReadOnlyCollection<X> over a copy of the master list ensures that readers can iterate through a list of fixed content, without blocking modifications made by writers.
You could use ReaderWriterLockSlim. It would satisfy your requirements precisely. However, it is likely to be slower than just using a plain old lock. The reason is that RWLS is roughly 2x slower than lock, and access to a List would be so fast that the difference would not be enough to overcome the additional overhead of the RWLS. Test both ways, but it is likely ReaderWriterLockSlim will be slower in your case. Reader-writer locks do better in scenarios where readers significantly outnumber writers and the guarded operations are long and drawn out.
However, let me present another option for you. One common pattern for dealing with this type of problem is to use two separate lists. One will serve as the official copy which can accept updates, and the other will serve as the read-only copy. After you update the official copy, you must clone it and swap out the reference for the read-only copy. This is elegant in that the readers require no blocking whatsoever. The reason why readers do not require any blocking type of synchronization is that we are treating the read-only copy as if it were immutable. Here is how it can be done.
public class Example
{
    private readonly List<object> m_Official;
    private volatile List<object> m_Readonly;

    public Example()
    {
        m_Official = new List<object>();
        m_Readonly = m_Official;
    }

    public void Update()
    {
        lock (m_Official)
        {
            // Modify the official copy here.
            m_Official.Add(...);
            m_Official.Remove(...);

            // Now clone the official copy.
            var clone = new List<object>(m_Official);

            // And finally swap out the read-only copy reference.
            m_Readonly = clone;
        }
    }

    public object Read(int index)
    {
        // It is safe to access the read-only copy here because it is immutable.
        // m_Readonly must be marked as volatile for this to work correctly.
        return m_Readonly[index];
    }
}
The code above would not satisfy your requirements precisely because readers never block...ever. Which means they will still be taking place while writers are updating the official list. But, in a lot of scenarios this winds up being acceptable.
First of all, sorry about the title -- I couldn't figure out one that was short and clear enough.
Here's the issue: I have a list List<MyClass> list to which I always add newly-created instances of MyClass, like this: list.Add(new MyClass()). I don't add elements any other way.
However, when I then iterate over the list with foreach, I find that there are some null entries. That is, the following code:
foreach (MyClass entry in list)
    if (entry == null)
        throw new Exception("null entry!");
will sometimes throw an exception.
I should point out that the list.Add(new MyClass()) calls are performed from different threads running concurrently. The only thing I can think of to account for the null entries is the concurrent access; List<> isn't thread-safe, after all. Though I still find it strange that it ends up containing null entries, instead of just not offering any guarantees on ordering.
Can you think of any other reason?
Also, I don't care in which order the items are added, and I don't want the calling threads to block waiting to add their items. If synchronization is truly the issue, can you recommend a simple way to call the Add method asynchronously, i.e., create a delegate that takes care of that while my thread keeps running its code? I know I can create a delegate for Add and call BeginInvoke on it. Does that seem appropriate?
Thanks.
EDIT: A simple solution based on Kevin's suggestion:
public class AsynchronousList<T> : List<T> {
    public delegate void AddDelegate(T item);

    private AddDelegate addDelegate;

    public AsynchronousList() {
        addDelegate = new AddDelegate(this.AddBlocking);
    }

    public void AddAsynchronous(T item) {
        addDelegate.BeginInvoke(item, null, null);
    }

    private void AddBlocking(T item) {
        lock (this) {
            Add(item);
        }
    }
}
I only need to control Add operations and I just need this for debugging (it won't be in the final product), so I just wanted a quick fix.
Thanks everyone for your answers.
List<T> can only support multiple readers concurrently. If you are going to use multiple threads to add to the list, you'll need to lock the object first. There is really no way around this, because without a lock you can still have someone reading from the list while another thread updates it (or multiple threads trying to update it concurrently).
http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx
Your best bet probably is to encapsulate the list in another object, and have that object handle the locking and unlocking actions on the internal list. That way you could make your new object's "Add" method asynchronous and let the calling objects go on their merry way. Any time you read from it, though, you'll most likely still have to wait for any other objects to finish their updates.
The only thing I can think of to account for the null entries is the concurrent accesses. List<> isn't thread-safe, after all.
That's basically it. We are specifically told it's not thread-safe, so we shouldn't be surprised that concurrent access results in contract-breaking behaviour.
As to why this specific problem occurs, we can but speculate, since List<>'s private implementation is, well, private (I know we have Reflector and Shared Source - but in principle it is private). Suppose the implementation involves an array and a 'last populated index'. Suppose also that 'Add an item' looks like this:
1. Ensure the array is big enough for another item.
2. last populated index <- last populated index + 1
3. array[last populated index] = incoming item
Now suppose there are two threads calling Add. If the interleaved sequence of operations ends up like this:
Thread A : last populated index <- last populated index + 1
Thread B : last populated index <- last populated index + 1
Thread A : array[last populated index] = incoming item
Thread B : array[last populated index] = incoming item
then not only will there be a null in the array, but also the item that thread A was trying to add won't be in the array at all!
Now, I don't know for sure how List<> does its stuff internally. I have half a memory that it is backed by an array, like ArrayList, which internally uses this scheme; but in fact it doesn't matter. I suspect that any list mechanism that expects to be run non-concurrently can be made to break with concurrent access and a sufficiently 'unlucky' interleaving of operations. If we want thread safety from an API that doesn't provide it, we have to do some work ourselves - or at least, we shouldn't be surprised if the API sometimes breaks its contract when we don't.
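To make that speculation concrete, here is a deliberately simplified, hypothetical model of such a non-thread-safe Add; it is not the real List<T> implementation, just the scheme sketched above:

// Two concurrent calls to Add can both increment _lastIndex before either
// writes its item: one slot stays null and one of the two items is lost.
public class NaiveList<T> where T : class
{
    private T[] _array = new T[16];
    private int _lastIndex = -1;

    public void Add(T item)
    {
        // 1. Ensure the array is big enough for another item.
        if (_lastIndex + 1 == _array.Length)
            Array.Resize(ref _array, _array.Length * 2);

        // 2. Advance the last populated index (the racy step).
        _lastIndex = _lastIndex + 1;

        // 3. Store the item (may land in a slot another thread also claimed).
        _array[_lastIndex] = item;
    }
}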
For your requirement of
I don't want the calling threads to block waiting to add their item
my first thought is a Multiple-Producer-Single-Consumer queue, wherein the threads wanting to add items are the producers, which dispatch items to the queue async, and there is a single consumer which takes items off the queue and adds them to the list with appropriate locking. My second thought is that this feels as if it would be heavier than this situation warrants, so I'll let it mull for a bit.
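For what it's worth, on .NET 4 such a multiple-producer-single-consumer setup can be sketched with BlockingCollection<T>; the class and member names here are illustrative assumptions, not code from the question:

// Producers call EnqueueAdd from any thread and never block on the list.
// A single consumer task drains the queue and is the only writer to the list.
public class ListWriter<T>
{
    private readonly List<T> _list = new List<T>();
    private readonly BlockingCollection<T> _pending = new BlockingCollection<T>();

    public ListWriter()
    {
        Task.Factory.StartNew(() =>
        {
            // Blocks until items arrive; ends when CompleteAdding is called.
            foreach (T item in _pending.GetConsumingEnumerable())
            {
                lock (_list)
                {
                    _list.Add(item);
                }
            }
        });
    }

    public void EnqueueAdd(T item)
    {
        _pending.Add(item); // cheap; does not contend on the list lock
    }

    public List<T> Snapshot()
    {
        lock (_list)
        {
            return new List<T>(_list); // readers work on a private copy
        }
    }
}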
If you're using .NET Framework 4, you might check out the new Concurrent Collections. When it comes to threading, it's better not to try to be clever, as it's extremely easy to get it wrong. Synchronization can impact performance, but the effects of getting threading wrong can also result in strange, infrequent errors that are a royal pain to track down.
If you're still using Framework 2 or 3.5 for this project, I recommend simply wrapping your calls to the list in a lock statement. If you're concerned about performance of Add (are you performing some long-running operation using the list somewhere else?) then you can always make a copy of the list within a lock and use that copy for your long-running operation outside the lock. Simply blocking on the Adds themselves shouldn't be a performance issue, unless you have a very large number of threads. If that's the case, you can try the Multiple-Producer-Single-Consumer queue that AakashM recommended.