Locking on a non-thread-safe object, is it acceptable practice? - c#

I got some grief about this in a comment I posted the other day, so I wanted to post the question in an attempt for people to tell me that I'm crazy, which I'll accept, or tell me that I may be right, which I'll also gladly accept. I may also accept anything in between.
Let's say you have a non-thread-safe object type such as Dictionary<int, string>. For the sake of argument, I know you can also use ConcurrentDictionary<int, string> which is thread safe, but I want to talk about the general practice around non-thread-safe objects in a multi-threaded environment.
Consider the following example:
private static readonly Dictionary<int, string> SomeDictionary = new Dictionary<int, string>();
private static readonly object LockObj = new object();

public static string GetById(int id)
{
    string result;

    /** Lock Bypass **/
    if (SomeDictionary.TryGetValue(id, out result))
    {
        return result;
    }

    lock (LockObj)
    {
        if (SomeDictionary.TryGetValue(id, out result))
        {
            return result;
        }

        SomeDictionary.Add(id, result = GetSomeString());
    }

    return result;
}
The locking pattern is called double-checked locking: the lock is bypassed entirely if the dictionary is already initialized with that id. The dictionary's Add method is called within the lock because we only want it called once; it throws an exception if you try to add an item with a key that already exists.
It was my understanding that this locking pattern essentially synchronizes the way the Dictionary is handled, which makes it thread safe. But I got some negative comments about how that doesn't actually make it thread safe.
So, my question is, is this locking pattern acceptable for non-thread-safe objects in a multi-threaded environment? If not, what would be a better pattern to use? (assuming there's not an identical C# type that is thread-safe)

No, this is not safe. The TryGetValue method simply isn't thread-safe, so you shouldn't use it without locking when the object is shared between multiple threads. The double-checked locking pattern involves just testing a reference, which, while it isn't guaranteed to give an up-to-date result, won't cause any other problems. Compare that with TryGetValue, which could do anything (e.g. throw an exception, corrupt the internal data structure) if called at the same time as, say, Add.
Personally I'd just use a lock, but you could potentially use ReaderWriterLockSlim. (In most cases a simple lock will be more efficient - but it depends on how long the reading and writing operations take, and what the contention is like.)
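For illustration, here's a minimal sketch of the ReaderWriterLockSlim variant, reusing the SomeDictionary and GetSomeString names from the question. An upgradeable read lock admits only one holder at a time, so no second check is needed before writing:
private static readonly ReaderWriterLockSlim RwLock = new ReaderWriterLockSlim();

public static string GetById(int id)
{
    RwLock.EnterUpgradeableReadLock();
    try
    {
        string result;
        if (SomeDictionary.TryGetValue(id, out result))
            return result;

        RwLock.EnterWriteLock(); // upgrade: waits for concurrent readers to drain
        try
        {
            SomeDictionary.Add(id, result = GetSomeString());
            return result;
        }
        finally { RwLock.ExitWriteLock(); }
    }
    finally { RwLock.ExitUpgradeableReadLock(); }
}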

This isn't safe, because a second thread can potentially read the value from SomeDictionary while the dictionary is in an inconsistent state.
Consider the following scenario:
Thread A attempts to get id 3. It doesn't exist, so it acquires the lock and calls Add, but is interrupted partway through the method.
Thread B attempts to get id 3. Thread A's call to Add has gotten far enough along that Thread B's first TryGetValue (outside the lock) returns, or attempts to return, true.
Now a variety of bad things could happen. It's possible that Thread B sees that TryGetValue return true, but the value that's returned is nonsensical because the real value hasn't actually been stored yet. The other possibility is that the Dictionary implementation realizes it's in an inconsistent state and throws InvalidOperationException. Or it might not throw and just continue with a corrupted internal state. Either way, bad mojo.

Just remove the first TryGetValue and you'll be fine.
/** Lock Bypass **/
if (SomeDictionary.TryGetValue(id, out result))
{
    return result;
}
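With that check gone, every access goes through the lock; a minimal sketch of the resulting method:
public static string GetById(int id)
{
    string result;
    lock (LockObj)
    {
        if (!SomeDictionary.TryGetValue(id, out result))
        {
            SomeDictionary.Add(id, result = GetSomeString());
        }
    }
    return result;
}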
Do not use ReaderWriterLock or ReaderWriterLockSlim unless you are doing less than 20% writes AND the work done inside the lock is significant enough that parallel reads matter. As an example, the following demonstrates that a simple lock() statement will outperform either reader/writer lock when the read/write operation is simple.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

internal class MutexOrRWLock
{
    private const int LIMIT = 1000000;
    private const int WRITE = 100; // write once every n reads

    private static void Main()
    {
        if (Environment.ProcessorCount < 8)
            throw new ApplicationException("You must have at least 8 cores.");
        Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(255); // pin the process to the first 8 CPUs

        Console.WriteLine("ReaderWriterLock");
        new RWLockTest().Test(3);
        Console.WriteLine("ReaderWriterLockSlim");
        new RWSlimTest().Test(3);
        Console.WriteLine("Mutex");
        new MutexTest().Test(3);
    }

    private class RWLockTest : MutexTest
    {
        private readonly ReaderWriterLock _lock1 = new ReaderWriterLock();
        protected override void BeginRead() { _lock1.AcquireReaderLock(-1); }
        protected override void EndRead() { _lock1.ReleaseReaderLock(); }
        protected override void BeginWrite() { _lock1.AcquireWriterLock(-1); }
        protected override void EndWrite() { _lock1.ReleaseWriterLock(); }
    }

    private class RWSlimTest : MutexTest
    {
        private readonly ReaderWriterLockSlim _lock1 = new ReaderWriterLockSlim();
        protected override void BeginRead() { _lock1.EnterReadLock(); }
        protected override void EndRead() { _lock1.ExitReadLock(); }
        protected override void BeginWrite() { _lock1.EnterWriteLock(); }
        protected override void EndWrite() { _lock1.ExitWriteLock(); }
    }

    private class MutexTest
    {
        private readonly ManualResetEvent start = new ManualResetEvent(false);
        private readonly Dictionary<int, int> _data = new Dictionary<int, int>();

        public void Test(int count)
        {
            for (int i = 0; i < count; i++)
            {
                _data.Clear();
                for (int val = 0; val < LIMIT; val += 3)
                    _data[val] = val;

                start.Reset();
                Thread[] threads = new Thread[8];
                for (int ti = 0; ti < 8; ti++)
                    (threads[ti] = new Thread(Work)).Start();

                Thread.Sleep(1000);
                Stopwatch sw = new Stopwatch();
                sw.Start();
                start.Set(); // release all worker threads at once
                foreach (Thread t in threads)
                    t.Join();
                sw.Stop();
                Console.WriteLine("Completed: {0}", sw.ElapsedMilliseconds);
            }
        }

        protected virtual void BeginRead() { Monitor.Enter(this); }
        protected virtual void EndRead() { Monitor.Exit(this); }
        protected virtual void BeginWrite() { Monitor.Enter(this); }
        protected virtual void EndWrite() { Monitor.Exit(this); }

        private void Work()
        {
            int val;
            Random r = new Random();
            start.WaitOne();
            for (int i = 0; i < LIMIT; i++)
            {
                if (i % WRITE == 0)
                {
                    BeginWrite();
                    _data[r.Next(LIMIT)] = i;
                    EndWrite();
                }
                else
                {
                    BeginRead();
                    _data.TryGetValue(i, out val);
                    EndRead();
                }
            }
        }
    }
}
The preceding program produces the following results on my PC:
ReaderWriterLock
Completed: 2412
Completed: 2385
Completed: 2422
ReaderWriterLockSlim
Completed: 1374
Completed: 1397
Completed: 1491
Mutex
Completed: 763
Completed: 750
Completed: 758

Related

Performance of ConcurrentBag, many reads, rare modifications

I'm trying to build a model where there will be multiple reads of an entire collection and rare additions and modifications to it.
I thought I might use the ConcurrentBag in .NET, as I've read the documentation and it's supposed to be good for concurrent reads and writes.
The code would look like this:
public class Cache
{
    ConcurrentBag<string> cache = new ConcurrentBag<string>();

    // this method gets called frequently
    public IEnumerable<string> GetAllEntries()
    {
        return cache.ToList();
    }

    // this method gets rarely called
    public void Add(string newEntry)
    {
        // add to concurrentBag
    }

    public void Remove(string entryToRemove)
    {
        // remove from concurrent bag
    }
}
However, I've decompiled the ConcurrentBag class, and in GetEnumerator a lock is always taken, which means any call to GetAllEntries will lock the entire collection and will not perform well.
To get around this, I'm thinking of coding it in the following manner instead, using a plain list:
public class Cache
{
    private object guard = new object();
    IList<string> cache = new List<string>();

    // this method gets called frequently
    public IEnumerable<string> GetAllEntries()
    {
        var currentCache = cache;
        return currentCache;
    }

    // this method gets rarely called
    public void Add(string newEntry)
    {
        lock (guard)
        {
            cache.Add(newEntry);
        }
    }

    public void Remove(string entryToRemove)
    {
        lock (guard)
        {
            cache.Remove(entryToRemove);
        }
    }
}
Since Add and Remove are rarely called, I don't care too much about locking access to the list there. On Get I might get a stale version of the list, but again I don't care; it will be fine for the next request.
Is the second implementation a good way to go?
EDIT
I've run a quick performance test and the results are the following:
Setup: populated the in-memory collection with 10000 strings.
Action: called GetAllEntries concurrently 50000 times.
Result:
00:00:35.2393871 to finish operation using ConcurrentBag (first implementation)
00:00:00.0036959 to finish operation using normal list (second implementation)
Code below:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        // warm up caches and stopwatch
        var cacheWitBag = new CacheWithBag();
        var cacheWitList = new CacheWithList();
        cacheWitBag.Add("abc");
        cacheWitBag.GetAllEntries();
        cacheWitList.Add("abc");
        cacheWitList.GetAllEntries();

        var sw = new Stopwatch();
        // warm up the stopwatch as well
        sw.Start();
        // initialize caches (rare writes, so no real reason to measure here)
        for (int i = 0; i < 50000; i++)
        {
            cacheWitBag.Add(Guid.NewGuid().ToString()); // Guid.NewGuid(): new Guid() would yield the same empty value every time
            cacheWitList.Add(Guid.NewGuid().ToString());
        }
        sw.Stop();

        // measure
        var program = new Program();
        sw.Restart(); // restart rather than resume, so the initialization time above is not counted
        program.Run(cacheWitBag).Wait();
        sw.Stop();
        Console.WriteLine(sw.Elapsed);

        sw.Restart();
        program.Run2(cacheWitList).Wait();
        sw.Stop();
        Console.WriteLine(sw.Elapsed);
    }

    public async Task Run(CacheWithBag cache1)
    {
        List<Task> tasks = new List<Task>();
        for (int i = 0; i < 10000; i++)
        {
            tasks.Add(Task.Run(() => cache1.GetAllEntries()));
        }
        await Task.WhenAll(tasks);
    }

    public async Task Run2(CacheWithList cache)
    {
        List<Task> tasks = new List<Task>();
        for (int i = 0; i < 10000; i++)
        {
            tasks.Add(Task.Run(() => cache.GetAllEntries()));
        }
        await Task.WhenAll(tasks);
    }

    public class CacheWithBag
    {
        ConcurrentBag<string> cache = new ConcurrentBag<string>();

        // this method gets called frequently
        public IEnumerable<string> GetAllEntries()
        {
            return cache.ToList();
        }

        // this method gets rarely called
        public void Add(string newEntry)
        {
            cache.Add(newEntry);
        }
    }

    public class CacheWithList
    {
        private object guard = new object();
        IList<string> cache = new List<string>();

        // this method gets called frequently
        public IEnumerable<string> GetAllEntries()
        {
            var currentCache = cache;
            return currentCache;
        }

        // this method gets rarely called
        public void Add(string newEntry)
        {
            lock (guard)
            {
                cache.Add(newEntry);
            }
        }

        public void Remove(string entryToRemove)
        {
            lock (guard)
            {
                cache.Remove(entryToRemove);
            }
        }
    }
}
To improve on InBetween's solution:
class Cache
{
    ImmutableHashSet<string> cache = ImmutableHashSet.Create<string>();

    public IEnumerable<string> GetAllEntries()
    {
        return cache;
    }

    public void Add(string newEntry)
    {
        ImmutableInterlocked.Update(ref cache, (set, item) => set.Add(item), newEntry);
    }

    public void Remove(string entryToRemove)
    {
        ImmutableInterlocked.Update(ref cache, (set, item) => set.Remove(item), entryToRemove);
    }
}
This performs only atomic operations (no locking) and uses the .NET Immutable types.
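A quick usage sketch (hypothetical driver code, not part of the original answer): concurrent writers race safely through ImmutableInterlocked, and readers always enumerate a consistent snapshot.
var cache = new Cache();
Parallel.For(0, 100, i => cache.Add("entry" + i)); // concurrent adds, no lock needed
foreach (var entry in cache.GetAllEntries())       // enumerates one immutable snapshot
    Console.WriteLine(entry);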
In your current scenario, where Add and Remove are rarely called, I'd consider the following approach:
public class Cache
{
    private object guard = new object();
    private SomeImmutableCollection<string> cache = new SomeImmutableCollection<string>();

    // this method gets called frequently
    public IEnumerable<string> GetAllEntries()
    {
        return cache;
    }

    // this method gets rarely called
    public void Add(string newEntry)
    {
        lock (guard)
        {
            cache = cache.Add(newEntry);
        }
    }

    public void Remove(string entryToRemove)
    {
        lock (guard)
        {
            cache = cache.Remove(entryToRemove);
        }
    }
}
The fundamental change here is that cache is now an immutable collection, which means it can never change. Concurrency problems with the collection itself simply disappear; something that can't change is inherently thread safe.
Also, depending on how rare calls to Add and Remove are, you could even consider removing the lock in both of them, because all it's doing now is avoiding a race between Add and Remove and the potential loss of a cache update. If that scenario is very improbable you could get away with it. That said, I very much doubt the few nanoseconds an uncontended lock takes are a relevant factor here ;)
SomeImmutableCollection stands for whichever of the collections found in System.Collections.Immutable best suits your needs.
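For instance, here is a minimal sketch with ImmutableList<string> filling in for SomeImmutableCollection (my substitution, not the answer's):
using System.Collections.Immutable;

public class Cache
{
    private readonly object guard = new object();
    private ImmutableList<string> cache = ImmutableList<string>.Empty;

    public IEnumerable<string> GetAllEntries()
    {
        return cache; // returns the current snapshot, which can never mutate
    }

    public void Add(string newEntry)
    {
        lock (guard)
        {
            cache = cache.Add(newEntry); // Add returns a new list; swap the reference
        }
    }

    public void Remove(string entryToRemove)
    {
        lock (guard)
        {
            cache = cache.Remove(entryToRemove);
        }
    }
}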
Instead of a lock on a guard object to protect a simple container, you should consider ReaderWriterLockSlim, which is optimized for the read/write scenario: multiple readers are allowed at the same time, but only one writer is allowed and it blocks other readers/writers. It is very useful in your scenario, where you read a lot but write only rarely.
Please note you can start as a reader and then, for some reason, decide to become a writer (upgrade the slim lock) in your "reading" code, as sketched below.
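A small sketch of that upgrade pattern (illustrative names, assuming a plain List<string> as the protected container):
private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
private readonly List<string> cache = new List<string>();

public void AddIfMissing(string entry)
{
    rwLock.EnterUpgradeableReadLock(); // a reader that may become a writer
    try
    {
        if (!cache.Contains(entry))
        {
            rwLock.EnterWriteLock(); // upgrade: excludes all other readers and writers
            try { cache.Add(entry); }
            finally { rwLock.ExitWriteLock(); }
        }
    }
    finally
    {
        rwLock.ExitUpgradeableReadLock();
    }
}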

WaitHandle.WaitAny that allows threads to enter orderly

I have a fixed number of "browsers", each of which is not thread safe so it must be used on a single thread. On the other hand, I have a long list of threads waiting to use these browsers. What I'm currently doing is have an AutoResetEvent array:
public readonly AutoResetEvent[] WaitHandles;
And initialize them like this:
WaitHandles = Enumerable.Range(0, Browsers.Count).Select(_ => new AutoResetEvent(true)).ToArray();
So I have one AutoResetEvent per browser, which allows me to retrieve a particular browser index for each thread:
public Context WaitForBrowser(int i)
{
    System.Diagnostics.Debug.WriteLine($">>> WILL WAIT: {i}");
    var index = WaitHandle.WaitAny(WaitHandles);
    System.Diagnostics.Debug.WriteLine($">>> ENTERED: {i}");
    return new Context(Browsers[index], WaitHandles[index]);
}
The i here is just the index of the thread waiting, since these threads are on a list and have a particular order. I'm just passing this for debugging purposes. Context is a disposable that then calls Set on the wait handle when disposed.
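A hypothetical sketch of what that Context type might look like (the Browser type and member names are stand-ins, not from the question):
public sealed class Context : IDisposable
{
    private readonly AutoResetEvent handle;

    public Browser Browser { get; }

    public Context(Browser browser, AutoResetEvent handle)
    {
        Browser = browser;
        this.handle = handle;
    }

    public void Dispose()
    {
        handle.Set(); // frees this browser's slot for the next waiting thread
    }
}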
When I look at my output, I see all my ">>> WILL WAIT: {i}" messages in the right order, since the calls to WaitForBrowser are made sequentially, but my ">>> ENTERED: {i}" messages are in random order (except for the first few), so threads are not entering in the same order they arrive at the var index = WaitHandle.WaitAny(WaitHandles); line.
So my question is: is there any way to modify this so that threads enter in the same order the WaitForBrowser method is called (such that ">>> ENTERED: {i}" messages are also ordered)?
Since there doesn't seem to be an out-of-the-box solution, I ended up using a modified version of this solution:
public class SemaphoreQueueItem<T> : IDisposable
{
    private bool Disposed;
    private readonly EventWaitHandle WaitHandle;
    public readonly T Resource;

    public SemaphoreQueueItem(EventWaitHandle waitHandle, T resource)
    {
        WaitHandle = waitHandle;
        Resource = resource;
    }

    public void Dispose()
    {
        if (!Disposed)
        {
            Disposed = true;
            WaitHandle.Set();
        }
    }
}

public class SemaphoreQueue<T> : IDisposable
{
    private readonly T[] Resources;
    private readonly AutoResetEvent[] WaitHandles;
    private bool Disposed;
    private ConcurrentQueue<TaskCompletionSource<SemaphoreQueueItem<T>>> Queue =
        new ConcurrentQueue<TaskCompletionSource<SemaphoreQueueItem<T>>>();

    public SemaphoreQueue(T[] resources)
    {
        Resources = resources;
        WaitHandles = Enumerable.Range(0, resources.Length).Select(_ => new AutoResetEvent(true)).ToArray();
    }

    public SemaphoreQueueItem<T> Wait(CancellationToken cancellationToken)
    {
        return WaitAsync(cancellationToken).Result;
    }

    public Task<SemaphoreQueueItem<T>> WaitAsync(CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource<SemaphoreQueueItem<T>>();
        Queue.Enqueue(tcs);
        Task.Run(() => WaitHandle.WaitAny(WaitHandles.Concat(new[] { cancellationToken.WaitHandle }).ToArray())).ContinueWith(task =>
        {
            if (Queue.TryDequeue(out var popped))
            {
                var index = task.Result;
                if (cancellationToken.IsCancellationRequested)
                    popped.SetResult(null);
                else
                    popped.SetResult(new SemaphoreQueueItem<T>(WaitHandles[index], Resources[index]));
            }
        });
        return tcs.Task;
    }

    public void Dispose()
    {
        if (!Disposed)
        {
            foreach (var handle in WaitHandles)
                handle.Dispose();
            Disposed = true;
        }
    }
}
Have you considered using a Semaphore instead of an array of AutoResetEvents?
Problem with order of waiting threads (for semaphore) was discussed here:
Guaranteed semaphore order?
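For reference, a minimal sketch of the semaphore-based alternative (BrowserPool and Browser are illustrative names; note that neither Semaphore nor SemaphoreSlim guarantees FIFO wake-up order, which is exactly what the linked question discusses):
public sealed class BrowserPool
{
    private readonly SemaphoreSlim slots;
    private readonly ConcurrentQueue<Browser> free;

    public BrowserPool(IReadOnlyList<Browser> browsers)
    {
        slots = new SemaphoreSlim(browsers.Count, browsers.Count);
        free = new ConcurrentQueue<Browser>(browsers);
    }

    public Browser Acquire()
    {
        slots.Wait();               // blocks until some browser is free
        free.TryDequeue(out var b); // always succeeds: the semaphore count tracks the queue count
        return b;
    }

    public void Release(Browser b)
    {
        free.Enqueue(b);
        slots.Release();
    }
}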

What is the most performant way to make the results of a cached computation thread-safe?

(Apologies if this was answered elsewhere; it seems like it would be a common problem, but it turns out to be hard to search for since terms like "threading" and "cache" produce overwhelming results.)
I have an expensive computation whose result is accessed frequently but changes infrequently, so I cache the resulting value. Here's some C# pseudocode of what I mean:
int? _cachedResult = null;

int GetComputationResult()
{
    if (_cachedResult == null)
    {
        // Do the expensive computation.
        _cachedResult = /* Result of expensive computation. */;
    }
    return _cachedResult.Value;
}
Elsewhere in my code, I will occasionally set _cachedResult back to null because the input to the computation has changed and thus the cached result is no longer valid and needs to be re-computed. (Which means I can't use Lazy<T> since Lazy<T> doesn't support being reset.)
This works fine for single-threaded scenarios, but of course it's not at all thread-safe. So my question is: What is the most performant way to make GetComputationResult thread-safe?
Obviously I could just put the whole thing in a lock() block, but I suspect there might be a better way? (Something that would do an atomic check to see if the result needs to be recomputed and only lock if it does?)
Thanks a lot!
You can use the double-checked locking pattern:
// Thread-safe (uses double-checked locking pattern for performance)
public class Memoized<T>
{
    Func<T> _compute;
    volatile bool _cached;
    volatile bool _startedCaching;
    volatile StrongBox<T> _cachedResult; // Need a reference type
    object _cacheSyncRoot = new object();

    public Memoized(Func<T> compute)
    {
        _compute = compute;
    }

    public T Value
    {
        get
        {
            if (_cached) // Fast path
                return _cachedResult.Value;

            lock (_cacheSyncRoot)
            {
                if (!_cached)
                {
                    _startedCaching = true;
                    _cachedResult = new StrongBox<T>(_compute());
                    _cached = true;
                }
            }
            return _cachedResult.Value;
        }
    }

    public void Invalidate()
    {
        if (!_startedCaching)
        {
            // Fast path: already invalidated
            Thread.MemoryBarrier(); // need to release
            if (!_startedCaching)
                return;
        }
        lock (_cacheSyncRoot)
            _cached = _startedCaching = false;
    }
}
This particular implementation matches your description of what it should do in corner cases: If the cache has been invalidated, the value should only be computed once, by a single thread, and other threads should wait. However, if the cache is invalidated concurrently with the cached value being accessed, the stale cached value may be returned.
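A hypothetical usage sketch of the class above (ExpensiveComputation is a stand-in, not from the answer):
var memo = new Memoized<int>(() => ExpensiveComputation());
int a = memo.Value;   // computed once, under the lock
int b = memo.Value;   // served by the fast path, no lock taken
memo.Invalidate();    // the next read of memo.Value recomputes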
Perhaps this will provide some food for thought :)
It's a generic class.
The class can compute data asynchronously or synchronously.
It allows fast reads thanks to the spinlock.
It does no heavy work inside the spinlock, just returning the Task and, if necessary, creating and starting the Task on the default TaskScheduler to avoid inlining.
A Task combined with a spinlock is a pretty powerful combination that can solve some problems in a lock-free way.
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Example
{
    class OftenReadSometimesUpdate<T>
    {
        private Task<T> result_task = null;
        private SpinLock spin_lock = new SpinLock(false);

        private TResult LockedFunc<TResult>(Func<TResult> locked_func)
        {
            TResult t_result = default(TResult);
            bool gotLock = false;
            if (locked_func == null) return t_result;
            try
            {
                spin_lock.Enter(ref gotLock);
                t_result = locked_func();
            }
            finally
            {
                if (gotLock) spin_lock.Exit();
                gotLock = false;
            }
            return t_result;
        }

        public Task<T> GetComputationAsync()
        {
            return LockedFunc(GetComputationTaskLocked);
        }

        public T GetComputationResult()
        {
            return LockedFunc(GetComputationTaskLocked).Result;
        }

        public OftenReadSometimesUpdate<T> InvalidateComputationResult()
        {
            return this.LockedFunc(InvalidateComputationResultLocked);
        }

        public OftenReadSometimesUpdate<T> InvalidateComputationResultLocked()
        {
            result_task = null;
            return this;
        }

        private Task<T> GetComputationTaskLocked()
        {
            if (result_task == null)
            {
                result_task = new Task<T>(HeavyComputation);
                result_task.Start(TaskScheduler.Default);
            }
            return result_task;
        }

        protected virtual T HeavyComputation()
        {
            // a heavy computation
            return default(T); // return some result of the computation
        }
    }
}
You could simply reassign the Lazy<T> to achieve a reset:
Lazy<int> lazyResult = new Lazy<int>(GetComputationResult);

public int Result { get { return lazyResult.Value; } }

public void Reset()
{
    lazyResult = new Lazy<int>(GetComputationResult);
}
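One caveat worth adding (my note, not the answer's): reference assignment is atomic, but for a reset to become promptly visible to other threads the field can be marked volatile, assuming GetComputationResult is accessible from a field initializer as in the snippet above:
private volatile Lazy<int> lazyResult = new Lazy<int>(GetComputationResult);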

producer-consumer with a resource

I'm trying to implement the producer/consumer pattern with a set of resources, so each thread has one resource associated with it. For example, I may have a queue of tasks where each task requires a StreamWriter to write its result. Each task also has to have parameters passed to it.
I started with Joseph Albahari's implementation (see below for my modified version).
I replaced the queue of Action with a queue of Action<T>, where T is the resource, and pass the resource associated with the thread to the Action. But this leaves me with the problem of how to pass parameters to the Action. Obviously the Action must be replaced with a delegate, but this leaves the problem of how to pass parameters when tasks are enqueued (from outside the ProducerConsumerQueue class). Any ideas on how to do this?
class ProducerConsumerQueue<T>
{
    readonly object _locker = new object();
    Thread[] _workers;
    Queue<Action<T>> _itemQ = new Queue<Action<T>>();

    public ProducerConsumerQueue(T[] resources)
    {
        _workers = new Thread[resources.Length];
        // Create and start a separate thread for each worker
        for (int i = 0; i < resources.Length; i++)
        {
            int index = i; // copy the loop variable so each closure captures its own value
            Thread thread = new Thread(() => Consume(resources[index]));
            thread.SetApartmentState(ApartmentState.STA);
            _workers[i] = thread;
            _workers[i].Start();
        }
    }

    public void Shutdown(bool waitForWorkers)
    {
        // Enqueue one null item per worker to make each exit.
        foreach (Thread worker in _workers)
            EnqueueItem(null);

        // Wait for workers to finish
        if (waitForWorkers)
            foreach (Thread worker in _workers)
                worker.Join();
    }

    public void EnqueueItem(Action<T> item)
    {
        lock (_locker)
        {
            _itemQ.Enqueue(item);   // We must pulse because we're
            Monitor.Pulse(_locker); // changing a blocking condition.
        }
    }

    void Consume(T parameter)
    {
        while (true) // Keep consuming until
        {            // told otherwise.
            Action<T> item;
            lock (_locker)
            {
                while (_itemQ.Count == 0) Monitor.Wait(_locker);
                item = _itemQ.Dequeue();
            }
            if (item == null) return; // This signals our exit.
            item(parameter);          // Execute item.
        }
    }
}
The type T in ProducerConsumerQueue<T> doesn't have to be your resource; it can be a composite type that contains your resource. With .NET 4 the easiest way to do this is with Tuple<StreamWriter, YourParameterType>. The producer/consumer queue just eats and spits out T, so in your Action<T> you can use properties to get at the resource and the parameter. If you are using Tuple, you would use Item1 to get the resource and Item2 to get the parameter.
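For instance, a sketch of an action unpacking such a composite (assuming string as the parameter type):
Action<Tuple<StreamWriter, string>> work = t =>
{
    StreamWriter writer = t.Item1; // the resource
    string arg = t.Item2;          // the parameter
    writer.WriteLine(arg);
};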
If you are not using .NET 4, the process is similar, but you just create your own class:
public class WorkItem<T>
{
    private StreamWriter resource;
    private T parameter;

    public WorkItem(StreamWriter resource, T parameter)
    {
        this.resource = resource;
        this.parameter = parameter;
    }

    public StreamWriter Resource { get { return resource; } }
    public T Parameter { get { return parameter; } }
}
In fact, making it generic may be over-designing for your situation. You can just define T to be the type you want it to be.
Also, for reference, .NET 4 includes new ways to do multi-threading that may be applicable to your use case, such as the concurrent collections and the Task Parallel Library. These can also be combined with traditional approaches such as semaphores.
Edit:
Continuing with this approach, here is a small sample class that demonstrates using:
a semaphore to control access to a limited resource
a concurrent queue to manage that resource safely between threads
task management using the Task Parallel Library
Here is the Processor class:
public class Processor
{
    private const int count = 3;
    private ConcurrentQueue<StreamWriter> queue = new ConcurrentQueue<StreamWriter>();
    private Semaphore semaphore = new Semaphore(count, count);

    public Processor()
    {
        // Populate the resource queue.
        for (int i = 0; i < count; i++)
            queue.Enqueue(new StreamWriter("sample" + i));
    }

    public void Process(int parameter)
    {
        // Wait for one of our resources to become free.
        semaphore.WaitOne();
        StreamWriter resource;
        queue.TryDequeue(out resource);
        // Dispatch the work to a task.
        Task.Factory.StartNew(() => Process(resource, parameter));
    }

    private Random random = new Random();

    private void Process(StreamWriter resource, int parameter)
    {
        // Do work in background with the resource.
        Thread.Sleep(random.Next(10) * 100);
        resource.WriteLine("Parameter = {0}", parameter);
        queue.Enqueue(resource);
        semaphore.Release();
    }
}
and now we can use the class like this:
var processor = new Processor();
for (int i = 0; i < 10; i++)
    processor.Process(i);
and no more than three tasks will be scheduled at the same time, each with its own StreamWriter resource, which is recycled.

Better way to handle read-only access to state with another thread?

This is a design question, not a bug fix problem.
The situation is this: I have a lot of collections and objects contained in one class, and their contents are only changed by a single message handler thread. There is one other thread doing the rendering. Each frame it iterates through some of these collections and draws to the screen based on the values of those objects. It does not alter the objects in any way; it is just reading their values.
Now, when the rendering is being done, if any of the collections are altered, my foreach loops in the rendering method fail. How should I make this thread safe? Edit: so I have to lock the collections outside each foreach loop I run on them. This works, but it seems like a lot of repetitive code to solve this problem.
As a short, contrived example:
class State
{
    public object LockObjects = new object(); // must be initialized, or lock() throws
    public List<object> Objects = new List<object>();

    // Called by message handler thread
    void HandleMessage()
    {
        lock (LockObjects)
        {
            Objects.Add(new object());
        }
    }
}

class Renderer
{
    State m_state;

    // Called by rendering thread
    void Render()
    {
        lock (m_state.LockObjects)
        {
            foreach (var obj in m_state.Objects)
            {
                DrawObject(obj);
            }
        }
    }
}
This is all well and good, but I'd rather not put locks on all my state collections if there's a better way. Is this "the right" way to do it or is there a better way?
The better way is to use begin/end methods and separate lists for both of your threads, synchronized using auto-reset events, for example. This keeps your message handler thread nearly lock-free (it only briefly locks a small cache list) and lets you have many render/message handler threads:
class State : IDisposable
{
    private List<object> _objects;
    private ReaderWriterLockSlim _locker;
    private object _cacheLocker;
    private List<object> _objectsCache;
    private Thread _synchronizeThread;
    private AutoResetEvent _synchronizationEvent;
    private volatile bool _abortThreadToken; // volatile: written and read by different threads

    public State()
    {
        _objects = new List<object>();
        _objectsCache = new List<object>();
        _cacheLocker = new object();
        _locker = new ReaderWriterLockSlim();
        _synchronizationEvent = new AutoResetEvent(false);
        _abortThreadToken = false;
        _synchronizeThread = new Thread(Synchronize);
        _synchronizeThread.Start();
    }

    private void Synchronize()
    {
        while (!_abortThreadToken)
        {
            _synchronizationEvent.WaitOne();
            int objectsCacheCount;
            lock (_cacheLocker)
            {
                objectsCacheCount = _objectsCache.Count;
            }
            if (objectsCacheCount > 0)
            {
                _locker.EnterWriteLock();
                lock (_cacheLocker)
                {
                    _objects.AddRange(_objectsCache);
                    _objectsCache.Clear();
                }
                _locker.ExitWriteLock();
            }
        }
    }

    public IEnumerator<object> GetEnumerator()
    {
        _locker.EnterReadLock();
        foreach (var o in _objects)
        {
            yield return o;
        }
        _locker.ExitReadLock();
    }

    // Called by message handler thread
    public void HandleMessage()
    {
        lock (_cacheLocker)
        {
            _objectsCache.Add(new object());
        }
        _synchronizationEvent.Set();
    }

    public void Dispose()
    {
        _abortThreadToken = true;
        _synchronizationEvent.Set();
    }
}
Or (the simpler way) you can use ReaderWriterLockSlim (or just locks, if you are sure you have only one reader) as in the following code:
class State
{
    List<object> m_objects = new List<object>();
    ReaderWriterLockSlim locker = new ReaderWriterLockSlim();

    public IEnumerator<object> GetEnumerator()
    {
        locker.EnterReadLock();
        foreach (var o in Objects)
        {
            yield return o;
        }
        locker.ExitReadLock();
    }

    private List<object> Objects
    {
        get { return m_objects; }
        set { m_objects = value; }
    }

    // Called by message handler thread
    public void HandleMessage()
    {
        locker.EnterWriteLock();
        Objects.Add(new object());
        locker.ExitWriteLock();
    }
}
Hmm... have you tried a ReaderWriterLockSlim? Enclose each collection with one of these, and ensure you start a read or write operation each time you access it.
