WaitHandle.WaitAny that allows threads to enter in order - C#

I have a fixed number of "browsers", each of which is not thread-safe, so each must be used on a single thread. On the other hand, I have a long list of threads waiting to use these browsers. What I'm currently doing is keeping an AutoResetEvent array:
public readonly AutoResetEvent[] WaitHandles;
And initialize them like this:
WaitHandles = Enumerable.Range(0, Browsers.Count).Select(_ => new AutoResetEvent(true)).ToArray();
So I have one AutoResetEvent per browser, which allows me to retrieve a particular browser index for each thread:
public Context WaitForBrowser(int i)
{
System.Diagnostics.Debug.WriteLine($">>> WILL WAIT: {i}");
var index = WaitHandle.WaitAny(WaitHandles);
System.Diagnostics.Debug.WriteLine($">>> ENTERED: {i}");
return new Context(Browsers[index], WaitHandles[index]);
}
The i here is just the index of the waiting thread; the threads are kept in a list and have a particular order, and I'm passing the index purely for debugging. Context is a disposable that calls Set on the wait handle when disposed.
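A minimal sketch of Context, assuming a Browser type for the non-thread-safe resource:
public sealed class Context : IDisposable
{
    public Browser Browser { get; }             // the non-thread-safe resource
    private readonly AutoResetEvent waitHandle;
    private bool disposed;

    public Context(Browser browser, AutoResetEvent waitHandle)
    {
        Browser = browser;
        this.waitHandle = waitHandle;
    }

    public void Dispose()
    {
        if (disposed) return;
        disposed = true;
        waitHandle.Set(); // release the browser slot back to the pool
    }
}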
When I look at my Output I see all my ">>> WILL WAIT: {i}" messages in the right order, since the calls to WaitForBrowser are made sequentially, but my ">>> ENTERED: {i}" messages are in random order (except for the first few), so the threads are not entering in the same order they arrive at the var index = WaitHandle.WaitAny(WaitHandles); line.
So my question is: is there any way to modify this so that threads enter in the same order the WaitForBrowser method is called (such that the ">>> ENTERED: {i}" messages are also ordered)?

Since there doesn't seem to be an out-of-the-box solution, I ended up using a modified version of this solution:
public class SemaphoreQueueItem<T> : IDisposable
{
private bool Disposed;
private readonly EventWaitHandle WaitHandle;
public readonly T Resource;
public SemaphoreQueueItem(EventWaitHandle waitHandle, T resource)
{
WaitHandle = waitHandle;
Resource = resource;
}
public void Dispose()
{
if (!Disposed)
{
Disposed = true;
WaitHandle.Set();
}
}
}
public class SemaphoreQueue<T> : IDisposable
{
private readonly T[] Resources;
private readonly AutoResetEvent[] WaitHandles;
private bool Disposed;
private ConcurrentQueue<TaskCompletionSource<SemaphoreQueueItem<T>>> Queue = new ConcurrentQueue<TaskCompletionSource<SemaphoreQueueItem<T>>>();
public SemaphoreQueue(T[] resources)
{
Resources = resources;
WaitHandles = Enumerable.Range(0, resources.Length).Select(_ => new AutoResetEvent(true)).ToArray();
}
public SemaphoreQueueItem<T> Wait(CancellationToken cancellationToken)
{
return WaitAsync(cancellationToken).Result;
}
public Task<SemaphoreQueueItem<T>> WaitAsync(CancellationToken cancellationToken)
{
var tcs = new TaskCompletionSource<SemaphoreQueueItem<T>>();
Queue.Enqueue(tcs);
Task.Run(() => WaitHandle.WaitAny(WaitHandles.Concat(new[] { cancellationToken.WaitHandle }).ToArray())).ContinueWith(task =>
{
if (Queue.TryDequeue(out var popped))
{
var index = task.Result;
if (cancellationToken.IsCancellationRequested)
popped.SetResult(null);
else
popped.SetResult(new SemaphoreQueueItem<T>(WaitHandles[index], Resources[index]));
}
});
return tcs.Task;
}
public void Dispose()
{
if (!Disposed)
{
foreach (var handle in WaitHandles)
handle.Dispose();
Disposed = true;
}
}
}

Have you considered using a Semaphore instead of an array of AutoResetEvents?
The problem of the waiting threads' order (for a semaphore) was discussed here:
Guaranteed semaphore order?
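The gist of that approach is to restore FIFO order yourself: keep a queue of TaskCompletionSource objects next to the semaphore, and let whichever waiter wakes up complete the oldest pending one. A minimal sketch of the idea (the class and member names here are illustrative, not from the linked answer):
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class FifoSemaphore
{
    private readonly SemaphoreSlim semaphore;
    private readonly ConcurrentQueue<TaskCompletionSource<bool>> queue =
        new ConcurrentQueue<TaskCompletionSource<bool>>();

    public FifoSemaphore(int initialCount)
    {
        semaphore = new SemaphoreSlim(initialCount);
    }

    public Task WaitAsync()
    {
        var tcs = new TaskCompletionSource<bool>();
        queue.Enqueue(tcs);
        // The semaphore wakes waiters in arbitrary order, but each wake-up
        // completes the oldest queued waiter, so callers proceed in FIFO order.
        semaphore.WaitAsync().ContinueWith(_ =>
        {
            if (queue.TryDequeue(out var popped))
                popped.SetResult(true);
        });
        return tcs.Task;
    }

    public void Release()
    {
        semaphore.Release();
    }
}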


BlockingCollection where the consumers are also producers

I have a bunch of requests to process, and during the processing of those requests, more "sub-requests" can be generated and added to the same blocking collection. The consumers add sub-requests to the queue.
It's hard to know when to exit the consuming loop: clearly no thread can call BlockingCollection.CompleteAdding as the other threads may add something to the collection. You also cannot exit the consuming loop just because the BlockingCollection is empty as another thread may have just read the final remaining request from the BlockingCollection and will be about to start generating more requests - the Count of the BlockingCollection will then increase from zero again.
My only idea on this so far is to use a Barrier - when all threads reach the Barrier, there can't be anything left in the BlockingCollection and no thread can be generating new requests. Here is my code - is this an acceptable approach? (And please note: this is a highly contrived block of code modelling a much more complex situation; no programmer really writes code that processes random strings 😊)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Collections.Concurrent;
using System.Threading;
namespace Barrier1
{
class Program
{
private static readonly Random random = new Random();
private static void Main()
{
var bc = new BlockingCollection<string>();
AddRandomStringsToBc(bc, 1000, true);
int nTasks = 4;
var barrier = new Barrier(nTasks);
Action a = () => DoSomething(bc, barrier);
var actions = Enumerable.Range(0, nTasks).Select(x => a).ToArray();
Parallel.Invoke(actions);
}
private static IEnumerable<char> GetC(bool includeA)
{
var startChar = includeA ? 'A' : 'B';
var add = includeA ? 24 : 25;
while (true)
{
yield return (char)(startChar + random.Next(add));
}
}
private static void DoSomething(BlockingCollection<string> bc, Barrier barrier)
{
while (true)
{
if (bc.TryTake(out var str))
{
Console.WriteLine(str);
if (str[0] == 'A')
{
Console.WriteLine("Adding more strings...");
AddRandomStringsToBc(bc, 100);
}
}
else
{
// Can't exit the loop here just because there is nothing in the collection.
// A different thread may be just about to call AddRandomStringsToBc:
if (barrier.SignalAndWait(100))
{
break;
}
}
}
}
private static void AddRandomStringsToBc(BlockingCollection<string> bc, int n, bool startWithA = false, bool sleep = false)
{
var collection = Enumerable.Range(0, n).Select(x => string.Join("", GetC(startWithA).Take(5)));
foreach (var c in collection)
{
bc.Add(c);
}
}
}
}
Here is a collection similar to the BlockingCollection<T>, with the difference that it completes automatically instead of relying on manually calling the CompleteAdding method. The condition for the automatic completion is that the collection is empty, and all the consumers are in a waiting state.
The implementation is based on your clever idea of using a Barrier as a mechanism for checking the auto-complete condition. It's not perfect because it relies on polling, which takes place when the collection becomes empty while some consumers are still active. On the other hand, it allows exploiting all the existing functionality of the BlockingCollection<T> class, instead of rewriting it from scratch:
/// <summary>
/// A blocking collection that completes automatically when it's empty, and all
/// consuming enumerables are in a waiting state.
/// </summary>
public class AutoCompleteBlockingCollection<T> : IEnumerable<T>, IDisposable
{
private readonly BlockingCollection<T> _queue;
private readonly Barrier _barrier;
private volatile bool _autoCompleteStarted;
private volatile int _intervalMilliseconds = 500;
public AutoCompleteBlockingCollection(int boundedCapacity = -1)
{
_queue = boundedCapacity == -1 ? new() : new(boundedCapacity);
_barrier = new(0, _ => _queue.CompleteAdding());
}
public int Count => _queue.Count;
public int BoundedCapacity => _queue.BoundedCapacity;
public bool IsAddingCompleted => _queue.IsAddingCompleted;
public bool IsCompleted => _queue.IsCompleted;
/// <summary>
/// Begin observing the condition for automatic completion.
/// </summary>
public void BeginObservingAutoComplete() => _autoCompleteStarted = true;
/// <summary>
/// Gets or sets how frequently to check for the auto-complete condition.
/// </summary>
public TimeSpan CheckAutoCompleteInterval
{
get { return TimeSpan.FromMilliseconds(_intervalMilliseconds); }
set
{
int milliseconds = checked((int)value.TotalMilliseconds);
if (milliseconds < 0) throw new ArgumentOutOfRangeException();
_intervalMilliseconds = milliseconds;
}
}
public void Add(T item, CancellationToken cancellationToken = default)
=> _queue.Add(item, cancellationToken);
public bool TryAdd(T item) => _queue.TryAdd(item);
public IEnumerable<T> GetConsumingEnumerable(
CancellationToken cancellationToken = default)
{
_barrier.AddParticipant();
try
{
while (true)
{
if (!_autoCompleteStarted)
{
if (_queue.TryTake(out var item, _intervalMilliseconds,
cancellationToken))
yield return item;
}
else
{
if (_queue.TryTake(out var item, 0, cancellationToken))
yield return item;
else if (_barrier.SignalAndWait(_intervalMilliseconds,
cancellationToken))
break;
}
}
}
finally { _barrier.RemoveParticipant(); }
}
IEnumerator<T> IEnumerable<T>.GetEnumerator()
=> ((IEnumerable<T>)_queue).GetEnumerator();
IEnumerator IEnumerable.GetEnumerator()
=> ((IEnumerable<T>)_queue).GetEnumerator();
public void Dispose() { _barrier.Dispose(); _queue.Dispose(); }
}
The BeginObservingAutoComplete method should be called after adding the initial items in the collection. Before calling this method, the auto-complete condition is not checked.
The CheckAutoCompleteInterval is 500 milliseconds by default, and it can be configured at any time.
The Take and TryTake methods are missing on purpose. The collection is intended to be consumed via the GetConsumingEnumerable method. This way the collection keeps track of the currently subscribed consumers, in order to know when to auto-complete. Consumers can be added and removed at any time. A consumer can be removed by exiting the foreach loop, either by break/return etc, or by an exception.
Usage example:
private static void Main()
{
var bc = new AutoCompleteBlockingCollection<string>();
AddRandomStringsToBc(bc, 1000, true);
bc.BeginObservingAutoComplete();
Action action = () => DoSomething(bc);
var actions = Enumerable.Repeat(action, 4).ToArray();
Parallel.Invoke(actions);
}
private static void DoSomething(AutoCompleteBlockingCollection<string> bc)
{
foreach (var str in bc.GetConsumingEnumerable())
{
Console.WriteLine(str);
if (str[0] == 'A')
{
Console.WriteLine("Adding more strings...");
AddRandomStringsToBc(bc, 100);
}
}
}
The collection is thread-safe, with the exception of the Dispose method.

What should be locked for the same string? [duplicate]

I'm attempting to figure out an issue that has been raised with my ImageProcessor library here where I am getting intermittent file access errors when adding items to the cache.
System.IO.IOException: The process cannot access the file 'D:\home\site\wwwroot\app_data\cache\0\6\5\f\2\7\065f27fc2c8e843443d210a1e84d1ea28bbab6c4.webp' because it is being used by another process.
I wrote a class designed to perform an asynchronous lock based upon a key generated by a hashed url but it seems I have missed something in the implementation.
My locking class
public sealed class AsyncDuplicateLock
{
/// <summary>
/// The collection of semaphore slims.
/// </summary>
private static readonly ConcurrentDictionary<object, SemaphoreSlim> SemaphoreSlims
= new ConcurrentDictionary<object, SemaphoreSlim>();
/// <summary>
/// Locks against the given key.
/// </summary>
/// <param name="key">
/// The key that identifies the current object.
/// </param>
/// <returns>
/// The disposable <see cref="Task"/>.
/// </returns>
public IDisposable Lock(object key)
{
DisposableScope releaser = new DisposableScope(
key,
s =>
{
SemaphoreSlim locker;
if (SemaphoreSlims.TryRemove(s, out locker))
{
locker.Release();
locker.Dispose();
}
});
SemaphoreSlim semaphore = SemaphoreSlims.GetOrAdd(key, new SemaphoreSlim(1, 1));
semaphore.Wait();
return releaser;
}
/// <summary>
/// Asynchronously locks against the given key.
/// </summary>
/// <param name="key">
/// The key that identifies the current object.
/// </param>
/// <returns>
/// The disposable <see cref="Task"/>.
/// </returns>
public Task<IDisposable> LockAsync(object key)
{
DisposableScope releaser = new DisposableScope(
key,
s =>
{
SemaphoreSlim locker;
if (SemaphoreSlims.TryRemove(s, out locker))
{
locker.Release();
locker.Dispose();
}
});
Task<IDisposable> releaserTask = Task.FromResult(releaser as IDisposable);
SemaphoreSlim semaphore = SemaphoreSlims.GetOrAdd(key, new SemaphoreSlim(1, 1));
Task waitTask = semaphore.WaitAsync();
return waitTask.IsCompleted
? releaserTask
: waitTask.ContinueWith(
(_, r) => (IDisposable)r,
releaser,
CancellationToken.None,
TaskContinuationOptions.ExecuteSynchronously,
TaskScheduler.Default);
}
/// <summary>
/// The disposable scope.
/// </summary>
private sealed class DisposableScope : IDisposable
{
/// <summary>
/// The key
/// </summary>
private readonly object key;
/// <summary>
/// The close scope action.
/// </summary>
private readonly Action<object> closeScopeAction;
/// <summary>
/// Initializes a new instance of the <see cref="DisposableScope"/> class.
/// </summary>
/// <param name="key">
/// The key.
/// </param>
/// <param name="closeScopeAction">
/// The close scope action.
/// </param>
public DisposableScope(object key, Action<object> closeScopeAction)
{
this.key = key;
this.closeScopeAction = closeScopeAction;
}
/// <summary>
/// Disposes the scope.
/// </summary>
public void Dispose()
{
this.closeScopeAction(this.key);
}
}
}
Usage - within a HttpModule
private readonly AsyncDuplicateLock locker = new AsyncDuplicateLock();
using (await this.locker.LockAsync(cachedPath))
{
// Process and save a cached image.
}
Can anyone spot where I have gone wrong? I'm worried that I am misunderstanding something fundamental.
The full source for the library is stored on Github here
As the other answerer noted, the original code is removing the SemaphoreSlim from the ConcurrentDictionary before it releases the semaphore. So, you've got too much semaphore churn going on - they're being removed from the dictionary when they could still be in use (not acquired, but already retrieved from the dictionary).
The problem with this kind of "mapping lock" is that it's difficult to know when the semaphore is no longer necessary. One option is to never dispose the semaphores at all; that's the easy solution, but may not be acceptable in your scenario. Another option - if the semaphores are actually related to object instances and not values (like strings) - is to attach them using ephemerons; however, I believe this option would also not be acceptable in your scenario.
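(For reference, .NET exposes ephemerons through ConditionalWeakTable<TKey, TValue>; a minimal sketch of that second option, which only works for reference-type keys with identity semantics:)
using System.Runtime.CompilerServices;
using System.Threading;

public static class InstanceLocks
{
    private static readonly ConditionalWeakTable<object, SemaphoreSlim> Semaphores =
        new ConditionalWeakTable<object, SemaphoreSlim>();

    // The table holds the semaphore only as long as the key instance is alive,
    // so the semaphores are collected by the GC and never need explicit removal.
    public static SemaphoreSlim For(object instance) =>
        Semaphores.GetValue(instance, _ => new SemaphoreSlim(1, 1));
}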
So, we do it the hard way. :)
There are a few different approaches that would work. I think it makes sense to approach it from a reference-counting perspective (reference-counting each semaphore in the dictionary). Also, we want to make the decrement-count-and-remove operation atomic, so I just use a single lock (making the concurrent dictionary superfluous):
public sealed class AsyncDuplicateLock
{
private sealed class RefCounted<T>
{
public RefCounted(T value)
{
RefCount = 1;
Value = value;
}
public int RefCount { get; set; }
public T Value { get; private set; }
}
private static readonly Dictionary<object, RefCounted<SemaphoreSlim>> SemaphoreSlims
= new Dictionary<object, RefCounted<SemaphoreSlim>>();
private SemaphoreSlim GetOrCreate(object key)
{
RefCounted<SemaphoreSlim> item;
lock (SemaphoreSlims)
{
if (SemaphoreSlims.TryGetValue(key, out item))
{
++item.RefCount;
}
else
{
item = new RefCounted<SemaphoreSlim>(new SemaphoreSlim(1, 1));
SemaphoreSlims[key] = item;
}
}
return item.Value;
}
public IDisposable Lock(object key)
{
GetOrCreate(key).Wait();
return new Releaser { Key = key };
}
public async Task<IDisposable> LockAsync(object key)
{
await GetOrCreate(key).WaitAsync().ConfigureAwait(false);
return new Releaser { Key = key };
}
private sealed class Releaser : IDisposable
{
public object Key { get; set; }
public void Dispose()
{
RefCounted<SemaphoreSlim> item;
lock (SemaphoreSlims)
{
item = SemaphoreSlims[Key];
--item.RefCount;
if (item.RefCount == 0)
SemaphoreSlims.Remove(Key);
}
item.Value.Release();
}
}
}
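Usage is the same as with the original class:
private readonly AsyncDuplicateLock locker = new AsyncDuplicateLock();

using (await this.locker.LockAsync(cachedPath))
{
    // Process and save a cached image.
}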
Here is a KeyedLock class that is less convenient and more error prone, but also less allocatey than Stephen Cleary's AsyncDuplicateLock. It maintains internally a pool of SemaphoreSlims, that can be reused by any key after they are released by the previous key. The capacity of the pool is configurable, and by default is 10.
This class is not allocation-free, because the SemaphoreSlim class allocates memory (quite a lot actually) every time the semaphore cannot be acquired synchronously because of contention.
The lock can be requested both synchronously and asynchronously, and can also be requested with cancellation and timeout. These features are provided by exploiting the existing functionality of the SemaphoreSlim class.
public class KeyedLock<TKey>
{
private readonly Dictionary<TKey, (SemaphoreSlim, int)> _perKey;
private readonly Stack<SemaphoreSlim> _pool;
private readonly int _poolCapacity;
public KeyedLock(IEqualityComparer<TKey> keyComparer = null, int poolCapacity = 10)
{
_perKey = new Dictionary<TKey, (SemaphoreSlim, int)>(keyComparer);
_pool = new Stack<SemaphoreSlim>(poolCapacity);
_poolCapacity = poolCapacity;
}
public async Task<bool> WaitAsync(TKey key, int millisecondsTimeout,
CancellationToken cancellationToken = default)
{
var semaphore = GetSemaphore(key);
bool entered = false;
try
{
entered = await semaphore.WaitAsync(millisecondsTimeout,
cancellationToken).ConfigureAwait(false);
}
finally { if (!entered) ReleaseSemaphore(key, entered: false); }
return entered;
}
public Task WaitAsync(TKey key, CancellationToken cancellationToken = default)
=> WaitAsync(key, Timeout.Infinite, cancellationToken);
public bool Wait(TKey key, int millisecondsTimeout,
CancellationToken cancellationToken = default)
{
var semaphore = GetSemaphore(key);
bool entered = false;
try { entered = semaphore.Wait(millisecondsTimeout, cancellationToken); }
finally { if (!entered) ReleaseSemaphore(key, entered: false); }
return entered;
}
public void Wait(TKey key, CancellationToken cancellationToken = default)
=> Wait(key, Timeout.Infinite, cancellationToken);
public void Release(TKey key) => ReleaseSemaphore(key, entered: true);
private SemaphoreSlim GetSemaphore(TKey key)
{
SemaphoreSlim semaphore;
lock (_perKey)
{
if (_perKey.TryGetValue(key, out var entry))
{
int counter;
(semaphore, counter) = entry;
_perKey[key] = (semaphore, ++counter);
}
else
{
lock (_pool) semaphore = _pool.Count > 0 ? _pool.Pop() : null;
if (semaphore == null) semaphore = new SemaphoreSlim(1, 1);
_perKey[key] = (semaphore, 1);
}
}
return semaphore;
}
private void ReleaseSemaphore(TKey key, bool entered)
{
SemaphoreSlim semaphore; int counter;
lock (_perKey)
{
if (_perKey.TryGetValue(key, out var entry))
{
(semaphore, counter) = entry;
counter--;
if (counter == 0)
_perKey.Remove(key);
else
_perKey[key] = (semaphore, counter);
}
else
{
throw new InvalidOperationException("Key not found.");
}
}
if (entered) semaphore.Release();
if (counter == 0)
{
Debug.Assert(semaphore.CurrentCount == 1);
lock (_pool) if (_pool.Count < _poolCapacity) _pool.Push(semaphore);
}
}
}
Usage example:
var locker = new KeyedLock<string>();
await locker.WaitAsync("Hello");
try
{
await DoSomethingAsync();
}
finally
{
locker.Release("Hello");
}
The implementation uses tuple deconstruction, which requires C# 7 or later.
The KeyedLock class could easily be modified to become a KeyedSemaphore, which would allow more than one concurrent operation per key. It would just need a maximumConcurrencyPerKey parameter in the constructor, which would be stored and passed to the constructor of the SemaphoreSlims.
Note: when misused, the SemaphoreSlim class throws a SemaphoreFullException. This happens when a semaphore is released more times than it has been acquired. The KeyedLock implementation in this answer behaves differently in case of misuse: it throws an InvalidOperationException("Key not found."). This happens because when a key has been released as many times as it has been acquired, the associated semaphore is removed from the dictionary. If this implementation ever throws a SemaphoreFullException, it would be an indication of a bug.
I wrote a library called AsyncKeyedLock to fix this common problem. The library currently supports using it with the type object (so you can mix different types together) or using generics to get a more efficient solution. It allows for timeouts, cancellation tokens, and also pooling so as to reduce allocations. Underlying it uses a ConcurrentDictionary and also allows for setting the initial capacity and concurrency for this dictionary.
I have benchmarked this against the other solutions provided here, and it is more efficient in terms of speed, memory usage (allocations), and scalability (internally it uses the more scalable ConcurrentDictionary). It is used in a number of production systems and by a number of popular libraries.
The source code is available on GitHub and packaged on NuGet.
The approach here is basically to use a ConcurrentDictionary to store an IDisposable object that holds a counter and a SemaphoreSlim. Once the counter reaches 0, the object is removed from the dictionary and either disposed or returned to the pool (if pooling is used). Monitor is used to lock this object while the counter is being incremented or decremented.
Usage example:
var locker = new AsyncKeyedLocker<string>(o =>
{
o.PoolSize = 20;
o.PoolInitialFill = 1;
});
string key = "my key";
// asynchronous code
using (await locker.LockAsync(key, cancellationToken))
{
...
}
// synchronous code
using (locker.Lock(key))
{
...
}
Download from NuGet.
For a given key,
Thread 1 calls GetOrAdd and adds a new semaphore and acquires it via Wait
Thread 2 calls GetOrAdd and gets the existing semaphore and blocks on Wait
Thread 1 releases the semaphore, only after having called TryRemove, which removed the semaphore from the dictionary
Thread 2 now acquires the semaphore.
Thread 3 calls GetOrAdd for the same key as thread 1 and 2. Thread 2 is still holding the semaphore, but the semaphore is not in the dictionary, so thread 3 creates a new semaphore and both threads 2 and 3 access the same protected resource.
You need to adjust your logic. The semaphore should only be removed from the dictionary when it has no waiters.
Here is one potential solution, minus the async part:
public sealed class AsyncDuplicateLock
{
private class LockInfo
{
private SemaphoreSlim sem;
private int waiterCount;
public LockInfo()
{
sem = null;
waiterCount = 1;
}
// Lazily create the semaphore
private SemaphoreSlim Semaphore
{
get
{
var s = sem;
if (s == null)
{
s = new SemaphoreSlim(0, 1);
var original = Interlocked.CompareExchange(ref sem, s, null);
// If someone else already created a semaphore, return that one
if (original != null)
return original;
}
return s;
}
}
// Returns true if successful
public bool Enter()
{
if (Interlocked.Increment(ref waiterCount) > 1)
{
Semaphore.Wait();
return true;
}
return false;
}
// Returns true if this lock info is now ready for removal
public bool Exit()
{
if (Interlocked.Decrement(ref waiterCount) <= 0)
return true;
// There was another waiter
Semaphore.Release();
return false;
}
}
private static readonly ConcurrentDictionary<object, LockInfo> activeLocks = new ConcurrentDictionary<object, LockInfo>();
public static IDisposable Lock(object key)
{
// Get the current info or create a new one
var info = activeLocks.AddOrUpdate(key,
(k) => new LockInfo(),
(k, v) => v.Enter() ? v : new LockInfo());
DisposableScope releaser = new DisposableScope(() =>
{
if (info.Exit())
{
// Only remove this exact info, in case another thread has
// already put its own info into the dictionary
((ICollection<KeyValuePair<object, LockInfo>>)activeLocks)
.Remove(new KeyValuePair<object, LockInfo>(key, info));
}
});
return releaser;
}
private sealed class DisposableScope : IDisposable
{
private readonly Action closeScopeAction;
public DisposableScope(Action closeScopeAction)
{
this.closeScopeAction = closeScopeAction;
}
public void Dispose()
{
this.closeScopeAction();
}
}
}
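Note that Lock is static in this version, so usage would look like this:
using (AsyncDuplicateLock.Lock(cachedPath))
{
    // Only one thread per distinct key executes here at a time.
}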
I rewrote Stephen Cleary's answer like this:
public sealed class AsyncLockList {
readonly Dictionary<object, SemaphoreReferenceCount> Semaphores = new Dictionary<object, SemaphoreReferenceCount>();
SemaphoreSlim GetOrCreateSemaphore(object key) {
lock (Semaphores) {
if (Semaphores.TryGetValue(key, out var item)) {
item.IncrementCount();
} else {
item = new SemaphoreReferenceCount();
Semaphores[key] = item;
}
return item.Semaphore;
}
}
public IDisposable Lock(object key) {
GetOrCreateSemaphore(key).Wait();
return new Releaser(Semaphores, key);
}
public async Task<IDisposable> LockAsync(object key) {
await GetOrCreateSemaphore(key).WaitAsync().ConfigureAwait(false);
return new Releaser(Semaphores, key);
}
sealed class SemaphoreReferenceCount {
public readonly SemaphoreSlim Semaphore = new SemaphoreSlim(1, 1);
public int Count { get; private set; } = 1;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void IncrementCount() => Count++;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void DecrementCount() => Count--;
}
sealed class Releaser : IDisposable {
readonly Dictionary<object, SemaphoreReferenceCount> Semaphores;
readonly object Key;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public Releaser(Dictionary<object, SemaphoreReferenceCount> semaphores, object key) {
Semaphores = semaphores;
Key = key;
}
public void Dispose() {
lock (Semaphores) {
var item = Semaphores[Key];
item.DecrementCount();
if (item.Count == 0)
Semaphores.Remove(Key);
item.Semaphore.Release();
}
}
}
}
Inspired by this previous answer, here is a version that supports async wait:
public class KeyedLock<TKey>
{
private readonly ConcurrentDictionary<TKey, LockInfo> _locks = new();
public int Count => _locks.Count;
public async Task<IDisposable> WaitAsync(TKey key, CancellationToken cancellationToken = default)
{
// Get the current info or create a new one.
var info = _locks.AddOrUpdate(key,
// Add
k => new LockInfo(),
// Update
(k, v) => v.Enter() ? v : new LockInfo());
try
{
await info.Semaphore.WaitAsync(cancellationToken);
return new Releaser(() => Release(key, info, true));
}
catch (OperationCanceledException)
{
// The semaphore wait was cancelled, release the lock.
Release(key, info, false);
throw;
}
}
private void Release(TKey key, LockInfo info, bool isCurrentlyLocked)
{
if (info.Leave())
{
// This was the last lock for the key.
// Only remove this exact info, in case another thread has
// already put its own info into the dictionary
// Note that this call to Remove(entry) is in fact thread safe.
var entry = new KeyValuePair<TKey, LockInfo>(key, info);
if (((ICollection<KeyValuePair<TKey, LockInfo>>)_locks).Remove(entry))
{
// This exact info was removed.
info.Dispose();
}
}
else if (isCurrentlyLocked)
{
// There is another waiter.
info.Semaphore.Release();
}
}
private class LockInfo : IDisposable
{
private SemaphoreSlim _semaphore = null;
private int _refCount = 1;
public SemaphoreSlim Semaphore
{
get
{
// Lazily create the semaphore.
var s = _semaphore;
if (s is null)
{
s = new SemaphoreSlim(1, 1);
// Assign _semaphore if its current value is null.
var original = Interlocked.CompareExchange(ref _semaphore, s, null);
// If someone else already created a semaphore, return that one
if (original is not null)
{
s.Dispose();
return original;
}
}
return s;
}
}
// Returns true if successful
public bool Enter()
{
if (Interlocked.Increment(ref _refCount) > 1)
{
return true;
}
// This lock info is not valid anymore - its semaphore is or will be disposed.
return false;
}
// Returns true if this lock info is now ready for removal
public bool Leave()
{
if (Interlocked.Decrement(ref _refCount) <= 0)
{
// This was the last lock
return true;
}
// There is another waiter
return false;
}
public void Dispose() => _semaphore?.Dispose();
}
private sealed class Releaser : IDisposable
{
private readonly Action _dispose;
public Releaser(Action dispose) => _dispose = dispose;
public void Dispose() => _dispose();
}
}
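A usage sketch:
var locks = new KeyedLock<string>();

using (await locks.WaitAsync(cachedPath))
{
    // At most one caller per key runs here; disposing the releaser exits the lock.
}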

Processing data by multiple threads simultaneously

We have an application that regularly receives multimedia messages, and should reply to them.
We currently do this with a single thread, first receiving messages, and then processing them one by one. This does the job, but is slow.
So we're now thinking of doing the same process but with multiple threads simultaneously.
Any simple way to allow parallel processing of the incoming records, yet avoid erroneously processing the same record by two threads?
Yes, it's actually not too hard: what you want to do is called the "producer-consumer model".
If your message receiver can only handle one thread at a time but your message "processor" can work on multiple messages at once, you just need a BlockingCollection to store the work that needs to be processed:
public sealed class MessageProcessor : IDisposable
{
public MessageProcessor()
: this(-1)
{
}
public MessageProcessor(int maxThreadsForProcessing)
{
_maxThreadsForProcessing = maxThreadsForProcessing;
_messages = new BlockingCollection<Message>();
_cts = new CancellationTokenSource();
_messageProcessorThread = new Thread(ProcessMessages);
_messageProcessorThread.IsBackground = true;
_messageProcessorThread.Name = "Message Processor Thread";
_messageProcessorThread.Start();
}
public int MaxThreadsForProcessing
{
get { return _maxThreadsForProcessing; }
}
private readonly BlockingCollection<Message> _messages;
private readonly CancellationTokenSource _cts;
private readonly Thread _messageProcessorThread;
private bool _disposed = false;
private readonly int _maxThreadsForProcessing;
/// <summary>
/// Add a new message to be queued up and processed in the background.
/// </summary>
public void ReceiveMessage(Message message)
{
_messages.Add(message);
}
/// <summary>
/// Signals the system to stop processing messages.
/// </summary>
/// <param name="finishQueue">Should the queue of messages waiting to be processed be allowed to finish</param>
public void Stop(bool finishQueue)
{
_messages.CompleteAdding();
if(!finishQueue)
_cts.Cancel();
//Wait for the message processor thread to finish its work.
_messageProcessorThread.Join();
}
/// <summary>
/// The background thread that processes messages in the system
/// </summary>
private void ProcessMessages()
{
try
{
Parallel.ForEach(_messages.GetConsumingEnumerable(),
new ParallelOptions()
{
CancellationToken = _cts.Token,
MaxDegreeOfParallelism = MaxThreadsForProcessing
},
ProcessMessage);
}
catch (OperationCanceledException)
{
//Don't care that it happened, just don't want it to bubble up as an unhandled exception.
}
}
private void ProcessMessage(Message message, ParallelLoopState loopState)
{
//Here be dragons! (or your code to process a message, your choice :-))
//Use if(_cts.Token.IsCancellationRequested || loopState.ShouldExitCurrentIteration) to test if
// we should quit out of the function early for a graceful shutdown.
}
public void Dispose()
{
if(!_disposed)
{
if(_cts != null && _messages != null && _messageProcessorThread != null)
Stop(true); //This line will block till all queued messages have been processed, if you want it to be quicker you need to call `Stop(false)` before you dispose the object.
if(_cts != null)
_cts.Dispose();
if(_messages != null)
_messages.Dispose();
GC.SuppressFinalize(this);
_disposed = true;
}
}
~MessageProcessor()
{
//Nothing to do, just making FXCop happy.
}
}
I highly recommend you read the free book Patterns for Parallel Programming; it goes into some detail about this, and there is an entire section explaining the producer-consumer model in detail.
UPDATE: There are some performance issues with combining GetConsumingEnumerable() and Parallel.ForEach; instead, use the ParallelExtensionsExtras library and its GetConsumingPartitioner() extension method:
public static Partitioner<T> GetConsumingPartitioner<T>(
this BlockingCollection<T> collection)
{
return new BlockingCollectionPartitioner<T>(collection);
}
private class BlockingCollectionPartitioner<T> : Partitioner<T>
{
private BlockingCollection<T> _collection;
internal BlockingCollectionPartitioner(
BlockingCollection<T> collection)
{
if (collection == null)
throw new ArgumentNullException("collection");
_collection = collection;
}
public override bool SupportsDynamicPartitions {
get { return true; }
}
public override IList<IEnumerator<T>> GetPartitions(
int partitionCount)
{
if (partitionCount < 1)
throw new ArgumentOutOfRangeException("partitionCount");
var dynamicPartitioner = GetDynamicPartitions();
return Enumerable.Range(0, partitionCount).Select(_ =>
dynamicPartitioner.GetEnumerator()).ToArray();
}
public override IEnumerable<T> GetDynamicPartitions()
{
return _collection.GetConsumingEnumerable();
}
}
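With that in place, only the Parallel.ForEach call inside ProcessMessages needs to change; a sketch:
Parallel.ForEach(_messages.GetConsumingPartitioner(),
    new ParallelOptions()
    {
        CancellationToken = _cts.Token,
        MaxDegreeOfParallelism = MaxThreadsForProcessing
    },
    ProcessMessage);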

producer-consumer with a resource

I'm trying to implement the producer/consumer pattern with a set of resources, so each thread has one resource associated with it. For example, I may have a queue of tasks where each task requires a StreamWriter to write its result. Each task also has to have parameters passed to it.
I started with Joseph Albahari's implementation (see below for my modified version).
I replaced the queue of Action with a queue of Action<T> where T is the resource, and pass the resource associated with the thread to the Action. But, this leaves me with the problem of how to pass parameters to the Action. Obviously, the Action must be replaced with a delegate but this leaves the problem of how to pass parameters when tasks are enqueued (from outside the ProducerConsumerQueue class). Any ideas on how to do this?
class ProducerConsumerQueue<T>
{
readonly object _locker = new object();
Thread[] _workers;
Queue<Action<T>> _itemQ = new Queue<Action<T>>();
public ProducerConsumerQueue(T[] resources)
{
_workers = new Thread[resources.Length];
// Create and start a separate thread for each worker
for (int i = 0; i < resources.Length; i++)
{
var resource = resources[i]; // capture a copy: the lambda must not close over the loop variable i
Thread thread = new Thread(() => Consume(resource));
thread.SetApartmentState(ApartmentState.STA);
_workers[i] = thread;
_workers[i].Start();
}
}
public void Shutdown(bool waitForWorkers)
{
// Enqueue one null item per worker to make each exit.
foreach (Thread worker in _workers)
EnqueueItem(null);
// Wait for workers to finish
if (waitForWorkers)
foreach (Thread worker in _workers)
worker.Join();
}
public void EnqueueItem(Action<T> item)
{
lock (_locker)
{
_itemQ.Enqueue(item); // We must pulse because we're
Monitor.Pulse(_locker); // changing a blocking condition.
}
}
void Consume(T parameter)
{
while (true) // Keep consuming until
{ // told otherwise.
Action<T> item;
lock (_locker)
{
while (_itemQ.Count == 0) Monitor.Wait(_locker);
item = _itemQ.Dequeue();
}
if (item == null) return; // This signals our exit.
item(parameter); // Execute item.
}
}
}
The type T in ProducerConsumerQueue<T> doesn't have to be your resource; it can be a composite type that contains your resource. With .NET 4 the easiest way to do this is with Tuple<StreamWriter, YourParameterType>. The producer/consumer queue just eats and spits out T, so in your Action<T> you can use properties to get the resource and the parameter. If you are using Tuple, you would use Item1 to get the resource and Item2 to get the parameter.
If you are not using .NET 4, the process is similar, but you just create your own class:
public class WorkItem<T>
{
private StreamWriter resource;
private T parameter;
public WorkItem(StreamWriter resource, T parameter)
{
this.resource = resource;
this.parameter = parameter;
}
public StreamWriter Resource { get { return resource; } }
public T Parameter { get { return parameter; } }
}
In fact, making it generic may be overdesigning for your situation. You can just define T to be the type you want it to be.
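Note also that, since the queue already takes an Action<T>, per-task parameters can simply be captured in the closure at enqueue time. A sketch, where writers is an assumed StreamWriter[] of per-thread resources:
var queue = new ProducerConsumerQueue<StreamWriter>(writers);
int parameter = 42; // the per-task data, baked into the closure
queue.EnqueueItem(writer => writer.WriteLine("Parameter = {0}", parameter));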
Also, for reference, there are new ways to do multi-threading included in .NET 4 that may be applicable to your use case, such as concurrent queues and the Task Parallel Library. They can also be combined with traditional approaches such as semaphores.
Edit:
Continuing with this approach, here is a small sample class that demonstrates using:
a semaphore to control access to a limited resource
a concurrent queue to manage that resource safely between threads
task management using the Task Parallel Library
Here is the Processor class:
public class Processor
{
private const int count = 3;
private ConcurrentQueue<StreamWriter> queue = new ConcurrentQueue<StreamWriter>();
private Semaphore semaphore = new Semaphore(count, count);
public Processor()
{
// Populate the resource queue.
for (int i = 0; i < count; i++) queue.Enqueue(new StreamWriter("sample" + i));
}
public void Process(int parameter)
{
// Wait for one of our resources to become free.
semaphore.WaitOne();
StreamWriter resource;
queue.TryDequeue(out resource);
// Dispatch the work to a task.
Task.Factory.StartNew(() => Process(resource, parameter));
}
private Random random = new Random();
private void Process(StreamWriter resource, int parameter)
{
// Do work in background with resource.
Thread.Sleep(random.Next(10) * 100);
resource.WriteLine("Parameter = {0}", parameter);
queue.Enqueue(resource);
semaphore.Release();
}
}
and now we can use the class like this:
var processor = new Processor();
for (int i = 0; i < 10; i++)
processor.Process(i);
and no more than three tasks will be scheduled at the same time, each with their own StreamWriter resource which is recycled.

AutoResetEvent not blocking properly

I have a thread, which creates a variable number of worker threads and distributes tasks between them. This is solved by passing the threads a TaskQueue object, whose implementation you will see below.
These worker threads simply iterate over the TaskQueue object they were given, executing each task.
private class TaskQueue : IEnumerable<Task>
{
public int Count
{
get
{
lock(this.tasks)
{
return this.tasks.Count;
}
}
}
private readonly Queue<Task> tasks = new Queue<Task>();
private readonly AutoResetEvent taskWaitHandle = new AutoResetEvent(false);
private bool isFinishing = false;
private bool isFinished = false;
public void Enqueue(Task task)
{
Log.Trace("Entering Enqueue, lock...");
lock(this.tasks)
{
Log.Trace("Adding task, current count = {0}...", Count);
this.tasks.Enqueue(task);
if (Count == 1)
{
Log.Trace("Count = 1, so setting the wait handle...");
this.taskWaitHandle.Set();
}
}
Log.Trace("Exiting enqueue...");
}
public Task Dequeue()
{
Log.Trace("Entering Dequeue...");
if (Count == 0)
{
if (this.isFinishing)
{
Log.Trace("Finishing (before waiting) - isCompleted set, returning empty task.");
this.isFinished = true;
return new Task();
}
Log.Trace("Count = 0, lets wait for a task...");
this.taskWaitHandle.WaitOne();
Log.Trace("Wait handle let us through, Count = {0}, IsFinishing = {1}, Returned = {2}", Count, this.isFinishing);
if(this.isFinishing)
{
Log.Trace("Finishing - isCompleted set, returning empty task.");
this.isFinished = true;
return new Task();
}
}
Log.Trace("Entering task lock...");
lock(this.tasks)
{
Log.Trace("Entered task lock, about to dequeue next item, Count = {0}", Count);
return this.tasks.Dequeue();
}
}
public void Finish()
{
Log.Trace("Setting TaskQueue state to isFinishing = true and setting wait handle...");
this.isFinishing = true;
if (Count == 0)
{
this.taskWaitHandle.Set();
}
}
public IEnumerator<Task> GetEnumerator()
{
while(true)
{
Task t = Dequeue();
if(this.isFinished)
{
yield break;
}
yield return t;
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
As you can see, I'm using an AutoResetEvent object to make sure that the worker threads don't exit prematurely, i.e. before getting any tasks.
In a nutshell:
the main thread assigns a task to a thread by Enqeueue-ing a task to its TaskQueue
the main thread notifies the thread that there are no more tasks to execute by calling the TaskQueue's Finish() method
the worker thread retrieves the next task assigned to it by calling the TaskQueue's Dequeue() method
The problem is that the Dequeue() method often throws an InvalidOperationException, saying that the Queue is empty. As you can see I added some logging, and it turns out, that the AutoResetEvent doesn't block the Dequeue(), even though there were no calls to its Set() method.
As I understand it, calling AutoResetEvent.Set() will allow a waiting thread to proceed (who previously called AutoResetEvent.WaitOne()), and then automatically calls AutoResetEvent.Reset(), blocking the next waiter.
So what can be wrong? Did I get something wrong? Do I have an error somewhere?
I've been sitting on this for 3 hours now, but I cannot figure out what's wrong.
Please help me!
Thank you very much!
Your dequeue code is incorrect. You check the Count under the lock, then fly by the seat of your pants, and then you expect the queue to still have something. You cannot retain assumptions made under a lock after you release it :). Your Count check and tasks.Dequeue must occur under the same lock:
bool TryDequeue(out Task task)
{
task = null;
lock (this.tasks) {
if (0 < tasks.Count) {
task = tasks.Dequeue();
}
}
if (null == task) {
Log.Trace ("Queue was empty");
}
return null != task;
}
Your Enqueue() code is similarly riddled with problems. Your Enqueue/Dequeue don't ensure progress (you can have dequeue threads blocked waiting even though there are items in the queue), and the signature of Enqueue() is wrong. Overall the posted code is very poor. Frankly, I think you're trying to bite off more than you can chew here... Oh, and never log under a lock.
I strongly suggest you just use ConcurrentQueue.
If you don't have access to .NET 4.0, here is an implementation to get you started:
public class ConcurrentQueue<T>:IEnumerable<T>
{
volatile bool fFinished = false;
ManualResetEvent eventAdded = new ManualResetEvent(false);
private Queue<T> queue = new Queue<T>();
private object syncRoot = new object();
public void SetFinished()
{
lock (syncRoot)
{
fFinished = true;
eventAdded.Set();
}
}
public void Enqueue(T t)
{
Debug.Assert (false == fFinished);
lock (syncRoot)
{
queue.Enqueue(t);
eventAdded.Set();
}
}
private bool Dequeue(out T t)
{
do
{
lock (syncRoot)
{
if (0 < queue.Count)
{
t = queue.Dequeue();
return true;
}
if (false == fFinished)
{
eventAdded.Reset ();
}
}
if (false == fFinished)
{
eventAdded.WaitOne();
}
else
{
break;
}
} while (true);
t = default(T);
return false;
}
public IEnumerator<T> GetEnumerator()
{
T t;
while (Dequeue(out t))
{
yield return t;
}
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
A more detailed answer from me is pending, but I just want to point out something very important.
If you're using .NET 3.5, you can use the ConcurrentQueue<T> class. A backport is included in the Rx extensions library, which is available for .NET 3.5.
Since you want blocking behavior, you would need to wrap a ConcurrentQueue<T> in a BlockingCollection<T> (also available as part of Rx).
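The wrapping itself is one line (ConcurrentQueue<T> happens to be the default backing store, so spelling it out is purely for illustration; Task here is the question's own task type):
using System.Collections.Concurrent;

var queue = new BlockingCollection<Task>(new ConcurrentQueue<Task>());
queue.Add(new Task());     // producer side
Task next = queue.Take();  // consumer side: blocks until an item is available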
It looks like you are trying to replicate a blocking queue. One already exists in the .NET 4.0 BCL as BlockingCollection<T>. If .NET 4.0 is not an option for you, then you can use this code. It uses the Monitor.Wait and Monitor.Pulse methods instead of an AutoResetEvent.
public class BlockingCollection<T>
{
private Queue<T> m_Queue = new Queue<T>();
public T Take() // Dequeue
{
lock (m_Queue)
{
while (m_Queue.Count <= 0)
{
Monitor.Wait(m_Queue);
}
return m_Queue.Dequeue();
}
}
public void Add(T data) // Enqueue
{
lock (m_Queue)
{
m_Queue.Enqueue(data);
Monitor.Pulse(m_Queue);
}
}
}
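Usage mirrors the BCL type (minus bounding and completion):
var queue = new BlockingCollection<string>();

// Producer thread:
queue.Add("work item");

// Consumer thread: Take() blocks inside Monitor.Wait until a producer pulses.
string item = queue.Take();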
Update:
I am fairly certain that it is not possible to implement a producer-consumer queue using AutoResetEvent if you want it to be thread-safe for multiple producers and multiple consumers (I am prepared to be proven wrong if someone can come up with a counterexample). Sure, you will see examples on the internet, but they are all wrong. In fact, one such attempt by Microsoft is flawed in that the queue can get live-locked.
