System.Lazy<T> with different thread-safety mode [duplicate] - c#

This question already has answers here: Lazy&lt;T&gt; without exception caching (7 answers). Closed 19 days ago.
.NET 4.0's System.Lazy&lt;T&gt; class offers three thread-safety modes via the LazyThreadSafetyMode enum, which I'll summarise as:
LazyThreadSafetyMode.None - Not thread safe.
LazyThreadSafetyMode.ExecutionAndPublication - Only one concurrent thread will attempt to create the underlying value. On successful creation, all waiting threads will receive the same value. If an unhandled exception occurs during creation, it will be re-thrown on each waiting thread, cached and re-thrown on each subsequent attempt to access the underlying value.
LazyThreadSafetyMode.PublicationOnly - Multiple concurrent threads will attempt to create the underlying value but the first to succeed will determine the value passed to all threads. If an unhandled exception occurs during creation, it will not be cached and concurrent & subsequent attempts to access the underlying value will re-try the creation & may succeed.
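For reference, the mode is chosen when the Lazy&lt;T&gt; is constructed; a minimal illustration (GetTheValueFromSomewhere standing in for the real factory):
var lazy = new Lazy&lt;object&gt;(GetTheValueFromSomewhere,
                            LazyThreadSafetyMode.ExecutionAndPublication);
object value = lazy.Value; // first access runs the factory; in this mode a thrown exception is cached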
I'd like to have a lazy-initialized value which follows slightly different thread-safety rules, namely:
Only one concurrent thread will attempt to create the underlying value. On successful creation, all waiting threads will receive the same value. If an unhandled exception occurs during creation, it will be re-thrown on each waiting thread, but it will not be cached and subsequent attempts to access the underlying value will re-try the creation & may succeed.
So the key difference from LazyThreadSafetyMode.ExecutionAndPublication is that if a "first go" at creation fails, it can be re-attempted at a later time.
Is there an existing (.NET 4.0) class that offers these semantics, or will I have to roll my own? If I roll my own is there a smart way to re-use the existing Lazy<T> within the implementation to avoid explicit locking/synchronization?
N.B. For a use case, imagine that "creation" is potentially expensive and prone to intermittent error, involving e.g. getting a large chunk of data from a remote server. I wouldn't want to make multiple concurrent attempts to get the data since they'll likely all fail or all succeed. However, if they fail, I'd like to be able to retry later on.

Only one concurrent thread will attempt to create the underlying value. On successful creation, all waiting threads will receive the same value. If an unhandled exception occurs during creation, it will be re-thrown on each waiting thread, but it will not be cached and subsequent attempts to access the underlying value will re-try the creation & may succeed.
Since Lazy doesn't support that, you could roll your own:
private static object syncRoot = new object();
private static object value = null;

public static object Value
{
    get
    {
        if (value == null)
        {
            lock (syncRoot)
            {
                if (value == null)
                {
                    // Only one concurrent thread will attempt to create the underlying value.
                    // If GetTheValueFromSomewhere throws, the value field is never assigned,
                    // so a later access to the Value property will retry the creation.
                    // The exception itself simply propagates to the consumer of the getter.
                    value = GetTheValueFromSomewhere();
                }
            }
        }
        return value;
    }
}
UPDATE:
To meet your requirement that the same exception be propagated to all waiting reader threads:
private static Lazy&lt;object&gt; lazy = new Lazy&lt;object&gt;(GetTheValueFromSomewhere);

public static object Value
{
    get
    {
        try
        {
            return lazy.Value;
        }
        catch
        {
            // Recreate the lazy field so that subsequent readers don't just get a
            // cached exception, but instead attempt to call the expensive
            // GetTheValueFromSomewhere() method again to calculate the value.
            lazy = new Lazy&lt;object&gt;(GetTheValueFromSomewhere);

            // Re-throw the exception so that all blocked reader threads
            // will get this exact same exception thrown.
            throw;
        }
    }
}

Something like this might help:
using System;
using System.Threading;

namespace ADifferentLazy
{
    /// <summary>
    /// Basically the same as Lazy with LazyThreadSafetyMode of ExecutionAndPublication, BUT exceptions are not cached
    /// </summary>
    public class LazyWithNoExceptionCaching<T>
    {
        private Func<T> valueFactory;
        private T value = default(T);
        private readonly object lockObject = new object();
        private bool initialized = false;
        private static readonly Func<T> ALREADY_INVOKED_SENTINEL = () => default(T);

        public LazyWithNoExceptionCaching(Func<T> valueFactory)
        {
            this.valueFactory = valueFactory;
        }

        public bool IsValueCreated
        {
            get { return initialized; }
        }

        public T Value
        {
            get
            {
                // Mimic LazyInitializer.EnsureInitialized()'s double-checked locking,
                // whilst allowing control flow to clear valueFactory on successful initialisation.
                if (Volatile.Read(ref initialized))
                    return value;

                lock (lockObject)
                {
                    if (Volatile.Read(ref initialized))
                        return value;

                    value = valueFactory();
                    Volatile.Write(ref initialized, true);
                }
                valueFactory = ALREADY_INVOKED_SENTINEL;
                return value;
            }
        }
    }
}
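A quick usage sketch (the flaky factory below is hypothetical) showing that a failed first attempt doesn't poison the instance:
// Hypothetical factory that fails on the first call and succeeds afterwards.
int attempts = 0;
var lazy = new LazyWithNoExceptionCaching&lt;string&gt;(() =>
{
    if (++attempts == 1)
        throw new InvalidOperationException("transient failure");
    return "the value";
});

try { var unused = lazy.Value; }      // first access: the factory throws...
catch (InvalidOperationException) { } // ...but the exception is not cached

Console.WriteLine(lazy.Value);        // second access retries and prints "the value"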

Lazy does not support this. It is a design problem with Lazy, because exception "caching" means that the lazy instance will never provide a real value. Transient errors such as network problems can therefore bring applications down permanently, and human intervention is usually required then.
I bet this landmine exists in quite a few .NET apps...
You need to write your own lazy to do this. Or, open a CoreFx Github issue for this.

My attempt at a version of Darin's updated answer that doesn't have the race condition I pointed out... warning, I'm not completely sure this is finally completely free of race conditions.
private static int waiters = 0;
private static volatile Lazy&lt;object&gt; lazy = new Lazy&lt;object&gt;(GetValueFromSomewhere);

public static object Value
{
    get
    {
        Lazy&lt;object&gt; currLazy = lazy;
        if (currLazy.IsValueCreated)
            return currLazy.Value;

        Interlocked.Increment(ref waiters);
        try
        {
            return lazy.Value;
            // just leave "waiters" at whatever it is... no harm in it.
        }
        catch
        {
            if (Interlocked.Decrement(ref waiters) == 0)
                lazy = new Lazy&lt;object&gt;(GetValueFromSomewhere);
            throw;
        }
    }
}
Update: I thought I found a race condition after posting this. The behavior should actually be acceptable, as long as you're OK with a presumably rare case where some thread throws an exception it observed from a slow Lazy<T> after another thread has already returned from a successful fast Lazy<T> (future requests will all succeed).
waiters = 0
t1: comes in runs up to just before the Interlocked.Decrement (waiters = 1)
t2: comes in and runs up to just before the Interlocked.Increment (waiters = 1)
t1: does its Interlocked.Decrement and prepares to overwrite (waiters = 0)
t2: runs up to just before the Interlocked.Decrement (waiters = 1)
t1: overwrites lazy with a new one (call it lazy1) (waiters = 1)
t3: comes in and blocks on lazy1 (waiters = 2)
t2: does its Interlocked.Decrement (waiters = 1)
t3: gets and returns the value from lazy1 (waiters is now irrelevant)
t2: rethrows its exception
I can't come up with a sequence of events that will cause something worse than "this thread threw an exception after another thread yielded a successful result".
Update2: declared lazy as volatile to ensure that the guarded overwrite is seen by all readers immediately. Some people (myself included) see volatile and immediately think "well, that's probably being used incorrectly", and they're usually right. Here's why I used it here: in the sequence of events from the example above, t3 could still read the old lazy instead of lazy1 if it was positioned just before the read of lazy.Value the moment that t1 modified lazy to contain lazy1. volatile protects against that so that the next attempt can start immediately.
I've also reminded myself why I had this thing in the back of my head saying "low-lock concurrent programming is hard, just use a C# lock statement!!!" the entire time I was writing the original answer.
Update3: just changed some text in Update2 pointing out the actual circumstance that makes volatile necessary -- the Interlocked operations used here are apparently implemented full-fence on the important CPU architectures of today and not half-fence as I had originally just sort-of assumed, so volatile protects a much narrower section than I had originally thought.

Partially inspired by Darin's answer, but trying to get this "queue of waiting threads that are inflicted with the exception" and the "try again" features working:
private static Task&lt;object&gt; _fetcher = null;
private static object _value = null;

public static object Value
{
    get
    {
        if (_value != null) return _value;

        // We're "locking" then
        var tcs = new TaskCompletionSource&lt;object&gt;();
        var tsk = Interlocked.CompareExchange(ref _fetcher, tcs.Task, null);
        if (tsk == null) // We won the race to set up the task
        {
            try
            {
                var result = new object(); // Whatever the real, expensive operation is
                tcs.SetResult(result);
                _value = result;
                return result;
            }
            catch (Exception ex)
            {
                Interlocked.Exchange(ref _fetcher, null); // We failed. Let someone else try again in the future
                tcs.SetException(ex);
                throw;
            }
        }

        tsk.Wait(); // Someone else is doing the work
        return tsk.Result;
    }
}
I am slightly concerned though - can anyone see any obvious races here where it will fail in an unobvious way?

Related

Handling concurrency at group level rather than application level

I would like to handle a concurrency issue in the API. The situation: we get requests from multiple users for the same group, and there can be multiple groups as well. I think the solution below should work; correct me if not.
// This will be a singleton across the API
ConcurrentDictionary<string, string> dict = new ConcurrentDictionary<string, string>();

if (dict.ContainsKey(groupId))
{
    throw new Exception("request already accepted");
}
else
{
    // Thinking this is an atomic operation, or I can put a lock statement around it
    if (dict.TryAdd(groupId, "Added") == false)
    {
        throw new Exception("request already accepted");
    }
    // continue the original logic
}
Every 10 minutes we will clean the older keys out of the dictionary (note that this operation should not block normal processing, since it only touches old, already-used keys). Does ConcurrentDictionary lock at the key level rather than the dictionary level, so that we block only the requests for a particular group instead of all requests?
One quick solution is a lock wrapper around the dictionary's get and add operations, but that would stop all requests from proceeding; we want to block at the group level. Any help is greatly appreciated.
Adding items to a concurrent dictionary is a very fast operation. You are also not making threads wait for the first one to finish; you throw right away if they cannot acquire the lock.
That makes me think double-checked locking is probably not needed in your case.
So I would simply do your inner check without the outer one:
if (dict.TryAdd(groupId, "Added") == false)
{
    throw new Exception("request already accepted");
}
If you have waaaay too many requests after the first one, then I would do what you have done, since ContainsKey will not lock.
Another interesting topic is how you are going to clean this up.
Maybe you could do all this locking in an IDisposable object that removes itself from the dictionary at dispose time. For example:
// NOTE: THIS IS JUST PSEUDOCODE, cleaned up here so it compiles

// In your controller, you can simply do this...
public void SomeControllerAction(int groupId)
{
    using (var operation = new GroupOperation(groupId))
    {
        // In here I am sure I am the only operation for this group
    }
    // In here I am sure that the operation got removed from the dictionary
}

// This class hides all the complexity of the concurrent dictionary
public class GroupOperation : IDisposable
{
    private static readonly ConcurrentDictionary<int, int> singletonDictionary =
        new ConcurrentDictionary<int, int>();

    private readonly int groupId;

    public GroupOperation(int groupId)
    {
        this.groupId = groupId;
        if (!singletonDictionary.TryAdd(groupId, 1))
        {
            throw new Exception("Sorry, operation in progress for your group");
        }
    }

    public void Dispose()
    {
        int ignored;
        singletonDictionary.TryRemove(groupId, out ignored);
    }
}

Chaining tasks with delays

I have a need to keep track of a task and potentially queue up another task after some delay so the way I'm thinking of doing it looks something like this:
private Task lastTask;

public void DoSomeTask()
{
    if (lastTask == null)
    {
        lastTask = Task.FromResult(false);
    }

    lastTask = lastTask.ContinueWith(t =>
    {
        // do some task
    }).ContinueWith(t => Task.Delay(250).Wait());
}
My question is: if I do something like this, creating potentially long chains of tasks, will the older tasks be disposed, or will they end up sticking around forever because ContinueWith takes the last task as a parameter (so it's a closure)? If so, how can I chain tasks while avoiding this problem?
Is there a better way to do this?
Task.Delay(250).Wait()
You know you're doing something wrong when you use Wait in code you're trying to make asynchronous. That's one wasted thread doing nothing.
The following would be much better:
lastTask = lastTask.ContinueWith(t =>
{
    // do some task
}).ContinueWith(t => Task.Delay(250)).Unwrap();
ContinueWith returns a Task<Task>, and the Unwrap call turns that into a Task which will complete when the inner task does.
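To make the types concrete, here's a minimal sketch (someTask is any existing Task):
Task&lt;Task&gt; nested = someTask.ContinueWith(t => Task.Delay(250));
Task flattened = nested.Unwrap(); // completes only when the inner Delay task completes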
Now, to answer your question, let's take a look at what the compiler generates:
public void DoSomeTask()
{
    if (this.lastTask == null)
        this.lastTask = (Task) Task.FromResult<bool>(false);
    // ISSUE: method pointer
    // ISSUE: method pointer
    this.lastTask = this.lastTask
        .ContinueWith(
            Program.<>c.<>9__2_0
            ?? (Program.<>c.<>9__2_0 = new Action<Task>((object) Program.<>c.<>9, __methodptr(<DoSomeTask>b__2_0))))
        .ContinueWith<Task>(
            Program.<>c.<>9__2_1
            ?? (Program.<>c.<>9__2_1 = new Func<Task, Task>((object) Program.<>c.<>9, __methodptr(<DoSomeTask>b__2_1))))
        .Unwrap();
}

[CompilerGenerated]
[Serializable]
private sealed class <>c
{
    public static readonly Program.<>c <>9;
    public static Action<Task> <>9__2_0;
    public static Func<Task, Task> <>9__2_1;

    static <>c()
    {
        Program.<>c.<>9 = new Program.<>c();
    }

    public <>c()
    {
        base.\u002Ector();
    }

    internal void <DoSomeTask>b__2_0(Task t)
    {
    }

    internal Task <DoSomeTask>b__2_1(Task t)
    {
        return Task.Delay(250);
    }
}
This was decompiled with dotPeek in "show me all the guts" mode.
Look at this part:
.ContinueWith<Task>(
    Program.<>c.<>9__2_1
    ?? (Program.<>c.<>9__2_1 = new Func<Task, Task>((object) Program.<>c.<>9, __methodptr(<DoSomeTask>b__2_1))))
The ContinueWith function is given a singleton delegate. So, there's no closing over any variable there.
Now, there's this function:
internal Task <DoSomeTask>b__2_1(Task t)
{
    return Task.Delay(250);
}
The t here is a reference to the previous task. Notice something? It's never used. The JIT will mark this local as being unreachable, and the GC will be able to clean it. With optimizations enabled, the JIT will aggressively mark locals that are eligible for collection, even to the point that an instance method can be executing while the instance is being collected by the GC, if said instance method doesn't reference this in the code left to execute.
Now, one last thing, there's the m_parent field in the Task class, which is not good for your scenario. But as long as you're not using TaskCreationOptions.AttachedToParent you should be fine. You could always add the DenyChildAttach flag for extra safety and self-documentation.
Here's the function which deals with that:
internal static Task InternalCurrentIfAttached(TaskCreationOptions creationOptions)
{
    return (creationOptions & TaskCreationOptions.AttachedToParent) != 0 ? InternalCurrent : null;
}
So, you should be safe here. If you want to be sure, run a memory profiler on a long chain, and see for yourself.
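If you want that extra safety, one way to apply the flag (a sketch, not from the original code) is via the ContinueWith overloads that accept continuation options:
lastTask = lastTask.ContinueWith(t =>
{
    // do some task
}, TaskContinuationOptions.DenyChildAttach)
.ContinueWith(t => Task.Delay(250), TaskContinuationOptions.DenyChildAttach)
.Unwrap();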
if I do something like this, creating potentially long chains of tasks, will the older tasks be disposed
Tasks do not require explicit disposal, as they don't contain unmanaged resources.
will they end up sticking around forever because the ContinueWith takes the last task as a parameter (so it's a closure)
It's not a closure. A closure is an anonymous method using a variable from outside the scope of that anonymous method in its body. You're not doing that, so you're not closing over it. Each Task does however have a field where it keeps track of its parent, so the managed Task object will still be accessible if you're using this pattern.
Take a look at the source code of the ContinuationTaskFromTask class. It has the following code:
internal override void InnerInvoke()
{
    // Get and null out the antecedent. This is crucial to avoid a memory
    // leak with long chains of continuations.
    var antecedent = m_antecedent;
    Contract.Assert(antecedent != null,
        "No antecedent was set for the ContinuationTaskFromTask.");
    m_antecedent = null;
m_antecedent is the field that holds a reference to the antecedent task. The developers here explicitly set it to null (after it is no longer needed) to make sure that there is no memory leak with long chains of continuations, which I guess is your concern.

"Fixed" / "Load Balanced" C# thread pool?

I have a 3rd party component that is "expensive" to spin up. This component is not thread safe. Said component is hosted inside of a WCF service (for now), so... every time a call comes into the service I have to new up the component.
What I'd like to do instead is have a pool of say 16 threads that each spin up their own copy of the component and have a mechanism to call the method and have it distributed to one of the 16 threads and have the value returned.
So something simple like:
var response = threadPool.CallMethod(param1, param2);
Its fine for the call to block until it gets a response as I need the response to proceed.
Any suggestions? Maybe I'm overthinking it and a ConcurrentQueue serviced by 16 threads would do the job, but I'm not sure how the method's return value would get returned to the caller?
WCF will already use the thread pool to manage its resources so if you add a layer of thread management on top of that it is only going to go badly. Avoid doing that if possible as you will get contention on your service calls.
What I would do in your situation is just use a single ThreadLocal or thread static field that would get initialized with your expensive object once per thread. Thereafter it would be available to that thread pool thread.
That is assuming that your object is fine on an MTA thread; I'm guessing it is from your post since it sounds like things are current working, but just slow.
There is the concern that too many objects get created and you use too much memory as the pool grows too large. However, see if this is the case in practice before doing anything else. This is a very simple strategy to implement so easy to trial. Only get more complex if you really need to.
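A minimal sketch of that idea, with ExpensiveComponent and DoWork standing in for the 3rd-party type and its method:
// Each thread lazily creates, and then keeps, its own instance.
private static readonly ThreadLocal&lt;ExpensiveComponent&gt; component =
    new ThreadLocal&lt;ExpensiveComponent&gt;(() => new ExpensiveComponent());

public string CallMethod(int param1, int param2)
{
    // First use on a given thread runs the factory; later calls on
    // the same thread reuse that thread's instance.
    return component.Value.DoWork(param1, param2);
}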
First and foremost, I agree with @briantyler: ThreadLocal&lt;T&gt; or thread static fields are probably what you want. You should go with that as a starting point and consider other options if it doesn't meet your needs.
A complicated but flexible alternative is a singleton object pool. In its most simple form your pool type will look like this:
public sealed class ObjectPool<T>
{
    private readonly ConcurrentQueue<T> __objects = new ConcurrentQueue<T>();
    private readonly Func<T> __factory;

    public ObjectPool(Func<T> factory)
    {
        __factory = factory;
    }

    public T Get()
    {
        T obj;
        return __objects.TryDequeue(out obj) ? obj : __factory();
    }

    public void Return(T obj)
    {
        __objects.Enqueue(obj);
    }
}
This doesn't seem awfully useful if you're thinking of type T in terms of primitive classes or structs (i.e. ObjectPool<MyComponent>), as the pool does not have any threading controls built in. But you can substitute your type T for a Lazy<T> or Task<T> monad, and get exactly what you want.
Pool initialisation:
Func<Task<MyComponent>> factory = () => Task.Run(() => new MyComponent());
ObjectPool<Task<MyComponent>> pool = new ObjectPool<Task<MyComponent>>(factory);

// "Pre-warm" the pool with 16 concurrent tasks. This starts the tasks on
// the thread pool and returns immediately without blocking.
for (int i = 0; i < 16; i++)
{
    pool.Return(pool.Get());
}
Usage:
// Get a pooled task or create a new one. The task may have already
// completed, in which case Result will be available immediately. If
// the task is still in flight, accessing its Result will block.
Task<MyComponent> task = pool.Get();
try
{
    MyComponent component = task.Result; // Alternatively you can "await task"
    // Do something with component.
}
finally
{
    pool.Return(task);
}
This method is more complex than maintaining your component in a ThreadLocal or thread static field, but if you need to do something fancy like limiting the number of pooled instances, the pool abstraction can be quite useful.
EDIT
Basic "fixed set of X instances" pool implementation with a Get which blocks once the pool has been drained:
public sealed class ObjectPool<T>
{
    private readonly Queue<T> __objects;

    public ObjectPool(IEnumerable<T> items)
    {
        __objects = new Queue<T>(items);
    }

    public T Get()
    {
        lock (__objects)
        {
            while (__objects.Count == 0)
            {
                Monitor.Wait(__objects);
            }
            return __objects.Dequeue();
        }
    }

    public void Return(T obj)
    {
        lock (__objects)
        {
            __objects.Enqueue(obj);
            Monitor.Pulse(__objects);
        }
    }
}
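A usage sketch for this fixed variant (MyComponent and DoWork again standing in for the 3rd-party component and its method):
// Create exactly 16 instances up front; Get() blocks once all are checked out.
var pool = new ObjectPool&lt;MyComponent&gt;(
    Enumerable.Range(0, 16).Select(i => new MyComponent()));

MyComponent component = pool.Get();
try
{
    component.DoWork();
}
finally
{
    pool.Return(component); // always hand the instance back
}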

How to ensure data synchronization across threads within a "safe" area (e.g. not in a critical section) without locking everything

We are using a proprietary API that requires synchronization of data at some point.
I've thought about some ways of ensuring data consistency but am eager to get more input on better solutions.
Here is a long running Task outlining the API syncing
new Task(() =>
{
    while (true)
    {
        // Here other threads can access any API object (that's fine)
        API.CriticalOperationStart(); // Between Start and End no API object may be used
        API.CriticalOperationEnd();
        // Here other threads can access any API object (that's fine too)
    }
}, TaskCreationOptions.LongRunning).Start();
This is a separate task that actually does some data syncing.
The area between Start and End is critical. No other API call may be done while the API is in this critical step.
Here are some unguarded threads using distinct API objects:
// Multiple calls to different API objects should not be exclusive

// OtherThread1:
APIObject1.SetData(42);

// OtherThread2:
APIObject2.SetData(43);
Constraints:
No APIObject Method is allowed to be called during the API is in the critical section.
Both SetData calls are allowed to be done simultaneously. They do not interfere with each other, only with the critical section.
Generally speaking accessing one APIObject from multiple threads is not thread-safe but accessing multiple APIObjects does not interfere with the API except during critical section.
The critical section must never be executed while any APIObject Method is used.
Guarding access to one APIObject from multiple threads is not required.
The trivial approach
Use a lock Object and lock the critical section and every call to API Objects.
This would effectively work, but it creates a lot of unnecessary contention, because it also means only one APIObject at a time could be accessed.
Concurrent container of Actions
Use a single instance of a concurrent container where each modification of an APIObject is placed into a thread safe container and is executed in the task above explicitly by traversing the container outside the critical section and calling all actions. (Not a Consumer pattern, as waiting for new entries of the container must not block the task since the critical section must be executed periodically)
This imposes some drawbacks. Closure issues when capturing contexts could be one. Another is that reading from an APIObject returns stale data as long as the actions in the container have not yet been executed. It gets even worse if the creation of an APIObject is put in the container and subsequent code assumes it has already been created.
Make something up with Wait Handles and atomic increments
Every APIObject access could be guarded with a ManualResetEvent. The critical section would wait for the signal to be set by the APIObjects, the signal would only be set when all calls to APIObjects have finished (some sort of atomic increments/decrement around accessing APIObjects).
Sounds like a great way for deadlocks. May lock out the critical section for long periods of time when continuous APIObject calls prevent the signal from being ever set.
Does not solve the problem that APIObjects may not be accessed during critical section since this construct only guards in the other direction.
Requires additional locking (e.g. Monitor.IsEntered on the critical section) to not lock out simultaneous calls to distinct APIObjects.
=> Awful way, making a complex situation even more complex
If copying an APIObject is relatively inexpensive (or if it's moderately expensive and you don't sync very often), then you can put the objects in a wrapper that contains a singleton global_timestamp and a local_timestamp.
When you update an object, first check global_timestamp: if global_timestamp == long.MaxValue, return a destructively updated object; likewise, if global_timestamp != long.MaxValue and global_timestamp == local_timestamp, return a destructively updated object. However, if global_timestamp != long.MaxValue and global_timestamp != local_timestamp, return an updated copy of the object and set local_timestamp = global_timestamp.
When you perform a sync, use an Interlocked update to set global_timestamp = DateTime.UtcNow.ToBinary(), and when the sync is complete set global_timestamp = long.MaxValue.
This way the rest of the program doesn't have to pause while a sync is performed, and the sync should see consistent data.
// APIObject provided to you
public class APIObject
{
    private string foo;

    public void setFoo(string _foo)
    {
        this.foo = _foo;
    }
}

// Global timestamp: a read-only view for the wrappers, plus a read-write version for the sync task
public class GlobalTimestamp
{
    protected long timestamp = long.MaxValue;

    public long getTimestamp()
    {
        return timestamp;
    }
}

public class GlobalTimestampRW : GlobalTimestamp
{
    public void startSync(long _timestamp)
    {
        long value = System.Threading.Interlocked.CompareExchange(ref timestamp, _timestamp, long.MaxValue);
        if (value != long.MaxValue)
            throw new InvalidOperationException(); // somebody else called this method already
    }

    public void endSync(long _timestamp)
    {
        long value = System.Threading.Interlocked.CompareExchange(ref timestamp, long.MaxValue, _timestamp);
        if (value != _timestamp)
            throw new InvalidOperationException(); // somebody else called this method already
    }
}

// Wrapper
public class APIWrapper
{
    private APIObject apiObject;
    private GlobalTimestamp globalTimestamp;
    private long localTimestamp = long.MinValue;

    public APIObject setFoo(string _foo)
    {
        long tempGlobalTimestamp = globalTimestamp.getTimestamp();
        if (tempGlobalTimestamp == long.MaxValue || tempGlobalTimestamp == localTimestamp)
        {
            // No sync in progress, or we already copied during this sync: update in place.
            apiObject.setFoo(_foo);
            return apiObject;
        }
        else
        {
            // A sync is in progress: update a copy so the sync sees consistent data.
            // (A copy() method is assumed to exist on APIObject.)
            apiObject = apiObject.copy();
            apiObject.setFoo(_foo);
            localTimestamp = tempGlobalTimestamp;
            return apiObject;
        }
    }
}
GlobalTimestampRW globalTimestamp;

new Task(() =>
{
    while (true)
    {
        long timestamp = DateTime.UtcNow.ToBinary();
        globalTimestamp.startSync(timestamp);

        API.CriticalOperationStart(); // Between Start and End no API object may be used
        API.CriticalOperationEnd();

        globalTimestamp.endSync(timestamp);
    }
}, TaskCreationOptions.LongRunning).Start();

Producer Consumer queue does not dispose

I have built a producer/consumer queue wrapping .NET 4.0's ConcurrentQueue, with ManualResetEventSlim signalling between the producing (Enqueue) side and the consuming (while(true)) thread.
The queue looks like:
public class ProducerConsumerQueue<T> : IDisposable, IProducerConsumerQueue<T>
{
    private bool _IsActive = true;

    public int Count
    {
        get { return this._workerQueue.Count; }
    }

    public bool IsActive
    {
        get { return _IsActive; }
        set { _IsActive = value; }
    }

    public event Dequeued<T> OnDequeued = delegate { };
    public event LoggedHandler OnLogged = delegate { };

    private ConcurrentQueue<T> _workerQueue = new ConcurrentQueue<T>();
    private object _locker = new object();
    Thread[] _workers;

    #region IDisposable Members
    int _workerCount = 0;
    ManualResetEventSlim _mres = new ManualResetEventSlim();

    public void Dispose()
    {
        _IsActive = false;
        _mres.Set();
        LogWriter.Write("55555555555");
        // Wait for each consumer thread to finish.
        for (int i = 0; i < _workerCount; i++)
        {
            _workers[i].Join();
        }
        LogWriter.Write("6666666666");
        // Release any OS resources.
    }

    public ProducerConsumerQueue(int workerCount)
    {
        try
        {
            _workerCount = workerCount;
            _workers = new Thread[workerCount];
            // Create and start a separate thread for each worker
            for (int i = 0; i < workerCount; i++)
                (_workers[i] = new Thread(Work)).Start();
        }
        catch (Exception ex)
        {
            OnLogged(ex.Message + ex.StackTrace);
        }
    }
    #endregion

    #region IProducerConsumerQueue<T> Members
    public void EnqueueTask(T task)
    {
        if (_IsActive)
        {
            _workerQueue.Enqueue(task);
            //Monitor.Pulse(_locker);
            _mres.Set();
        }
    }

    public void Work()
    {
        while (_IsActive)
        {
            try
            {
                T item = Dequeue();
                if (item != null)
                    OnDequeued(item);
            }
            catch (Exception ex)
            {
                OnLogged(ex.Message + ex.StackTrace);
            }
        }
    }
    #endregion

    private T Dequeue()
    {
        try
        {
            T dequeueItem;
            //if (_workerQueue.Count > 0)
            //{
            _workerQueue.TryDequeue(out dequeueItem);
            if (dequeueItem != null)
                return dequeueItem;
            //}
            if (_IsActive)
            {
                _mres.Wait();
                _mres.Reset();
            }
            //_workerQueue.TryDequeue(out dequeueItem);
            return dequeueItem;
        }
        catch (Exception ex)
        {
            OnLogged(ex.Message + ex.StackTrace);
            T dequeueItem;
            //if (_workerQueue.Count > 0)
            //{
            _workerQueue.TryDequeue(out dequeueItem);
            return dequeueItem;
        }
    }

    public void Clear()
    {
        _workerQueue = new ConcurrentQueue<T>();
    }
}
When calling Dispose it sometimes blocks on the Join (with one consuming thread) and the Dispose method is stuck. I guess it gets stuck on the Wait of the reset event, but to handle that I call Set in Dispose.
Any suggestions?
Update: I understand your point about needing a queue internally. My suggestion to use a BlockingCollection<T> is based on the fact that your code contains a lot of logic to provide the blocking behavior. Writing such logic yourself is very prone to bugs (I know this from experience); so when there's an existing class within the framework that does at least some of the work for you, it's generally preferable to go with that.
A complete example of how you can implement this class using a BlockingCollection<T> is a little bit too large to include in this answer, so I've posted a working example on pastebin.com; feel free to take a look and see what you think.
I also wrote an example program demonstrating the above example here.
Is my code correct? I wouldn't say yes with too much confidence; after all, I haven't written unit tests, run any diagnostics on it, etc. It's just a basic draft to give you an idea how using BlockingCollection<T> instead of ConcurrentQueue<T> cleans up a lot of your logic (in my opinion) and makes it easier to focus on the main purpose of your class (consuming items from a queue and notifying subscribers) rather than a somewhat difficult aspect of its implementation (the blocking behavior of the internal queue).
Question posed in a comment:
Any reason you're not using BlockingCollection<T>?
Your answer:
[...] i needed a queue.
From the MSDN documentation on the default constructor for the BlockingCollection<T> class:
The default underlying collection is a ConcurrentQueue<T>.
If the only reason you opted to implement your own class instead of using BlockingCollection<T> is that you need a FIFO queue, well then... you might want to rethink your decision. A BlockingCollection<T> instantiated using the default parameterless constructor is a FIFO queue.
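For illustration, a minimal consumer loop over a default-constructed (and therefore FIFO) BlockingCollection&lt;T&gt; might look like this sketch:
var queue = new BlockingCollection&lt;string&gt;(); // default backing store is a ConcurrentQueue&lt;string&gt;

// Producer side:
queue.Add("work item");

// Consumer thread: blocks while the collection is empty and exits
// cleanly once CompleteAdding() has been called and the queue drains.
foreach (string item in queue.GetConsumingEnumerable())
{
    Process(item); // hypothetical handler
}

// On shutdown:
queue.CompleteAdding();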
That said, while I don't think I can offer a comprehensive analysis of the code you've posted, I can at least offer a couple of pointers:
I'd be very hesitant to use events in the way that you are here for a class that deals with such tricky multithreaded behavior. Calling code can attach any event handlers it wants, and these can in turn throw exceptions (which you don't catch), block for long periods of time, or possibly even deadlock for reasons completely outside your control--which is very bad in the case of a blocking queue.
There's a race condition in your Dequeue and Dispose methods.
Look at these lines of your Dequeue method:
if (_IsActive) // point A
{
    _mres.Wait(); // point C
    _mres.Reset(); // point D
}
And now take a look at these two lines from Dispose:
_IsActive = false;
_mres.Set(); // point B
Let's say you have three threads, T1, T2, and T3. T1 and T2 are both at point A, where each checks _IsActive and finds true. Then Dispose is called, and T3 sets _IsActive to false (but T1 and T2 have already passed point A) and then reaches point B, where it calls _mres.Set(). Then T1 gets to point C, moves on to point D, and calls _mres.Reset(). Now T2 reaches point C and will be stuck forever since _mres.Set will not be called again (any thread executing Enqueue will find _IsActive == false and return immediately, and the thread executing Dispose has already passed point B).
I'd be happy to try and offer some help on solving this race condition, but I'm skeptical that BlockingCollection<T> isn't in fact exactly the class you need for this. If you can provide some more information to convince me that this isn't the case, maybe I'll take another look.
Since _IsActive isn't marked as volatile and there's no lock around all access to it, each worker thread may keep reading a cached copy of the value that never gets refreshed. So setting _IsActive to false in Dispose will not necessarily be observed by all running threads.
http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/
private volatile bool _IsActive = true;
