I have a class that provides thread-safe access to LinkedList<> (adding and reading items).
class LinkedListManager {
    public static object locker = new object();
    public static LinkedList<AddXmlNodeArgs> tasks { get; set; }
    public static EventWaitHandle wh { get; set; }

    public void AddItemThreadSafe(AddXmlNodeArgs task) {
        lock (locker)
            tasks.AddLast(task);
        wh.Set();
    }

    public LinkedListNode<AddXmlNodeArgs> GetNextItemThreadSafe(LinkedListNode<AddXmlNodeArgs> prevItem) {
        LinkedListNode<AddXmlNodeArgs> nextItem;
        if (prevItem == null) {
            lock (locker)
                return tasks.First;
        }
        lock (locker)   // *1
            nextItem = prevItem.Next;
        if (nextItem == null) {   // *2
            wh.WaitOne();
            return prevItem.Next;
        }
        lock (locker)
            return nextItem;
    }
}
I have 3 threads: 1st - writes data to tasks; 2nd and 3rd - read data from tasks.
In 2nd and 3rd threads I retrieve data from tasks by calling GetNextItemThreadSafe().
The problem is that sometimes GetNextItemThreadSafe() returns null even though the method's parameter (prevItem) is not null.
Question:
Can a thread somehow jump over lock(locker) (// *1) and get straight to // *2?
I think it's the only way to get a return value = null from GetNextItemThreadSafe()...
I've spent a whole day trying to find the mistake, but it's extremely hard because it's almost impossible to debug step by step (tasks contains 5,000 elements and the error occurs unpredictably). Btw, sometimes the program works fine, without any exception.
I'm new to threads so maybe I'm asking silly questions...
Not clear what you're trying to achieve. Are both threads supposed to get the same elements of the linked list? Or are you trying to have two threads process the tasks out of the list in parallel? If it's the second case, then what you are doing cannot work. You'd better look at BlockingCollection, which is thread-safe and designed exactly for this kind of multi-threaded producer/consumer pattern.
A lock is only active when executing the block of code declared following the lock. Since you lock multiple times on single commands, this effectively degenerates to only locking the single command that follows the lock, after which another thread is free to jump in and consume the data. Perhaps what you meant is this:
public LinkedListNode<AddXmlNodeArgs> GetNextItemThreadSafe(LinkedListNode<AddXmlNodeArgs> prevItem) {
    LinkedListNode<AddXmlNodeArgs> returnItem;
    lock (locker) { // Lock the entire method contents to make it atomic
        if (prevItem == null) {
            returnItem = tasks.First;
        }
        else {
            // *1
            returnItem = prevItem.Next; // *2: may legitimately be null if no next task exists yet
            // wh.WaitOne(); // Waiting in a locked block is not a good idea
        }
    }
    return returnItem;
}
Note that only assignments (as opposed to returns) occur within the locked block and there is a single return point at the bottom of the method.
I think the solution is the following:
In your Add method, add the node and set the EventWaitHandle both inside the same lock
In the Get method, inside a lock, check if the next element is empty and inside the same lock, Reset the EventWaitHandle. Outside of the lock, wait on the EventWaitHandle.
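A minimal sketch of that arrangement, reusing the fields from the question and assuming wh is a ManualResetEvent (so that a Set() that happens before the Wait is not lost):

public void AddItemThreadSafe(AddXmlNodeArgs task) {
    lock (locker) {
        tasks.AddLast(task);
        wh.Set();   // signal inside the same lock as the add
    }
}

public LinkedListNode<AddXmlNodeArgs> GetNextItemThreadSafe(LinkedListNode<AddXmlNodeArgs> prevItem) {
    while (true) {
        lock (locker) {
            LinkedListNode<AddXmlNodeArgs> next =
                (prevItem == null) ? tasks.First : prevItem.Next;
            if (next != null)
                return next;
            wh.Reset();   // nothing to read yet; reset inside the same lock
        }
        wh.WaitOne();     // wait outside the lock for the producer's Set()
    }
}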
Related
I'm trying to understand thread-safe access to fields. To explore it, I implemented this test sample:
class Program
{
    public static void Main()
    {
        Foo test = new Foo();
        bool temp;

        new Thread(() => { test.Loop = false; }).Start();

        do
        {
            temp = test.Loop;
        }
        while (temp == true);
    }
}

class Foo
{
    public bool Loop = true;
}
As expected, sometimes it doesn't terminate. I know that this issue can be solved either with the volatile keyword or with a lock. Suppose I'm not the author of class Foo, so I can't make the field volatile. I tried using lock:
public static void Main()
{
    Foo test = new Foo();
    object locker = new Object();
    bool temp;

    new Thread(() => { test.Loop = false; }).Start();

    do
    {
        lock (locker)
        {
            temp = test.Loop;
        }
    }
    while (temp == true);
}
This seems to solve the issue. Just to be sure, I moved the loop inside the lock block:
lock (locker)
{
    do
    {
        temp = test.Loop;
    }
    while (temp == true);
}
and... the program no longer terminates.
This totally confuses me. Doesn't lock provide thread-safe access? If not, how can I access non-volatile fields safely? I could use Thread.VolatileRead(), but it isn't suitable for every case, e.g. non-primitive types or properties. I assumed that Monitor.Enter would do the job; am I right? I don't understand how this is supposed to work.
This piece of code:
do
{
    lock (locker)
    {
        temp = test.Loop;
    }
}
while (temp == true);
works because of a side-effect of lock: it causes a 'memory-fence'. The actual locking is irrelevant here. Equivalent code:
do
{
    Thread.MemoryBarrier();
    temp = test.Loop;
}
while (temp == true);
And the issue you're trying to solve here is not exactly thread-safety, it is about caching of the variable (stale data).
It does not terminate anymore because you are accessing the variable outside of the lock as well.
In
new Thread(() => { test.Loop = false; }).Start();
you write to the variable outside the lock. This write is not guaranteed to be visible.
Two concurrent accesses to the same location of which at least one is a write is a data race. Don't do that.
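For illustration, here is a sketch of the test without the data race, where every access to Loop, the write included, happens under the same lock:

Foo test = new Foo();
object locker = new object();
bool temp;

// The writer takes the same lock as the reader, so the write is
// published and will eventually be observed by the reading loop.
new Thread(() => { lock (locker) { test.Loop = false; } }).Start();

do
{
    lock (locker)
    {
        temp = test.Loop;
    }
}
while (temp == true);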
Lock provides thread safety for two or more code blocks on different threads that use the same lock object.
The Loop assignment inside your new-thread delegate is not enclosed in a lock.
That means there is no thread safety there.
In general, no, lock is not something that will magically make all code inside it thread-safe.
The simple rule is: If you have some data that's shared by multiple threads, but you always access it only inside a lock (using the same lock object), then that access is thread-safe.
Once you leave that “simple” code and start asking questions like “How could I use volatile/VolatileRead() safely here?” or “Why does this code that doesn't use lock properly seem to work?”, things get complicated quickly. You should probably avoid that, unless you're prepared to spend a lot of time learning about the C# memory model. And even then, bugs that manifest only once in a million runs or only on certain CPUs (e.g. ARM) are very easy to make.
Locking only works when all access to the field is controlled by a lock. In your example only the reading is locked; since the writing is not, there is no thread-safety.
However, it is also crucial that the locking takes place on a shared object; otherwise there is no way for another thread to know that someone is trying to access the field. So in your case, when locking on an object scoped only inside the Main method, no call on another thread would ever be blocked by it.
If you have no way to change Foo, the only way to obtain thread-safety is to have ALL calls lock on the same Foo instance. This would generally not be recommended, though, since all methods on the object would be locked.
The volatile keyword is not a guarantee of thread-safety in itself. It is meant to indicate that the value of a field can be changed from different threads, so any thread reading that field should not cache it, since the value could change.
To achieve thread-safety, Foo should probably look something along these lines:
class Program
{
    public static void Main()
    {
        Foo test = new Foo();
        test.Run();

        new Thread(() => { test.Loop = false; }).Start();

        bool temp;
        do
        {
            temp = test.Loop;
        }
        while (temp == true);
    }
}

class Foo
{
    private volatile bool _loop = true;
    private object _syncRoot = new object();

    public bool Loop
    {
        // All access to the Loop value is controlled by a lock on an
        // instance-scoped object, i.e. while one thread accesses the
        // value, all other threads are blocked.
        get { lock (_syncRoot) return _loop; }
        set { lock (_syncRoot) _loop = value; }
    }

    public void Run()
    {
        Task.Run(() =>
        {
            while (_loop) // _loop is volatile, so the value is not cached
            {
                // Do something
            }
        });
    }
}
Assume we have two threads executing the method below:
while (true) {
    if (Queue.Count() <= 0) {
        wait();
    }
    object anObject = Queue.Dequeue();
}
Now the problem occurs when the Queue has one element in it: Thread 1 is about to execute the Queue.Count line, Thread 2 is about to execute Queue.Dequeue(), and the scheduler runs Thread 1 first.
In this situation, Thread 1 will throw an exception: its Queue.Count() check returns 1, but Thread 2 dequeues the only element first, so Thread 1 ends up trying to dequeue from an empty queue. How can I handle this? What is the best solution if I want to dequeue safely? Should I synchronize or lock something?
Best regards,
Kemal
The best solution, assuming you are using .NET 4.0 or higher and really need a queue, is to switch to ConcurrentQueue and its TryDequeue method. ConcurrentQueue is thread-safe.
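A consumer loop might then look like this sketch (the queue instance name is illustrative):

var queue = new ConcurrentQueue<object>();

while (true)
{
    object anObject;

    // TryDequeue returns false instead of throwing when the queue
    // is empty, so the count-then-dequeue race disappears.
    if (queue.TryDequeue(out anObject))
    {
        // process anObject
    }
    // Note: this loop spins while the queue is empty, which is why a
    // blocking producer/consumer collection is often preferable (below).
}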
That said, it looks from your code snippet like what you are really looking for is a thread-safe producer/consumer queue. In that case, use the BlockingCollection class and its Take method:
while (true) {
    // This will block until an item becomes available to take.
    // It is also thread safe, and can be called by multiple
    // threads simultaneously. When an item is added, only one
    // waiting thread will Take() it.
    object anObject = myBlockingCollection.Take();

    // do something with anObject
}
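The producer side is then a single Add call; a sketch, with myBlockingCollection created once and shared between the threads (someWorkItem is illustrative):

// Created once and shared by producer and consumer threads.
var myBlockingCollection = new BlockingCollection<object>();

// Producer side: Add is thread-safe; one blocked Take() in a
// consumer will receive the item.
myBlockingCollection.Add(someWorkItem);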
You can use the thread-safe queue ConcurrentQueue,
or, if you don't want to use it:
while (true)
{
    Monitor.Enter(lockObj);
    try
    {
        // Use a while, not an if: with more than one consumer, the queue
        // may be empty again by the time this thread wakes up.
        while (Queue.Count <= 0)
        {
            Monitor.Wait(lockObj);
        }
        object anObject = Queue.Dequeue();
    }
    finally
    {
        Monitor.Exit(lockObj);
    }
}
or, using lock:
while (true)
{
    lock (lockObj)
    {
        // Again a while, not an if, to re-check after waking up.
        while (Queue.Count <= 0)
        {
            Monitor.Wait(lockObj);
        }
        object anObject = Queue.Dequeue();
    }
}

Either way, the producer must call Monitor.Pulse(lockObj) under the same lock after enqueueing, or Wait will never wake up; the pattern below shows this.
Try this pattern:
Producer
public void Produce(object o)
{
    lock (_queueLock)
    {
        _queue.Enqueue(o);
        Monitor.Pulse(_queueLock);
    }
}
Consumer
public object Consume()
{
    lock (_queueLock)
    {
        while (_queue.Count == 0)
        {
            Monitor.Wait(_queueLock);
        }
        return _queue.Dequeue();
    }
}
Lock the queue before accessing it.
lock (Queue) {
    // blah blah
}
EDIT
while (true) {
    lock (Queue) {
        if (Queue.Count() > 0) {
            // Dequeue only if there is still something in the queue
            object anObject = Queue.Dequeue();
        }
    }
}
I have built a producer/consumer queue wrapping a .NET 4.0 ConcurrentQueue, with ManualResetEventSlim signaling between the producing (Enqueue) side and the consuming (while(true), thread-based) side.
The queue looks like this:
public class ProducerConsumerQueue<T> : IDisposable, IProducerConsumerQueue<T>
{
    private bool _IsActive = true;

    public int Count
    {
        get
        {
            return this._workerQueue.Count;
        }
    }

    public bool IsActive
    {
        get { return _IsActive; }
        set { _IsActive = value; }
    }

    public event Dequeued<T> OnDequeued = delegate { };
    public event LoggedHandler OnLogged = delegate { };

    private ConcurrentQueue<T> _workerQueue = new ConcurrentQueue<T>();
    private object _locker = new object();
    Thread[] _workers;

    #region IDisposable Members

    int _workerCount = 0;
    ManualResetEventSlim _mres = new ManualResetEventSlim();

    public void Dispose()
    {
        _IsActive = false;
        _mres.Set();
        LogWriter.Write("55555555555");
        for (int i = 0; i < _workerCount; i++)
        {
            // Wait for the consumer's thread to finish.
            _workers[i].Join();
        }
        LogWriter.Write("6666666666");
        // Release any OS resources.
    }

    public ProducerConsumerQueue(int workerCount)
    {
        try
        {
            _workerCount = workerCount;
            _workers = new Thread[workerCount];
            // Create and start a separate thread for each worker
            for (int i = 0; i < workerCount; i++)
                (_workers[i] = new Thread(Work)).Start();
        }
        catch (Exception ex)
        {
            OnLogged(ex.Message + ex.StackTrace);
        }
    }

    #endregion

    #region IProducerConsumerQueue<T> Members

    public void EnqueueTask(T task)
    {
        if (_IsActive)
        {
            _workerQueue.Enqueue(task);
            //Monitor.Pulse(_locker);
            _mres.Set();
        }
    }

    public void Work()
    {
        while (_IsActive)
        {
            try
            {
                T item = Dequeue();
                if (item != null)
                    OnDequeued(item);
            }
            catch (Exception ex)
            {
                OnLogged(ex.Message + ex.StackTrace);
            }
        }
    }

    #endregion

    private T Dequeue()
    {
        try
        {
            T dequeueItem;
            //if (_workerQueue.Count > 0)
            //{
            _workerQueue.TryDequeue(out dequeueItem);
            if (dequeueItem != null)
                return dequeueItem;
            //}
            if (_IsActive)
            {
                _mres.Wait();
                _mres.Reset();
            }
            //_workerQueue.TryDequeue(out dequeueItem);
            return dequeueItem;
        }
        catch (Exception ex)
        {
            OnLogged(ex.Message + ex.StackTrace);
            T dequeueItem;
            //if (_workerQueue.Count > 0)
            //{
            _workerQueue.TryDequeue(out dequeueItem);
            return dequeueItem;
        }
    }

    public void Clear()
    {
        _workerQueue = new ConcurrentQueue<T>();
    }
}
When calling Dispose, it sometimes blocks on the Join (with one consuming thread) and the Dispose method is stuck. I guess it gets stuck on the Wait of the reset event, but to handle that I call Set in Dispose.
Any suggestions?
Update: I understand your point about needing a queue internally. My suggestion to use a BlockingCollection<T> is based on the fact that your code contains a lot of logic to provide the blocking behavior. Writing such logic yourself is very prone to bugs (I know this from experience); so when there's an existing class within the framework that does at least some of the work for you, it's generally preferable to go with that.
A complete example of how you can implement this class using a BlockingCollection<T> is a little bit too large to include in this answer, so I've posted a working example on pastebin.com; feel free to take a look and see what you think.
I also wrote an example program demonstrating the above example here.
Is my code correct? I wouldn't say yes with too much confidence; after all, I haven't written unit tests, run any diagnostics on it, etc. It's just a basic draft to give you an idea how using BlockingCollection<T> instead of ConcurrentQueue<T> cleans up a lot of your logic (in my opinion) and makes it easier to focus on the main purpose of your class (consuming items from a queue and notifying subscribers) rather than a somewhat difficult aspect of its implementation (the blocking behavior of the internal queue).
Question posed in a comment:
Any reason you're not using BlockingCollection<T>?
Your answer:
[...] I needed a queue.
From the MSDN documentation on the default constructor for the BlockingCollection<T> class:
The default underlying collection is a ConcurrentQueue<T>.
If the only reason you opted to implement your own class instead of using BlockingCollection<T> is that you need a FIFO queue, well then... you might want to rethink your decision. A BlockingCollection<T> instantiated using the default parameterless constructor is a FIFO queue.
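A minimal sketch of that approach (the item type and handler are hypothetical):

// Default constructor: backed by a ConcurrentQueue<T>, i.e. FIFO order.
var queue = new BlockingCollection<WorkItem>();

// Consumer thread: blocks while empty, completes after CompleteAdding().
foreach (WorkItem item in queue.GetConsumingEnumerable())
{
    Process(item); // hypothetical handler
}

// Producer:
queue.Add(new WorkItem());

// Shutdown: lets the consumer loop drain the queue and exit cleanly.
queue.CompleteAdding();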
That said, while I don't think I can offer a comprehensive analysis of the code you've posted, I can at least offer a couple of pointers:
I'd be very hesitant to use events in the way that you are here for a class that deals with such tricky multithreaded behavior. Calling code can attach any event handlers it wants, and these can in turn throw exceptions (which you don't catch), block for long periods of time, or possibly even deadlock for reasons completely outside your control--which is very bad in the case of a blocking queue.
There's a race condition in your Dequeue and Dispose methods.
Look at these lines of your Dequeue method:
if (_IsActive)     // point A
{
    _mres.Wait();  // point C
    _mres.Reset(); // point D
}
And now take a look at these two lines from Dispose:
_IsActive = false;
_mres.Set(); // point B
Let's say you have three threads, T1, T2, and T3. T1 and T2 are both at point A, where each checks _IsActive and finds true. Then Dispose is called, and T3 sets _IsActive to false (but T1 and T2 have already passed point A) and then reaches point B, where it calls _mres.Set(). Then T1 gets to point C, moves on to point D, and calls _mres.Reset(). Now T2 reaches point C and will be stuck forever since _mres.Set will not be called again (any thread executing Enqueue will find _IsActive == false and return immediately, and the thread executing Dispose has already passed point B).
I'd be happy to try and offer some help on solving this race condition, but I'm skeptical that BlockingCollection<T> isn't in fact exactly the class you need for this. If you can provide some more information to convince me that this isn't the case, maybe I'll take another look.
Since _IsActive isn't marked volatile and there's no lock around all access to it, each core can have a separately cached copy of the value that may never be refreshed. So setting _IsActive to false in Dispose will not necessarily be observed by all running threads.
http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/
private volatile bool _IsActive=true;
I have a scenario where a command comes in over a socket and requires a fair amount of work. Only one thread can process the data at a time. The commands will come in faster than the thread can process them, so over time there will be quite a backlog.
The good part is that I can discard waiting threads and really only have to process the last one that is waiting (or process the first one in and discard all the others). I was thinking about using a semaphore to control the critical section of code and a boolean to see if there are any threads blocking; if there are, I would just discard the thread.
My mind is drawing a blank on how to implement it nicely; I would like to implement it without using an integer or boolean to see if a thread is already waiting.
I am coding this in C#.
You can use Monitor.TryEnter to see whether a lock is already taken on an object:
void ProcessConnection(TcpClient client)
{
    bool lockTaken = false;
    Monitor.TryEnter(lockObject, ref lockTaken);
    if (!lockTaken)
    {
        client.Close();
        return;
    }

    try
    {
        // long-running process here
    }
    finally
    {
        Monitor.Exit(lockObject);
        client.Close();
    }
}
Note that for this to work you'll still have to invoke the method in a thread, for example:
client = listener.AcceptTcpClient();
ThreadPool.QueueUserWorkItem(notused => ProcessConnection(client));
FYI, the lock statement is just sugar for:
Monitor.Enter(lockObject);
try
{
    // code within lock { }
}
finally
{
    Monitor.Exit(lockObject);
}
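As an aside, from C# 4.0 onward the compiler actually emits a slightly more robust expansion, roughly:

bool lockTaken = false;
try
{
    Monitor.Enter(lockObject, ref lockTaken);
    // code within lock { }
}
finally
{
    if (lockTaken) Monitor.Exit(lockObject);
}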
I believe you are looking for the lock statement.
private readonly object _lock = new object();

private void ProcessCommand(Command command)
{
    lock (_lock)
    {
        // ...
    }
}
It sounds like you just need to use the lock statement. Code inside a lock statement will allow only one thread to work inside the code block at once.
More info: lock Statement
From the sounds of what you've posted here, you might be able to avoid having so many waiting threads altogether. You could queue up the next command to execute, and rather than keeping threads waiting, just replace the command to execute next once the current command finishes. Lock when replacing and removing the "waiting" command.
Something like this:
class CommandHandler
{
    Action nextCommand;
    ManualResetEvent manualResetEvent = new ManualResetEvent(false);
    object lockObject = new object();

    public CommandHandler()
    {
        new Thread(ProcessCommands).Start();
    }

    public void AddCommand(Action nextCommandToProcess)
    {
        lock (lockObject)
        {
            nextCommand = nextCommandToProcess;
        }
        manualResetEvent.Set();
    }

    private void ProcessCommands()
    {
        while (true)
        {
            Action action = null;
            lock (lockObject)
            {
                action = nextCommand;
                nextCommand = null;
            }

            if (action != null)
            {
                action();
            }

            lock (lockObject)
            {
                if (nextCommand != null)
                    continue;
                manualResetEvent.Reset();
            }

            manualResetEvent.WaitOne();
        }
    }
}
check out: ManualResetEvent
It's a useful threading class.
I have scenarios where I need a main thread to wait until every one of a set of possibly more than 64 threads has completed its work, and for that I wrote the following helper utility (to avoid the 64-handle limit on WaitHandle.WaitAll()):
public static void WaitAll(WaitHandle[] handles)
{
    if (handles == null)
        throw new ArgumentNullException("handles",
            "WaitHandle[] handles was null");
    foreach (WaitHandle wh in handles) wh.WaitOne();
}
With this utility method, however, each wait handle is only examined after every preceding one in the array has been signalled, so the waiting is in effect sequential. It also will not work if the wait handles are AutoResetEvents, which clear as soon as a waiting thread has been released.
To fix this issue I am considering changing this code to the following, but would like others to check and see if it looks like it will work, or if anyone sees any issues with it, or can suggest a better way ...
Thanks in advance:
public static void WaitAllParallel(WaitHandle[] handles)
{
    if (handles == null)
        throw new ArgumentNullException("handles",
            "WaitHandle[] handles was null");

    int actThreadCount = handles.Length;
    object locker = new object();

    foreach (WaitHandle wh in handles)
    {
        WaitHandle qwH = wh;
        ThreadPool.QueueUserWorkItem(
            delegate
            {
                try { qwH.WaitOne(); }
                finally { lock (locker) --actThreadCount; }
            });
    }

    while (actThreadCount > 0) Thread.Sleep(80);
}
If you know how many threads you have, you can use an interlocked decrement. This is how I usually do it:
AutoResetEvent eventDone = new AutoResetEvent(false);
int totalCount = 128;

for (int i = 0; i < 128; i++)
{
    ThreadPool.QueueUserWorkItem(ThreadWorker); // pass state here if needed
}

void ThreadWorker(object state)
{
    try
    {
        // ... work and more work
    }
    finally
    {
        int runningCount = Interlocked.Decrement(ref totalCount);
        if (0 == runningCount)
        {
            // This is the last thread, notify the waiters
            eventDone.Set();
        }
    }
}
Actually, most of the time I don't even signal; instead I invoke a callback that continues the processing from where the waiter would have continued. Fewer blocked threads, more scalability.
I know this is different and may not apply to your case (e.g. it certainly won't work if some of those handles are not threads but I/O or events), but it may be worth thinking about.
I'm not sure what exactly you're trying to do, but would a CountdownEvent (.NET 4.0) conceptually solve your problem?
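For reference, a minimal sketch of how CountdownEvent covers this (the worker count is illustrative):

int workerCount = 100; // illustrative
var countdown = new CountdownEvent(workerCount);

for (int i = 0; i < workerCount; i++)
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        try { /* do work */ }
        finally { countdown.Signal(); } // one count per finished worker
    });
}

// Blocks until every worker has signalled; no 64-handle limit.
countdown.Wait();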
I'm not a C# or .NET programmer, but you could use a semaphore that is posted when one of your worker threads exits. The monitoring thread would simply wait on the semaphore n times where n is the number of worker threads. Semaphores are traditionally used to count resources in use but they can be used to count jobs completed by waiting on the same semaphore for n times.
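In .NET terms, that idea might look like this sketch using SemaphoreSlim (workerThreadCount is hypothetical):

var done = new SemaphoreSlim(0); // starts at 0; each Release() adds one

// Each worker thread "posts" the semaphore as it exits:
//     done.Release();

// The monitoring thread then waits n times, once per worker:
for (int i = 0; i < workerThreadCount; i++)
{
    done.Wait();
}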
When working with lots of simultaneous threads, I prefer to add each thread's ManagedThreadId into a Dictionary when I start the thread, and then have each thread invoke a callback routine that removes the dying thread's id from the Dictionary. The Dictionary's Count property tells you how many threads are active. Use the value side of the key/value pair to hold info that your UI thread can use to report status. Wrap the Dictionary with a lock to keep things safe.
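A rough sketch of that bookkeeping (member names are illustrative):

private readonly Dictionary<int, string> activeThreads = new Dictionary<int, string>();
private readonly object threadsLock = new object();

// Called as each worker thread starts.
void RegisterThread(string statusInfo)
{
    lock (threadsLock)
        activeThreads[Thread.CurrentThread.ManagedThreadId] = statusInfo;
}

// Callback invoked by each thread as it dies.
void UnregisterThread()
{
    lock (threadsLock)
        activeThreads.Remove(Thread.CurrentThread.ManagedThreadId);
}

// The UI thread can poll this to report how many threads are active.
int ActiveThreadCount
{
    get { lock (threadsLock) return activeThreads.Count; }
}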
ThreadPool.QueueUserWorkItem(o =>
{
    try
    {
        using (var h = (o as WaitHandle))
        {
            if (!h.WaitOne(100000))
            {
                // Alert main thread of the timeout
            }
        }
    }
    finally
    {
        Interlocked.Decrement(ref actThreadCount);
    }
}, wh);