The description of AutoResetEvent in MSDN
I'm trying to port a thread pool implemented in C# to C++ under Linux. I'm not sure which functions provide behavior similar to "AutoResetEvent".
An AutoResetEvent is most akin to a binary semaphore. People saying "conditional variables" aren't wrong per se, but condition variables are used in similar situations, rather than being similar objects. You can implement an (unnamed) AutoResetEvent on top of condition variables:
#include <pthread.h>
#include <stdio.h>
class AutoResetEvent
{
public:
explicit AutoResetEvent(bool initial = false);
~AutoResetEvent();
void Set();
void Reset();
bool WaitOne();
private:
AutoResetEvent(const AutoResetEvent&);
AutoResetEvent& operator=(const AutoResetEvent&); // non-copyable
bool flag_;
pthread_mutex_t protect_;
pthread_cond_t signal_;
};
AutoResetEvent::AutoResetEvent(bool initial)
: flag_(initial)
{
pthread_mutex_init(&protect_, NULL);
pthread_cond_init(&signal_, NULL);
}
void AutoResetEvent::Set()
{
pthread_mutex_lock(&protect_);
flag_ = true;
pthread_mutex_unlock(&protect_);
pthread_cond_signal(&signal_);
}
void AutoResetEvent::Reset()
{
pthread_mutex_lock(&protect_);
flag_ = false;
pthread_mutex_unlock(&protect_);
}
bool AutoResetEvent::WaitOne()
{
pthread_mutex_lock(&protect_);
while( !flag_ ) // prevent spurious wakeups from doing harm
pthread_cond_wait(&signal_, &protect_);
flag_ = false; // waiting resets the flag
pthread_mutex_unlock(&protect_);
return true;
}
AutoResetEvent::~AutoResetEvent()
{
pthread_mutex_destroy(&protect_);
pthread_cond_destroy(&signal_);
}
AutoResetEvent event;
void *otherthread(void *)
{
event.WaitOne();
printf("Hello from other thread!\n");
return NULL;
}
int main()
{
pthread_t h;
pthread_create(&h, NULL, &otherthread, NULL);
printf("Hello from the first thread\n");
event.Set();
pthread_join(h, NULL);
return 0;
}
If, however, you need named auto-reset events, you'll likely want to look at semaphores, and you may have a slightly harder time translating your code. Either way, look carefully at the pthreads documentation for your platform: condition variables and auto-reset events are not the same and do not behave the same.
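For the named case, a rough sketch using a POSIX named semaphore might look like the following (the name "/my_event" and the single-process layout are illustrative only; note that a semaphore is not an exact match, since repeated posts accumulate a count rather than simply staying "set"):
#include <semaphore.h>
#include <fcntl.h>
#include <cstdio>
int main()
{
    // Create (or open) a named semaphore with an initial count of 0 ("non-signalled").
    sem_t *ev = sem_open("/my_event", O_CREAT, 0644, 0);
    if (ev == SEM_FAILED) { std::perror("sem_open"); return 1; }
    sem_post(ev);              // roughly corresponds to Set(): release one waiter
    sem_wait(ev);              // roughly corresponds to WaitOne(): blocks until a post is available
    sem_close(ev);
    sem_unlink("/my_event");   // remove the name when it is no longer needed
    return 0;
}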
I'm pretty sure you're looking for condition variables. The accepted answer to this other SO question, Condition variables in C#, seems to confirm it.
See e.g. this tutorial for details on condition variables in POSIX threads.
Condition variables are NOT the equivalent of AutoResetEvent. They are the equivalent of Monitors. The difference is critical and may cause deadlocks if not handled properly:
Imagine two threads A and B in a C# program. A calls WaitOne() and B calls Set(). If B executes Set() before A reaches the call to WaitOne(), there is no problem, because the signal sent to the AutoResetEvent by Set() is persistent: it remains set until a WaitOne() consumes it.
Now with a raw condition variable, imagine two threads C and D. C calls wait(), D calls notify(). If C is already waiting when D calls notify(), everything is fine. If C does not reach wait() before D calls notify(), C blocks forever, because the notification is lost when nobody is waiting and the condition variable itself holds no "set" state.
Be very careful about this.
You can easily re-implement Win32 API Event objects using POSIX mutexes and condition variables.
However some of the comments above make me state this:
A condition variable is not analogous to an Event object. Fundamentally, a condition variable has no memory or state of its own: if no thread is blocked on it when you call pthread_cond_signal or pthread_cond_broadcast, nothing happens, and a thread that later blocks via pthread_cond_wait will stay blocked.
I'll try to sketch a quick auto-reset event implementation:
#include <mutex>
#include <condition_variable>
class event
{
public:
event(): signalled_ (false) {}
void signal ()
{
std::unique_lock<std::mutex> lock(mutex_);
signalled_ = true;
cond_.notify_one ();
}
void wait ()
{
std::unique_lock<std::mutex> lock(mutex_);
while (!signalled_)
cond_.wait (lock);
signalled_ = false;
}
protected:
std::mutex mutex_;
std::condition_variable cond_;
bool signalled_;
};
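For completeness, here is a minimal usage sketch for the class above (assuming a C++11 compiler; the names are purely illustrative):
#include <thread>
#include <cstdio>
// 'event' is the auto-reset event class sketched above
event go;
int main()
{
    std::thread worker([] {
        go.wait();   // blocks until signalled, then automatically resets
        std::printf("Hello from the worker thread\n");
    });
    go.signal();     // release exactly one waiter; the state persists if nobody waits yet
    worker.join();
    return 0;
}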
The example from Boost's Thread/Condition documentation is pretty similar to the normal ManualResetEvent and AutoResetEvent usage:
http://www.boost.org/doc/libs/1_53_0/doc/html/thread/synchronization.html#thread.synchronization.condvar_ref
(I've made some small edits for clarity)
boost::condition_variable cond;
boost::mutex mut;
bool data_ready;
void wait_for_data_to_process()
{
boost::unique_lock<boost::mutex> lock(mut);
while(!data_ready)
{
cond.wait(lock);
}
}
void prepare_data_for_processing()
{
{ //scope for lock_guard
boost::lock_guard<boost::mutex> lock(mut);
data_ready=true;
}
cond.notify_one();
}
Note that conditions provide the wait/notify mechanism of AutoResetEvent and ManualResetEvent but require a mutex to work.
Well, odds are it's most like a mutex -- you have a number of callers going for a shared resource, but only one is allowed in. In the mutex case, callers would try to get the mutex (e.g. pthread_mutex_lock), do their thing, then release (pthread_mutex_unlock) so that some other caller can then get in.
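For illustration, the lock / do-your-thing / release pattern described above looks roughly like this (the shared counter is just a stand-in for whatever the callers share):
#include <pthread.h>
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;            // stand-in for the shared resource
void *caller(void *)
{
    pthread_mutex_lock(&mtx);      // only one caller is allowed in at a time
    ++shared_counter;              // do our thing with the shared resource
    pthread_mutex_unlock(&mtx);    // let the next caller in
    return NULL;
}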
I know this may be a little late to the party, and I have no information about the performance differences, but a viable alternative might be to use a combination of pthread_kill and sigwait, like so:
Declare the following, where appropriate:
int sigin;
sigset_t sigset;
Initialize these variables as follows:
sigemptyset(&sigset);
sigaddset(&sigset, SIGUSR1);
pthread_sigmask(SIG_BLOCK, &sigset, NULL);
In the waiting thread, call sigwait:
sigwait(&sigset, &sigin);
Then, on the thread that is supposed to wake the waiting thread, you could do this:
pthread_kill(p_handle, SIGUSR1);
where p_handle is the handle to the thread you wish to unblock.
This example blocks the waiting thread until SIGUSR1 is delivered. Because pthread_kill targets a specific thread, only that thread is woken.
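Pulled together, a minimal sketch of this approach might look like the following (error handling omitted; SIGUSR1 and the thread layout follow the snippets above). Note that the signal is blocked before the thread is created, so the new thread inherits the mask and the signal stays pending rather than being lost:
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
static sigset_t sigset;
static void *waiting_thread(void *)
{
    int sigin;
    sigwait(&sigset, &sigin);                    // blocks until SIGUSR1 is delivered to this thread
    printf("Woken by signal %d\n", sigin);
    return NULL;
}
int main()
{
    sigemptyset(&sigset);
    sigaddset(&sigset, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &sigset, NULL);   // block before creating threads so they inherit the mask
    pthread_t p_handle;
    pthread_create(&p_handle, NULL, &waiting_thread, NULL);
    pthread_kill(p_handle, SIGUSR1);             // wake that specific waiting thread
    pthread_join(p_handle, NULL);
    return 0;
}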
I want to know whether updating a boolean value that is being read by another thread is guaranteed to work, without any lock protection.
For example, in the following case:
will there be any problem with Stop() changing the boolean member m_ThreadActive while threadproc is running?
private bool m_ThreadActive = true;
public void threadproc()
{
while (m_ThreadActive)
{
...
}
}
public void Stop()
{
m_ThreadActive = false;
}
It is theoretically possible that the compiler could optimise the loop in such a way that the loop variable always remains true.
To ensure that can't happen, use a Volatile.Read():
while (Volatile.Read(ref m_ThreadActive))
If you don't have a version of .Net which supports Volatile.Read() you could declare m_ThreadActive as volatile:
private volatile bool m_ThreadActive = true;
Or, better, use Thread.MemoryBarrier():
while (m_ThreadActive)
{
Thread.MemoryBarrier();
// ...
}
See my answer here for a program that demonstrates a requirement for volatile, Volatile.Read() or Thread.MemoryBarrier() for it to work correctly.
For more information on why the use of the volatile keyword can be a bit suspect, see this article from Eric Lippert.
When events trigger, they use threads from the thread pool. So if you have a bunch of events that trigger faster than they return, you drain your thread pool. So whenever you have an event handler method that has no other control limiting the rate of threads entering, has no guarantee of returning quickly, and isn't painstakingly implemented as 100% thread-safe code, it's probably best to implement some thread control. The obvious simple thing would be to lock() inside the event handling method, but if you do that, all the threads after the first one will block in a queue, waiting to enter the lock region, hogging all your thread-pool threads. It is probably better to detect that another thread is inside this method, and quickly abort instead.
The question is: I have a way of detecting another thread already running, and quickly aborting the subsequent threads. But it doesn't seem very C#-ish due to the use of "const" and manually handling a locking flag at a low level. Is there a better way?
This is basically a direct replication of the lock() functionality, but using a non-blocking Interlocked.Exchange, instead of using the blocking Monitor.Enter()
public class FooGoo
{
private const int LOCKED = 0; // could use any arbitrary value; I choose 0
private const int UNLOCKED = LOCKED + 1; // any arbitrary value, != LOCKED
private static int _myLock = UNLOCKED;
void myEventHandler()
{
int previousValue = Interlocked.Exchange(ref _myLock, LOCKED);
if ( previousValue == UNLOCKED )
{
try
{
// some handling code, which may or may not return quickly
// maybe not threadsafe
}
finally
{
_myLock = UNLOCKED;
}
}
else
{
// another thread is executing right now. So I will abort.
//
// optional and environment-specific, maybe you want to
// queue some event information or set a flag or something,
// so you remember later that this thread aborted
}
}
}
So far, this is the best answer I have found. Does there exist any shorthand equivalent of a non-blocking lock() to shorten this up?
static object _myLock = new object();
void myMethod ()
{
if ( Monitor.TryEnter(_myLock) )
{
try
{
// Do stuff
}
finally
{
Monitor.Exit(_myLock);
}
}
else
{
// then I failed to get the lock. Optionally do stuff.
}
}
Is there any reason why you would create locks around the getter and setter of a boolean property like this?
private object _lockObject = new object();
private bool _myFlag;
public bool MyFlag
{
get
{
lock (_lockObject)
{
return _myFlag;
}
}
set
{
lock (_lockObject)
{
_myFlag = value;
}
}
}
Well, you don't need locks necessarily - but if you want one thread to definitely read the value that another thread has written, you either need locks or a volatile variable.
I've personally given up trying to understand the precise meaning of volatile. I try to avoid writing my own lock-free code, instead relying on experts who really understand the memory model.
EDIT: As an example of the kind of problem this can cause, consider this code:
using System;
using System.Threading;
public class Test
{
private static bool stop = false;
private static bool Stop
{
get { return stop; }
set { stop = value; }
}
private static void Main()
{
Thread t = new Thread(DoWork);
t.Start();
Thread.Sleep(1000); // Let it get started
Console.WriteLine("Setting stop flag");
Stop = true;
Console.WriteLine("Set");
t.Join();
}
private static void DoWork()
{
Console.WriteLine("Tight looping...");
while (!Stop)
{
}
Console.WriteLine("Done.");
}
}
That program may or may not terminate. I've seen both happen. There's no guarantee that the "reading" thread will actually read from main memory - it can put the initial value of stop into a register and just keep using that forever. I've seen that happen, in reality. It doesn't happen on my current machines, but it may do on my next.
Putting locks within the property getter/setter as per the code in the question would make this code correct and its behaviour predictable.
For more on this, see this blog post by Eric Lippert.
Reads and writes of bool are atomic.
However, the name "flag" indicates that separate threads will be reading/writing it until some condition occurs. To avoid unexpected behavior due to optimization, you should consider adding the volatile keyword to your bool declaration.
There's no reason to have a lock right there.
Taking a lock may well be appropriate in your design, but it's very doubtful that this is the right granularity.
You need to make your design thread-safe, not individual properties (or even entire objects).
I am trying to learn threading in C#. Today I saw the following code at http://www.albahari.com/threading/:
class ThreadTest
{
bool done;
static void Main()
{
ThreadTest tt = new ThreadTest(); // Create a common instance
new Thread (tt.Go).Start();
tt.Go();
}
// Note that Go is now an instance method
void Go()
{
if (!done) { done = true; Console.WriteLine ("Done"); }
}
}
In Java, unless you declare done as volatile, the code will not be safe. How does the C# memory model handle this?
Thanks, everyone, for the answers. Much appreciated.
Well, there's the clear race condition that they could both see done as false and execute the if body - that's true regardless of memory model. Making done volatile won't fix that, and it wouldn't fix it in Java either.
But yes, it's feasible that the change made in one thread could happen but not be visible until in the other thread. It depends on CPU architecture etc. As an example of what I mean, consider this program:
using System;
using System.Threading;
class Test
{
private bool stop = false;
static void Main()
{
new Test().Start();
}
void Start()
{
new Thread(ThreadJob).Start();
Thread.Sleep(500);
stop = true;
}
void ThreadJob()
{
int x = 0;
while (!stop)
{
x++;
}
Console.WriteLine("Counted to {0}", x);
}
}
While on my current laptop this does terminate, I've used other machines where pretty much the exact same code would run forever - it would never "see" the change to stop in the second thread.
Basically, I try to avoid writing lock-free code unless it's using higher-level abstractions provided by people who really know their stuff - like the Parallel Extensions in .NET 4.
There is a way to make this code lock-free and correct easily though, using Interlocked. For example:
class ThreadTest
{
int done;
static void Main()
{
ThreadTest tt = new ThreadTest(); // Create a common instance
new Thread (tt.Go).Start();
tt.Go();
}
// Note that Go is now an instance method
void Go()
{
if (Interlocked.CompareExchange(ref done, 1, 0) == 0)
{
Console.WriteLine("Done");
}
}
}
Here the change of value and the testing of it are performed as a single unit: CompareExchange will only set the value to 1 if it's currently 0, and will return the old value. So only a single thread will ever see a return value of 0.
Another thing to bear in mind: your question is fairly ambiguous, as you haven't defined what you mean by "thread safe". I've guessed at your intention, but you never made it clear. Read this blog post by Eric Lippert - it's well worth it.
No, it's not thread safe. You could potentially have one thread check the condition (if(!done)), the other thread check that same condition, and then the first thread executes the first line in the code block (done = true).
You can make it thread safe with a lock:
lock(this)
{
if(!done)
{
done = true;
Console.WriteLine("Done");
}
}
Even in Java with volatile, both threads could enter the block with the WriteLine.
If you want mutual exclusion you need to use a real synchronisation object such as a lock.
The only way this is thread safe without a lock is to use an atomic compare-and-set in the if test (written here Java-style with an AtomicBoolean; the C# equivalent is Interlocked.CompareExchange, as shown in another answer above):
if(atomicBool.compareAndSet(false,true)){
Console.WriteLine("Done");
}
You should do something like this:
class ThreadTest{
Object myLock = new Object();
...
void Go(){
lock(myLock){
if(!done)
{
done = true;
Console.WriteLine("Done");
}
}
}
}
The reason to lock on a dedicated object rather than on "this" is that other code holding a reference to your object can also lock on it, which can lead to unexpected contention or deadlocks; a private lock object keeps the lock entirely under your control.
Another small thing you might consider is this. It is a "good practices" thing, so nothing severe.
class ThreadTest{
Object myLock = new Object();
...
void Go(){
lock(myLock){
if(!done)
{
done = true;
}
}
//This line of code does not belong inside the lock.
Console.WriteLine("Done");
}
}
Never put code inside a lock that does not need to be there; holding the lock longer than necessary makes other threads wait. If you have lots of threads, you can gain a lot of performance by removing that unnecessary waiting.
Hope it helps :)
I want to know the proper way to start and stop a threaded job, both forced and unforced. Is this the proper way to stop a thread?
public class ProcessDataJob : IJob
{
private ConcurrentQueue<byte[]> _dataQueue = new ConcurrentQueue<byte[]>();
private volatile bool _stop = false;
private volatile bool _forceStop = false;
private Thread _thread;
private int _timeOut = 1000;
public void Start()
{
_stop = false;
_forceStop = false;
_thread = new Thread(ProcessData);
_thread.Start();
}
private void ProcessData()
{
while (!_stop || _dataQueue.Count > 0)
{
if(_forceStop) return;
byte[] data;
if(_dataQueue.TryDequeue(out data))
{
//Process data
//.....//
}
}
}
public void Stop(bool force)
{
_stop = true;
_forceStop = force;
_thread.Join(_timeOut);
}
public void Enqueue(byte[] data)
{
_dataQueue.Enqueue(data);
}
}
There is no proper way to forcibly kill a thread.
There are several ways to do it, but none of them are proper.
Forcibly killing a thread is what you should do only if you need to terminate the program, or unload the appdomain containing the thread, and don't care about any data structures left dangling in a corrupted/bad/locked state, because they will be gone in a short while as well.
There's plenty of advice on the internet about how bad/evil Thread.Abort is, so don't do it.
Instead, write proper cooperative threading. The threads should themselves check a flag (an event, a volatile bool field, etc.) and voluntarily exit when nicely asked to do so.
That is the proper way.
This is the way I've done it in the past, with one difference. I've had threads hang unexpectedly, which means that a loop like yours never comes back with an answer. To address this, I have all thread-using classes like yours register with a "manager" class that is responsible for coordinating things like a forced stop. The threaded class holds a reference to the manager, and when a forced stop is requested it calls a method on the manager that effectively starts a timer. If the timer fires before the threaded class has set its state flag to STOPPED, the manager calls abort on it.
The key thing for me was not just calling "stop" but confirming that the stop had actually occurred, and accepting that this takes a non-deterministic amount of time: after a reasonable wait, give up and move on.
You could use the .NET ThreadPool class; that way you don't have to manage the thread yourself.