Update a bool value being used by a thread without lock protection - C#

I want to know whether updating a boolean value that is being read by another thread is guaranteed to work without any lock protection.
Take the following case:
will there be any problem if Stop() changes the boolean member m_ThreadActive while threadproc() is running?
private bool m_ThreadActive = true;
public void threadproc()
{
while (m_ThreadActive)
{
...
}
}
public void Stop()
{
m_ThreadActive = false;
}

It is theoretically possible that the compiler could optimise the loop in such a way that m_ThreadActive is only read once, so the check always sees true.
To ensure that can't happen, use Volatile.Read():
while (Volatile.Read(ref m_ThreadActive))
If you don't have a version of .NET that supports Volatile.Read(), you could declare m_ThreadActive as volatile:
private volatile bool m_ThreadActive = true;
Or, better, use Thread.MemoryBarrier():
while (m_ThreadActive)
{
Thread.MemoryBarrier();
// ...
}
See my answer here for a program that demonstrates why volatile, Volatile.Read() or Thread.MemoryBarrier() is required for this kind of code to work correctly.
For more information on why the use of the volatile keyword can be a bit suspect, see this article from Eric Lippert.
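Putting the pieces together, here is a minimal sketch of the whole pattern using Volatile.Read/Volatile.Write (the Worker and ThreadProc names are just illustrative, not from the question):
using System;
using System.Threading;

class Worker
{
    private bool m_ThreadActive = true;

    public void ThreadProc()
    {
        // Volatile.Read guarantees each iteration sees the latest written value,
        // so the loop cannot spin forever on a stale cached 'true'.
        while (Volatile.Read(ref m_ThreadActive))
        {
            // ... do work ...
            Thread.Sleep(10);
        }
    }

    public void Stop()
    {
        // Volatile.Write publishes the change so the worker thread observes it.
        Volatile.Write(ref m_ThreadActive, false);
    }
}

class Program
{
    static void Main()
    {
        var worker = new Worker();
        var t = new Thread(worker.ThreadProc);
        t.Start();
        Thread.Sleep(100);   // let the worker run briefly
        worker.Stop();
        t.Join();
        Console.WriteLine("Worker stopped.");
    }
}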

Related

Nonblocking and low latency method to pass values between two main loop threads?

In my project there is an audio thread updating at about 86 fps and a graphics thread which runs at 60 fps. Both threads can produce and consume values from each other.
But it is not necessary to consume every value; only the latest one is important, and no notification is required, because the threads simply ask for a new value when they need one.
After reading tons of websites about threading I am a bit confused about what I really need, because my task is quite simple. With locks, my code would look like this:
private T aField; // memory location
private readonly object myLock = new object();

// other thread reads value
public void ReadValue(ref T val)
{
    lock (myLock) { val = aField; }
}

// this thread updates value
private void UpdateValue(T newVal)
{
    lock (myLock) { aField = newVal; }
}
My first question is: would this work for primitive types like float or int (<= 32 bits in size) without any lock, because the copy is a single assignment, which is atomic?
The next idea was protection with a pair of bool flags:
private T aField; // memory location
private volatile bool isReading;
private volatile bool isWriting;

// other thread reads value
public void ReadValue(ref T val)
{
    isReading = true;
    if (!isWriting) val = aField;
    isReading = false;
}

// this thread updates value
private void UpdateValue(T newVal)
{
    isWriting = true;
    if (!isReading) aField = newVal;
    isWriting = false;
}
It looks good to me, but I am pretty sure I missed something. I can think of a worst case: the faster thread reads while the slower thread wants to write; the fast thread will then read the old value again the next time, because no update was done.
What I also found was a nonblocking update method, but I wonder if and how it can help me:
static void LockFreeUpdate<T>(ref T field, Func<T, T> updateFunction)
    where T : class
{
    var spinWait = new SpinWait();
    while (true)
    {
        T snapshot1 = field;                 // take a snapshot of the current value
        T calc = updateFunction(snapshot1);  // compute the new value from the snapshot
        T snapshot2 = Interlocked.CompareExchange(ref field, calc, snapshot1);
        if (snapshot1 == snapshot2) return;  // nobody changed the field in the meantime
        spinWait.SpinOnce();                 // otherwise spin briefly and retry
    }
}
What is the most efficient method with the lowest latency?
For your case you do not need any locks; just add volatile to private T aField; to prevent any possible compiler optimizations. (Note that volatile can only be applied to reference types and to primitives of 32 bits or fewer, such as int, float and bool, so for a generic T the field effectively has to be a reference type.)
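To sketch what that can look like in practice (the LatestFloat and LatestSnapshot names are mine, purely illustrative): a volatile field covers the primitive case, and for larger payloads you can publish an immutable snapshot object and swap the reference atomically.
using System.Threading;

// Latest-value handoff for a primitive: a volatile field is enough, because
// reads and writes of float/int are atomic and volatile prevents stale caching.
class LatestFloat
{
    private volatile float _value;

    public void Write(float v) { _value = v; }   // called by the producing thread
    public float Read() { return _value; }       // called by the consuming thread
}

// For anything larger than 32 bits, publish an immutable snapshot object and
// swap the reference atomically; a reader always sees a complete snapshot.
class LatestSnapshot<T> where T : class
{
    private T _current;

    public void Write(T snapshot)
    {
        Interlocked.Exchange(ref _current, snapshot);
    }

    public T Read()
    {
        return Volatile.Read(ref _current);
    }
}
Readers always get either the old or the new snapshot, never a torn value, and neither side ever blocks.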

What is the C++ equivalent for AutoResetEvent under Linux?

The description of AutoResetEvent in MSDN
I'm trying to port a Thread Pool implemented in C# to C++ under Linux. I don't know which functions I should use that have similar behaviors to "AutoResetEvent".
An AutoResetEvent is most akin to a binary semaphore. People saying "conditional variables" aren't wrong per se, but condition variables are used in similar situations, rather than being similar objects. You can implement an (unnamed) AutoResetEvent on top of condition variables:
#include <pthread.h>
#include <stdio.h>
class AutoResetEvent
{
public:
explicit AutoResetEvent(bool initial = false);
~AutoResetEvent();
void Set();
void Reset();
bool WaitOne();
private:
AutoResetEvent(const AutoResetEvent&);
AutoResetEvent& operator=(const AutoResetEvent&); // non-copyable
bool flag_;
pthread_mutex_t protect_;
pthread_cond_t signal_;
};
AutoResetEvent::AutoResetEvent(bool initial)
: flag_(initial)
{
pthread_mutex_init(&protect_, NULL);
pthread_cond_init(&signal_, NULL);
}
void AutoResetEvent::Set()
{
pthread_mutex_lock(&protect_);
flag_ = true;
pthread_mutex_unlock(&protect_);
pthread_cond_signal(&signal_);
}
void AutoResetEvent::Reset()
{
pthread_mutex_lock(&protect_);
flag_ = false;
pthread_mutex_unlock(&protect_);
}
bool AutoResetEvent::WaitOne()
{
pthread_mutex_lock(&protect_);
while( !flag_ ) // prevent spurious wakeups from doing harm
pthread_cond_wait(&signal_, &protect_);
flag_ = false; // waiting resets the flag
pthread_mutex_unlock(&protect_);
return true;
}
AutoResetEvent::~AutoResetEvent()
{
pthread_mutex_destroy(&protect_);
pthread_cond_destroy(&signal_);
}
AutoResetEvent event;
void *otherthread(void *)
{
event.WaitOne();
printf("Hello from other thread!\n");
return NULL;
}
int main()
{
pthread_t h;
pthread_create(&h, NULL, &otherthread, NULL);
printf("Hello from the first thread\n");
event.Set();
pthread_join(h, NULL);
return 0;
}
If, however, you need named auto-reset events, you'll likely want to look at semaphores, and may have a slightly more difficult time translating your code. Either way, I would look carefully at the pthreads documentation for your platform; condition variables and auto-reset events are not the same and do not behave the same.
I'm pretty sure you're looking for condition variables. The accepted answer to this other SO question, Condition variables in C#, seems to confirm it.
See e.g. this tutorial for details on condition variables in POSIX threads.
Condition variables are NOT the equivalent of AutoResetEvent; they are the equivalent of Monitors. The difference is critical and may cause lost wakeups or hangs if not handled properly:
Imagine two threads A and B in a C# program. A calls WaitOne() and B calls Set(). If B executes Set() before A reaches the call to WaitOne(), there is no problem, because the signal sent to the AutoResetEvent by Set() is persistent and will remain set until a WaitOne() is executed.
Now in C, imagine two threads C and D, where C calls pthread_cond_wait() and D calls pthread_cond_signal(). If C is already waiting when D signals, everything is OK. If C did not manage to reach the wait before D signals, C will block forever: the signal is lost when nobody is waiting on it, and the condition variable itself has no "set" state.
Be very careful about this.
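To illustrate the C# side of that with a small sketch: calling Set() before any thread waits does not lose the signal; the next WaitOne() returns immediately.
using System;
using System.Threading;

class Program
{
    static readonly AutoResetEvent Signal = new AutoResetEvent(false);

    static void Main()
    {
        // Set the event before anyone is waiting: the signal is remembered.
        Signal.Set();

        var t = new Thread(() =>
        {
            Signal.WaitOne();   // returns immediately, consuming the stored signal
            Console.WriteLine("Woke up without missing the signal.");
        });
        t.Start();
        t.Join();
    }
}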
You can easily re-implement Win32 API Event objects using POSIX mutexes and condition variables.
However some of the comments above make me state this:
A condition variable is not analogous to an Event object. It is fundamentally different in that it has no memory or state: if no thread is blocked on the condition variable at the time you call pthread_cond_signal or pthread_cond_broadcast, nothing happens, and a thread that later blocks via pthread_cond_wait will simply block.
I'll try to sketch a quick auto-reset event implementation:
#include <mutex>
#include <condition_variable>

class event
{
public:
event(): signalled_ (false) {}
void signal ()
{
std::unique_lock<std::mutex> lock(mutex_);
signalled_ = true;
cond_.notify_one ();
}
void wait ()
{
std::unique_lock<std::mutex> lock(mutex_);
while (!signalled_)
cond_.wait (lock);
signalled_ = false;
}
protected:
std::mutex mutex_;
std::condition_variable cond_;
bool signalled_;
};
The example from Boost's Thread/Condition documentation is pretty similar to the normal ManualResetEvent and AutoResetEvent usage:
http://www.boost.org/doc/libs/1_53_0/doc/html/thread/synchronization.html#thread.synchronization.condvar_ref
(I've made some small edits for clarity)
boost::condition_variable cond;
boost::mutex mut;
bool data_ready;
void wait_for_data_to_process()
{
boost::unique_lock<boost::mutex> lock(mut);
while(!data_ready)
{
cond.wait(lock);
}
}
void prepare_data_for_processing()
{
{ //scope for lock_guard
boost::lock_guard<boost::mutex> lock(mut);
data_ready=true;
}
cond.notify_one();
}
Note that conditions provide the wait/notify mechanism of AutoResetEvent and ManualResetEvent but require a mutex to work.
Well, odds are it's most like a mutex -- you have a number of callers going for a shared resource, but only one is allowed in. In the mutex case, callers would try to get the mutex (e.g. pthread_mutex_lock), do their thing, then release (pthread_mutex_unlock) so that some other caller can then get in.
I know this may be a little late to the party and I have no information about the performance differences, but it might be a viable alternative to use a combination of pthread_kill and sigwait, like so:
Declare the following, where appropriate:
int sigin;
sigset_t sigset;
Initialize the previous variables in the following way:
sigemptyset(&sigset);
sigaddset(&sigset, SIGUSR1);
pthread_sigmask(SIG_BLOCK, &sigset, NULL);
In the waiting thread, call sigwait:
sigwait(&sigset, &sigin);
Then, on the thread that is supposed to wake the waiting thread, you could do this:
pthread_kill(p_handle, SIGUSR1);
where p_handle is the handle to the thread you wish to unblock.
This example blocks the waiting thread until SIGUSR1 is delivered. The signal only reaches that specific thread because of using pthread_kill.

Boolean Property Getter and Setter Locking

Is there any reason why you would create locks around the getter and setter of a boolean property like this?
private readonly object _lockObject = new object();
private bool _myFlag;
public bool MyFlag
{
get
{
lock (_lockObject)
{
return _myFlag;
}
}
set
{
lock (_lockObject)
{
_myFlag = value;
}
}
}
Well, you don't need locks necessarily - but if you want one thread to definitely read the value that another thread has written, you either need locks or a volatile variable.
I've personally given up trying to understand the precise meaning of volatile. I try to avoid writing my own lock-free code, instead relying on experts who really understand the memory model.
EDIT: As an example of the kind of problem this can cause, consider this code:
using System;
using System.Threading;
public class Test
{
private static bool stop = false;
private static bool Stop
{
get { return stop; }
set { stop = value; }
}
private static void Main()
{
Thread t = new Thread(DoWork);
t.Start();
Thread.Sleep(1000); // Let it get started
Console.WriteLine("Setting stop flag");
Stop = true;
Console.WriteLine("Set");
t.Join();
}
private static void DoWork()
{
Console.WriteLine("Tight looping...");
while (!Stop)
{
}
Console.WriteLine("Done.");
}
}
That program may or may not terminate. I've seen both happen. There's no guarantee that the "reading" thread will actually read from main memory - it can put the initial value of stop into a register and just keep using that forever. I've seen that happen, in reality. It doesn't happen on my current machines, but it may do on my next.
Putting locks within the property getter/setter as per the code in the question would make this code correct and its behaviour predictable.
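Alternatively, the same visibility guarantee can be had without a full lock by routing the reads and writes through Volatile.Read/Volatile.Write (a minimal sketch based on the Stop property in the example above):
private static bool stop = false;

private static bool Stop
{
    get { return Volatile.Read(ref stop); }    // always observes the latest write
    set { Volatile.Write(ref stop, value); }   // publishes the write to other threads
}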
For more on this, see this blog post by Eric Lippert.
Reads and writes of bool are atomic.
However, the name "flag" indicates that separate threads will be reading and writing it until some condition occurs. To avoid unexpected behavior due to optimization, you should consider adding the volatile keyword to your bool declaration.
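For example, applied to the field from the question, that would look like:
private volatile bool _myFlag;

public bool MyFlag
{
    get { return _myFlag; }
    set { _myFlag = value; }
}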
There's no reason to have a lock right there.
Taking a lock may well be appropriate in your design, but it's very doubtful that this is the right granularity.
You need to make your design thread-safe, not individual properties (or even entire objects).

Is the following C# code thread safe?

I am trying to learn threading in C#. Today I saw the following code at http://www.albahari.com/threading/:
class ThreadTest
{
bool done;
static void Main()
{
ThreadTest tt = new ThreadTest(); // Create a common instance
new Thread (tt.Go).Start();
tt.Go();
}
// Note that Go is now an instance method
void Go()
{
if (!done) { done = true; Console.WriteLine ("Done"); }
}
}
In Java, unless you declare "done" as volatile, the code will not be safe. How does the C# memory model handle this?
Thanks, all, for the answers. Much appreciated.
Well, there's the clear race condition that they could both see done as false and execute the if body - that's true regardless of memory model. Making done volatile won't fix that, and it wouldn't fix it in Java either.
But yes, it's feasible that the change made in one thread could happen but not be visible until in the other thread. It depends on CPU architecture etc. As an example of what I mean, consider this program:
using System;
using System.Threading;
class Test
{
private bool stop = false;
static void Main()
{
new Test().Start();
}
void Start()
{
new Thread(ThreadJob).Start();
Thread.Sleep(500);
stop = true;
}
void ThreadJob()
{
int x = 0;
while (!stop)
{
x++;
}
Console.WriteLine("Counted to {0}", x);
}
}
While on my current laptop this does terminate, I've used other machines where pretty much the exact same code would run forever - it would never "see" the change to stop in the second thread.
Basically, I try to avoid writing lock-free code unless it's using higher-level abstractions provided by people who really know their stuff - like the Parallel Extensions in .NET 4.
There is a way to make this code lock-free and correct easily though, using Interlocked. For example:
class ThreadTest
{
int done;
static void Main()
{
ThreadTest tt = new ThreadTest(); // Create a common instance
new Thread (tt.Go).Start();
tt.Go();
}
// Note that Go is now an instance method
void Go()
{
if (Interlocked.CompareExchange(ref done, 1, 0) == 0)
{
Console.WriteLine("Done");
}
}
}
Here the change of value and the testing of it are performed as a single unit: CompareExchange will only set the value to 1 if it's currently 0, and will return the old value. So only a single thread will ever see a return value of 0.
Another thing to bear in mind: your question is fairly ambiguous, as you haven't defined what you mean by "thread safe". I've guessed at your intention, but you never made it clear. Read this blog post by Eric Lippert - it's well worth it.
No, it's not thread safe. You could potentially have one thread check the condition (if(!done)), the other thread check that same condition, and then the first thread executes the first line in the code block (done = true).
You can make it thread safe with a lock:
lock(this)
{
if(!done)
{
done = true;
Console.WriteLine("Done");
}
}
Even in Java with volatile, both threads could enter the block with the WriteLine.
If you want mutual exclusion you need to use a real synchronisation object such as a lock.
The only way this is thread safe is to use an atomic compare-and-set in the if test (shown here in Java-style pseudocode; the C# equivalent is Interlocked.CompareExchange, as shown above):
if(atomicBool.compareAndSet(false,true)){
Console.WriteLine("Done");
}
You should do something like this:
class ThreadTest{
Object myLock = new Object();
...
void Go(){
lock(myLock){
if(!done)
{
done = true;
Console.WriteLine("Done");
}
}
}
}
The reason you want to use a dedicated lock object, rather than "this", is that any other code that can see your object can also lock on it, which can lead to contention or deadlocks you cannot control.
Another small thing you might consider, as a "good practices" point rather than anything severe:
class ThreadTest{
Object myLock = new Object();
...
void Go(){
bool justSet = false;
lock(myLock){
if(!done)
{
done = true;
justSet = true;
}
}
// This line of code does not belong inside the lock.
if (justSet) Console.WriteLine("Done");
}
}
Never have code inside a lock that does not need to be inside a lock: it increases the time the lock is held. If you have lots of threads, you can gain a lot of performance by removing all this unnecessary waiting.
Hope it helps :)

Interlocked used to increment/mimick a boolean, is this safe?

I'm just wondering whether this code, written by a fellow developer who has since left, is OK; I think he wanted to avoid taking a lock. Is there a performance difference between this and just using a straightforward lock?
private long m_LayoutSuspended = 0;
public void SuspendLayout()
{
Interlocked.Exchange(ref m_LayoutSuspended, 1);
}
public void ResumeLayout()
{
Interlocked.Exchange(ref m_LayoutSuspended, 0);
}
public bool IsLayoutSuspended
{
get { return Interlocked.Read(ref m_LayoutSuspended) == 1; }
}
I was thinking that something like that would be easier with a lock. It will indeed be used by multiple threads, hence the decision to use locking/Interlocked.
Yes, what you are doing is safe from a race point of view as far as access to the m_LayoutSuspended field is concerned. However, a lock is still required if the code does something like the following:
if (!o.IsLayoutSuspended) // this check-then-act sequence is not thread safe ...
{
o.SuspendLayout(); // ... because another thread can change the state between the check and this write, so a race can occur.
...
o.ResumeLayout();
}
A safer way, using CompareExchange to make sure no race condition occurs:
private long m_LayoutSuspended = 0;
public bool SuspendLayout()
{
return Interlocked.CompareExchange(ref m_LayoutSuspended, 1, 0) == 0;
}
if (o.SuspendLayout())
{
....
o.ResumeLayout();
}
Or better yet simply use a lock.
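For example, a lock-based sketch of the same API, with the field as a plain bool (the m_LayoutLock name is just illustrative):
private readonly object m_LayoutLock = new object();
private bool m_LayoutSuspended = false;

public void SuspendLayout()
{
    lock (m_LayoutLock) { m_LayoutSuspended = true; }
}

public void ResumeLayout()
{
    lock (m_LayoutLock) { m_LayoutSuspended = false; }
}

public bool IsLayoutSuspended
{
    get { lock (m_LayoutLock) { return m_LayoutSuspended; } }
}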
Personally I'd use a volatile Boolean:
private volatile bool m_LayoutSuspended = false;
public void SuspendLayout()
{
m_LayoutSuspended = true;
}
public void ResumeLayout()
{
m_LayoutSuspended = false;
}
public bool IsLayoutSuspended
{
get { return m_LayoutSuspended; }
}
Then again, as I've recently acknowledged elsewhere, volatile doesn't mean quite what I thought it did. I suspect this is okay though :)
Even if you stick with Interlocked, I'd change it to an int... there's no need to make 32 bit systems potentially struggle to make a 64 bit write atomic when they can do it easily with 32 bits...
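A sketch of the int-based version of the original code (note that Interlocked.Read only has a long overload, so the getter below uses Volatile.Read; that choice is mine, not part of the original):
private int m_LayoutSuspended = 0;

public void SuspendLayout()
{
    Interlocked.Exchange(ref m_LayoutSuspended, 1);
}

public void ResumeLayout()
{
    Interlocked.Exchange(ref m_LayoutSuspended, 0);
}

public bool IsLayoutSuspended
{
    // 32-bit reads are atomic; Volatile.Read prevents the value from being cached.
    get { return Volatile.Read(ref m_LayoutSuspended) == 1; }
}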
