In C++ I was taught to use the volatile keyword for a variable (myVar) that is used from different threads, even under a critical section. But for C# I read this strange phrase in MSDN:
"The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access."
Does this phrase mean that if I'm under a lock then I do not need to use the volatile keyword? If yes, then one more question: must I perhaps lock on exactly this variable (myVar)?
object a = new object();
double i, k;

void Thread1()
{
    lock (a)
    {
        i++; // using variable i
        k++; // using variable k
    }
}
Thread2 does the same.
Is it safe that i and k are not volatile, or must I do it like this?
lock (i)
{
    i++; // using variable i
}
lock (k)
{
    k++; // using variable k
}
In C++ I was taught to use the volatile keyword for a variable (myVar) that is used from different threads, even under a critical section
Whoever taught you this was not telling you the whole story. Volatile in C++ makes no guarantee that reads or writes have acquire or release semantics! All volatile guarantees is that the compiler will not generate code that elides reads or performs reads and writes out of order. Volatile alone is not enough to ensure correct multithreaded semantics unless your compiler makes some additional claim about what "volatile" means to it.
"The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access." Does this phrase mean that if I'm under a lock then I do not need to use the volatile keyword?
Correct. In C#, volatile does introduce acquire and release semantics by inserting the appropriate half fence. Since a lock introduces a full fence, volatile is unnecessary when reading a field in a lock.
must I perhaps lock on exactly this variable (myVar)?
All this code is so completely broken and wrong that it is impossible to answer the question. ++ is dangerous on doubles, making doubles volatile is not even legal in C#, and you can't lock on value types.
In standard C++ volatile has nothing to do with threads, although apparently Microsoft's compiler gives it some special meaning. For things like counters, use std::atomic<int>; no need for separate locks.
Correct: in C#, under a lock you do not need to use volatile, because taking the lock guarantees that all threads see the most up-to-date value.
I agree with you; this is not clear from the MSDN documentation: lock is presented as only providing mutually exclusive access to a block of code, but it also has other thread-safety features, such as ensuring that every thread sees the same value; this is inherent in lock because it uses memory barriers.
Your second question is not possible as written: you have to lock on a reference type. Supposing you did, however, then in both cases your operation is "thread-safe" provided all other reads and writes to the variables lock on the same instance. Usually a more granular lock object is better (as sketched below), so that other threads wanting to update something else do not have to wait to acquire the same lock; but if you know these variables are always accessed together, a shared lock would be more efficient.
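A legal version of per-variable locking might look like this (a sketch; the lock objects and method names are illustrative):

private readonly object iLock = new object();
private readonly object kLock = new object();
private double i, k;

void IncrementI()
{
    lock (iLock) { i++; } // only threads touching i contend here
}

void IncrementK()
{
    lock (kLock) { k++; } // independent of iLock, so no false contention
}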
Joe Albahari has a great series on multithreading that's a must-read and should be known by heart by anyone doing C# multithreading.
In part 4, however, he mentions the problems with volatile:
Notice that applying volatile doesn't prevent a write followed by a read from being swapped, and this can create brainteasers. Joe Duffy illustrates the problem well with the following example: if Test1 and Test2 run simultaneously on different threads, it's possible for a and b to both end up with a value of 0 (despite the use of volatile on both x and y)
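For reference, the example he refers to looks like this (reproduced here from Albahari's part 4 from memory, so details may differ slightly):

class IfYouThinkYouUnderstandVolatile
{
    volatile int x, y;

    void Test1() // Executed on one thread
    {
        x = 1;     // Volatile write (release fence)
        int a = y; // Volatile read (acquire fence)
        ...
    }

    void Test2() // Executed on another thread
    {
        y = 1;     // Volatile write (release fence)
        int b = x; // Volatile read (acquire fence)
        ...
    }
}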
This is followed by a note that the MSDN documentation is incorrect:
The MSDN documentation states that use of the volatile keyword ensures that the most up-to-date value is present in the field at all times. This is incorrect, since as we've seen, a write followed by a read can be reordered.
I've checked the MSDN documentation, which was last changed in 2015 but still says:
The volatile keyword indicates that a field might be modified by multiple threads that are executing at the same time. Fields that are declared volatile are not subject to compiler optimizations that assume access by a single thread. This ensures that the most up-to-date value is present in the field at all times.
Right now I still avoid volatile in favor of the more verbose pattern below, to prevent threads from using stale data:
private int foo;
private object fooLock = new object();

public int Foo {
    get { lock (fooLock) return foo; }
    set { lock (fooLock) foo = value; }
}
As the parts about multithreading were written in 2011, is the argument still valid today? Should volatile still be avoided at all costs in favor of locks or full memory fences, to prevent introducing very hard-to-reproduce bugs that, as mentioned, can even depend on the CPU vendor the code runs on?
Volatile in its current implementation is not broken, despite popular blog posts claiming such a thing. It is, however, badly specified, and the idea of using a modifier on a field to specify memory ordering is not that great (compare volatile in Java/C# to C++'s atomic specification, which had enough time to learn from the earlier mistakes). The MSDN article, on the other hand, was clearly written by someone who has no business talking about concurrency and is completely bogus; the only sane option is to completely ignore it.
Volatile guarantees acquire/release semantics when accessing the field and can only be applied to types that allow atomic reads and writes. Not more, not less. This is enough to be useful to implement many lock-free algorithms efficiently such as non-blocking hashmaps.
One very simple example is using a volatile variable to publish data. Thanks to the volatile on x, the assertion in the following snippet cannot fire:
private int a;
private volatile bool x;

public void Publish()
{
    a = 1;
    x = true;
}

public void Read()
{
    if (x)
    {
        // if we observe x == true, we will always see the preceding write to a
        Debug.Assert(a == 1);
    }
}
Volatile is not easy to use, and in most situations you are much better off going with some higher-level concept; but when performance is important or you're implementing some low-level data structures, volatile can be exceedingly useful.
As I read the MSDN documentation, I believe it is saying that if you see volatile on a variable, you do not have to worry about compiler optimizations screwing up the value by reordering operations. It doesn't say that you are protected from errors caused by your own code executing operations on separate threads in the wrong order. (Although, admittedly, the documentation is not clear on this.)
volatile is a very limited guarantee. It means that the variable isn't subject to compiler optimizations that assume access from a single thread. This means that if you write to the variable from one thread, then read it from another thread, the other thread will definitely see the latest value. Without volatile, on a multiprocessor machine the compiler may make assumptions about single-threaded access, for example keeping the value in a register, which prevents other processors from seeing the latest value.
As the code example you've mentioned shows, it doesn't protect you from having accesses in different blocks reordered. In effect, volatile makes each individual access to a volatile variable atomic. It doesn't make any guarantees as to the atomicity of groups of such accesses.
If you just want to ensure that your property has an up-to-date single value, you should be able to just use volatile.
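For instance, a minimal sketch of a volatile-backed property (int is used because its reads and writes are atomic):

private volatile int foo;

public int Foo
{
    get { return foo; }  // volatile read (acquire semantics)
    set { foo = value; } // volatile write (release semantics)
}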
The problem comes in if you try to perform multiple parallel operations as if they were atomic. If you have to force several operations to be atomic together, you need to lock the whole operation. Consider the example again, but using locks:
class DoLocksReallySaveYouHere
{
    int x, y;
    object xlock = new object(), ylock = new object();

    void Test1() // Executed on one thread
    {
        lock (xlock) { x = 1; }
        lock (ylock) { int a = y; }
        ...
    }

    void Test2() // Executed on another thread
    {
        lock (ylock) { y = 1; }
        lock (xlock) { int b = x; }
        ...
    }
}
The locks may cause some synchronization, which may prevent both a and b from having the value 0 (I have not tested this). However, since x and y are locked independently, either a or b can still non-deterministically end up with a value of 0.
So in the case of wrapping the modification of a single variable, you should be safe using volatile, and would not really be any safer using lock. If you need to atomically perform multiple operations, you need to use a lock around the entire atomic block; otherwise scheduling will still cause non-deterministic behavior.
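For completeness, a sketch of the shared-lock variant (the class name is invented): because both blocks run under a single lock, whichever thread enters second must observe the other's write, so a and b cannot both end up 0.

class OneSharedLockDoesSaveYouHere
{
    int x, y;
    object sharedLock = new object();

    void Test1() // Executed on one thread
    {
        lock (sharedLock) { x = 1; int a = y; }
    }

    void Test2() // Executed on another thread
    {
        lock (sharedLock) { y = 1; int b = x; }
    }
}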
Here are some useful disassemblies for volatile in C#: https://sharplab.io/#gist:625b1181356b543157780baf860c9173
On x86 it comes down to using memory instead of registers, and preventing compiler optimizations such as the endless-loop case sketched below.
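The endless loop in question is the classic stale-read pattern; a standard illustration (not taken from the gist):

class Worker
{
    private volatile bool _stop; // without volatile, Run() may never observe Stop()'s write

    public void Run()
    {
        // with a non-volatile field the JIT may hoist the read out of the
        // loop, effectively compiling this to while (true) { }
        while (!_stop) { }
    }

    public void Stop()
    {
        _stop = true;
    }
}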
I use volatile when I just want to tell the compiler that a field might be updated from many different threads and I do not need the additional features provided by interlocked operations.
I have read the threading manual and relevant MSDN pages and SO questions several times. Still, I do not completely understand whether Volatile.Read/Write and interlocked operations apply only to the relevant variables, or to all reads/writes before/after those operations.
E.g., imagine I have an array and a counter.
long counter = 0;
var values = new double[1000000];

values[42] = 3.1415;
// Is this line needed instead of the simple assignment above,
// or does the implicit full fence of Interlocked guarantee that
// all threads will see values[42] after the interlocked increment?
// Volatile.Write(ref values[42], 3.1415);

Interlocked.Increment(ref counter);
Does the interlocked increment guarantee the same result as if I had used Volatile.Write(ref values[42], 3.1415); instead of values[42] = 3.1415;?
What if I have an array of reference types, e.g. some POCO, and set an instance field before the interlocked increment? Does the implicit full fence apply to all reads/writes from that thread before it, or only to the counter?
I am implementing a scalable reader/writer scheme and I found the following statement in the Joe Duffy post:
If the variables protected are references to heap objects, you need to worry about using the read protection each time you touch a field. Just like locks, this technique doesn’t compose. As with anything other than simple locking, use this technique with great care and caution; although the built-in acquire and release fences shield you from memory model reordering issues, there are some easy traps you can fall into.
Is this just a general statement to discourage using low-lock constructs, or somehow applies to the example above?
What you are probably missing is an understanding of fences. This is the best resource to read up on them: http://www.albahari.com/threading/part4.aspx
The short answer is that Interlocked.Increment issues a full fence, which is independent of the variable it is updating. I believe Volatile.Write issues a half fence. A half fence can be constructed from Thread.MemoryBarrier. When we say Interlocked.Increment issues a full fence, it means that Thread.MemoryBarrier is called before and after the operation. Volatile.Write calls Thread.MemoryBarrier before the write, and Volatile.Read calls it after the read. The fences determine when memory accesses can be reordered, and they are not variable-specific, since Thread.MemoryBarrier is parameterless.
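Conceptually (a sketch of the semantics just described, not the actual framework source), the placement looks like this:

static void VolatileWrite(ref bool address, bool value)
{
    Thread.MemoryBarrier(); // no earlier read or write may move below this point,
    address = value;        // so the write is ordered after everything before it
}

static bool VolatileRead(ref bool address)
{
    bool value = address;   // the read happens first,
    Thread.MemoryBarrier(); // and no later read or write may move above this point
    return value;
}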
I have the following Lock statement:
private readonly object ownerLock_ = new object();
lock (ownerLock_)
{
}
Should I use volatile keyword for my lock variable?
private readonly volatile object ownerLock_ = new object();
On MSDN I saw that it is usually used for a field that is accessed without locking, so if I use lock I don't need to use volatile?
From MSDN:
The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access.
If you're only ever accessing the data that the lock "guards" while you own the lock, then yes - making those fields volatile is superfluous. You don't need to make the ownerLock_ variable volatile either. (You haven't currently shown any actual code within the lock statement, which makes it hard to talk about in concrete terms - but I'm assuming you'd actually be reading/modifying some data within the lock statement.)
volatile should be very rarely used in application code. If you want lock-free access to a single variable, Interlocked is almost always simpler to reason about. If you want lock-free access beyond that, I would almost always start locking. (Or try to use immutable data structures to start with.)
I'd only expect to see volatile within code which is trying to build higher level abstractions for threading - so within the TPL codebase, for example. It's really a tool for experts who really understand the .NET memory model thoroughly... of whom there are very few, IMO.
If something is readonly, it's thread-safe, period. (Well, almost. An expert might be able to figure out how to get a NullReferenceException on your lock statement, but it wouldn't be easy.) With readonly you don't need volatile, Interlocked, or locking. It's the ideal keyword for multi-threading, and you should use it wherever you can. It works great for a lock object, where its big disadvantage (you can't change the value) doesn't matter.
Also, while the reference is immutable, the object referenced may not be. It's "new object()" here, but if it were a List or something else mutable (and not thread-safe), you would want to lock on the reference (and on all other references to it, if any) to keep the object from changing in two threads at once.
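For example (a sketch; the List stands in for any mutable, non-thread-safe object):

private readonly List<int> items = new List<int>();

public void Add(int value)
{
    // the readonly reference never changes, but the list's contents do,
    // so every access must take the same lock
    lock (items) { items.Add(value); }
}

public int Count
{
    get { lock (items) { return items.Count; } }
}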
I have a question related to the C# memory model and threads. I am not sure if the following code is correct without the volatile keyword.
public class A
{
    private int variableA = 0;

    public A()
    {
        variableA = 1;
        Thread b = new Thread(new ThreadStart(() => printA()));
        b.Start(); // Thread.Start() returns void, so it can't be chained
    }

    private void printA()
    {
        System.Console.WriteLine(variableA);
    }
}
My concern is whether it is guaranteed that Thread B will see variableA with the value 1 without using volatile. In the main thread I am only assigning 1 to variableA in the constructor. After that I am not touching variableA; it is used only in Thread B, so locking is probably not necessary.
But is it guaranteed that the main thread will flush its cache and write the variableA contents to main memory, so the second thread can read the newly assigned value?
Additionally, is it guaranteed that the second thread will read the variableA contents from main memory? Could some compiler optimization cause Thread B to read the variableA contents from a cache instead of main memory? That could happen if the order of the instructions is changed.
For sure, adding volatile to the variableA declaration will make the code correct. But is it necessary? I am asking because I wrote some code where non-volatile variables are initialized in the constructor and used later by some Timer threads, and I am not sure if it is totally correct.
What about the same code in Java?
Thanks, Michal
There are a lot of places where implicit memory barriers are created. This is one of them. Starting a thread creates a full barrier. So the write to variableA will be committed before the thread starts, and the first reads will be acquired from main memory. Of course, in Microsoft's implementation of the CLR that is somewhat of a moot point because writes already have volatile semantics. But the same guarantee is not made in the ECMA specification, so it is theoretically possible that the Mono implementation could behave differently in this regard.
My concern is whether it is guaranteed that Thread B will see variableA with the value 1 without using volatile.
In this case... yes. However, if you continue to use variableA in the second thread, there is no guarantee after the first read that it will see updates.
But is it guaranteed that the main thread will flush its cache and write the variableA contents to main memory, so the second thread can read the newly assigned value?
Yes.
Additionally, is it guaranteed that the second thread will read the variableA contents from main memory?
Yes, but only on the first read.
For sure, adding volatile to the variableA declaration will make the code correct. But is it necessary?
In this very specific and narrow case... no. But in general it is advised that you use the volatile keyword in these scenarios. Not only will it keep your code thread-safe as the scenario gets more complicated, but it also helps to document the fact that the field is going to be used by more than one thread and that you have considered the implications of using a lock-free strategy.
The same code in Java is definitely okay - the creation of a new thread acts as a sort of barrier, effectively. (All actions earlier in the program text than the thread creation "happen before" the new thread starts.)
I don't know what's guaranteed in .NET with respect to new thread creation, however. Even more worrying is the possibility of a delayed read when using Control.BeginInvoke and the like... I haven't seen any guarantees around memory barriers for those situations.
To be honest, I suspect it's fine. I suspect that anything which needs to coordinate between threads like this (either creating a new one or marshalling a call onto an existing one) will use a full memory barrier on both of the threads involved. However, you're absolutely right to be concerned, and I'm hoping that you'll get a more definitive answer from someone smarter than me. You might want to email Joe Duffy to get his point of view on this...
But is it guaranteed that the main thread will flush its cache and write the variableA contents to main memory,
Yes, this is guaranteed by the MS CLR memory model. It is not necessarily so for other implementations of the CLI (that is, I'm not sure about Mono). The ECMA standard does not require it.
so the second thread can read the newly assigned value?
That requires that the cache has been refreshed. It is probably guaranteed by the creation of the thread (as Jon Skeet said). It is, however, not guaranteed by the previous point: the cache is flushed on each write, but not on each read.
You could make very sure by using Thread.VolatileRead(ref variableA), but Jeffrey Richter recommends using the Interlocked class instead. Note that Thread.VolatileWrite() is superfluous in MS.NET.
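A sketch of both options, assuming variableA is an int field:

int v1 = Thread.VolatileRead(ref variableA);               // read with acquire semantics
int v2 = Interlocked.CompareExchange(ref variableA, 0, 0); // full-fenced read via a no-op exchange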
I've been raised to believe that if multiple threads can access a variable, then all reads from and writes to that variable must be protected by synchronization code, such as a "lock" statement, because the processor might switch to another thread halfway through a write.
However, I was looking through System.Web.Security.Membership using Reflector and found code like this:
public static class Membership
{
    private static bool s_Initialized = false;
    private static object s_lock = new object();
    private static MembershipProvider s_Provider;

    public static MembershipProvider Provider
    {
        get
        {
            Initialize();
            return s_Provider;
        }
    }

    private static void Initialize()
    {
        if (s_Initialized)
            return;

        lock (s_lock)
        {
            if (s_Initialized)
                return;

            // Perform initialization...
            s_Initialized = true;
        }
    }
}
Why is the s_Initialized field read outside of the lock? Couldn't another thread be trying to write to it at the same time? Are reads and writes of variables atomic?
For the definitive answer go to the spec. :)
Partition I, Section 12.6.6 of the CLI spec states: "A conforming CLI shall guarantee that read and write access to properly aligned memory locations no larger than the native word size is atomic when all the write accesses to a location are the same size."
So that confirms that s_Initialized will never be unstable, and that reads and writes of primitive types no larger than 32 bits are atomic.
In particular, double and long (Int64 and UInt64) are not guaranteed to be atomic on a 32-bit platform. You can use the methods on the Interlocked class to protect these.
Additionally, while reads and writes are atomic, there is a race condition with addition, subtraction, and incrementing and decrementing of primitive types, since they must be read, operated on, and rewritten. The Interlocked class allows you to protect these operations using the CompareExchange and Increment methods.
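For example (a minimal sketch of the difference):

private static int counter;

static void RacyIncrement()
{
    counter++; // read, add, write: two threads can interleave and lose an update
}

static void AtomicIncrement()
{
    Interlocked.Increment(ref counter); // one atomic read-modify-write with a full fence
}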
Interlocked operations create a memory barrier to prevent the processor from reordering reads and writes. The lock creates the only required barrier in this example.
This is a (bad) form of the double check locking pattern which is not thread safe in C#!
There is one big problem in this code:
s_Initialized is not volatile. That means that writes in the initialization code can move after s_Initialized is set to true, and other threads can then see uninitialized data even though s_Initialized is true for them. This doesn't apply to Microsoft's implementation of the Framework, because every write is a volatile write.
But even in Microsoft's implementation, reads of the not-yet-initialized data can be reordered (i.e., prefetched by the CPU), so if s_Initialized is true, reading the data that should be initialized can still yield old, uninitialized values because of cache hits (i.e., the reads are reordered).
For example:
Thread 1 reads s_Provider (which is null)
Thread 2 initializes the data
Thread 2 sets s_Initialized to true
Thread 1 reads s_Initialized (which is true now)
Thread 1 uses the previously read s_Provider and gets a NullReferenceException
Moving the read of s_Provider before the read of s_Initialized is perfectly legal because there is no volatile read anywhere.
If s_Initialized were volatile, the read of s_Provider would not be allowed to move before the read of s_Initialized, the initialization of the provider would not be allowed to move after s_Initialized is set to true, and everything would be OK.
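Sketched out, that fix looks like this (CreateProvider is a hypothetical stand-in for the actual initialization):

private static volatile bool s_Initialized;
private static object s_lock = new object();
private static MembershipProvider s_Provider;

private static void Initialize()
{
    if (s_Initialized) // volatile read: the read of s_Provider cannot move above it
        return;

    lock (s_lock)
    {
        if (s_Initialized)
            return;

        s_Provider = CreateProvider(); // hypothetical helper for the real initialization
        s_Initialized = true;          // volatile write: the initialization cannot move below it
    }
}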
Joe Duffy also wrote an Article about this problem: Broken variants on double-checked locking
Hang about -- the question in the title is definitely not the real question that Rory is asking.
The titular question has the simple answer of "No" -- but this is no help at all when you see the real question, which I don't think anyone has given a simple answer to.
The real question Rory asks is presented much later and is more pertinent to the example he gives.
Why is the s_Initialized field read outside of the lock?
The answer to this is also simple, though completely unrelated to the atomicity of variable access.
The s_Initialized field is read outside of the lock because locks are expensive.
Since the s_Initialized field is essentially "write once" it will never return a false positive.
It's economical to read it outside the lock.
This is a low cost activity with a high chance of having a benefit.
That's why it's read outside of the lock -- to avoid paying the cost of using a lock unless it's indicated.
If locks were cheap the code would be simpler, and omit that first check.
(Edit: nice response from Rory follows. Yes, boolean reads are very much atomic. If someone built a processor with non-atomic boolean reads, they'd be featured on the DailyWTF.)
The correct answer seems to be, "Yes, mostly."
John's answer referencing the CLI spec indicates that accesses to variables not larger than 32 bits on a 32-bit processor are atomic.
Further confirmation from the C# spec, section 5.5, Atomicity of variable references:
Reads and writes of the following data types are atomic: bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types. In addition, reads and writes of enum types with an underlying type in the previous list are also atomic. Reads and writes of other types, including long, ulong, double, and decimal, as well as user-defined types, are not guaranteed to be atomic.
The code in my example was paraphrased from the Membership class, as written by the ASP.NET team themselves, so it was always safe to assume that the way it accesses the s_Initialized field is correct. Now we know why.
Edit: As Thomas Danecker points out, even though the access of the field is atomic, s_Initialized should really be marked volatile to make sure that the locking isn't broken by the processor reordering the reads and writes.
The Initialize function is faulty. It should look more like this:
private static void Initialize()
{
    if (s_Initialized)
        return;

    lock (s_lock)
    {
        if (s_Initialized)
            return;

        // Perform initialization...
        s_Initialized = true;
    }
}
Without the second check inside the lock, it's possible the initialisation code would be executed twice. The first check is for performance, to save you taking a lock unnecessarily; the second check is for the case where a thread is executing the initialisation code but hasn't yet set the s_Initialized flag, so a second thread could pass the first check and be waiting at the lock.
Reads and writes of variables are not atomic. You need to use Synchronisation APIs to emulate atomic reads/writes.
For an awesome reference on this and many more issues to do with concurrency, make sure you grab a copy of Joe Duffy's latest spectacle. It's a ripper!
"Is accessing a variable in C# an atomic operation?"
Nope. And it's not a C# thing, nor is it even a .net thing, it's a processor thing.
OJ is spot on that Joe Duffy is the guy to go to for this kind of info. And "interlocked" is a great search term to use if you want to know more.
"Torn reads" can occur on any value whose fields add up to more than the size of a pointer.
An if (itIsSo) check on a boolean is atomic, but even if it were not, there is no need to lock the first check. If any thread has completed the initialization, then the flag will be true. It does not matter if several threads are checking at once; they will all get the same answer, and there will be no conflict.
The second check inside the lock is necessary because another thread may have grabbed the lock first and completed the initialization process already.
You could also decorate s_Initialized with the volatile keyword and forego the use of lock entirely.
That is not correct. You will still encounter the problem of a second thread passing the check before the first thread has had a chance to set the flag, which will result in multiple executions of the initialisation code.
I think you're asking if s_Initialized could be in an unstable state when read outside the lock. The short answer is no. A simple assignment/read will boil down to a single assembly instruction which is atomic on every processor I can think of.
I'm not sure about assignment to 64-bit variables; it depends on the processor. I would assume it is not atomic in general, but it probably is on modern 32-bit processors and certainly on all 64-bit processors. Assignment of complex value types will not be atomic.
I thought they were - I'm not sure of the point of the lock in your example unless you're also doing something to s_Provider at the same time - then the lock would ensure that these calls happened together.
Does that //Perform initialization comment cover creating s_Provider? For instance
private static void Initialize()
{
    if (s_Initialized)
        return;

    lock (s_lock)
    {
        s_Provider = new MembershipProvider( ... );
        s_Initialized = true;
    }
}
Otherwise that static property getter is just going to return null anyway.
Perhaps Interlocked gives a clue. And otherwise this one is pretty good.
I would have guessed that they're not atomic.
To make your code always work on weakly ordered architectures, you must put a MemoryBarrier before you write s_Initialized.
s_Provider = new MembershipProvider( ... );

// MUST PUT BARRIER HERE to make sure the memory writes from the assignment
// and the constructor have been written to memory
// BEFORE the write to s_Initialized!
Thread.MemoryBarrier();

// Now that we've guaranteed that the writes above
// will be globally visible first, set the flag
s_Initialized = true;
The memory writes that happen in the MembershipProvider constructor and the write to s_Provider are not guaranteed to happen before you write to s_Initialized on a weakly ordered processor.
A lot of thought in this thread is about whether something is atomic or not. That is not the issue. The issue is the order that your thread's writes are visible to other threads. On weakly ordered architectures, writes to memory do not occur in order and THAT is the real issue, not whether a variable fits within the data bus.
EDIT: Actually, I'm mixing platforms in my statements. In C#, the CLR spec requires that writes are globally visible in order (by using expensive store instructions for every store, if necessary). Therefore, you don't actually need that memory barrier. However, if it were C or C++, where no such guarantee of global visibility order exists, and your target platform may have weakly ordered memory, and it is multithreaded, then you would need to ensure that the constructor's writes are globally visible before you update s_Initialized, which is tested outside the lock.
What you're asking is whether accessing a field in a method multiple times is atomic -- to which the answer is no.
In the example above, the Initialize routine is faulty, as it may result in multiple initialization. You would need to check the s_Initialized flag inside the lock as well as outside, to prevent a race condition in which multiple threads read the s_Initialized flag as false before any of them actually runs the initialisation code. E.g.:
private static void Initialize()
{
    if (s_Initialized)
        return;

    lock (s_lock)
    {
        if (s_Initialized)
            return;

        s_Provider = new MembershipProvider( ... );
        s_Initialized = true;
    }
}
Ack, nevermind... as pointed out, this is indeed incorrect. It doesn't prevent a second thread from entering the "initialize" code section. Bah.
You could also decorate s_Initialized with the volatile keyword and forego the use of lock entirely.