C# 4 in a Nutshell (highly recommended btw) uses the following code to demonstrate the concept of MemoryBarrier (assuming A and B were run on different threads):
class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        Thread.MemoryBarrier(); // Barrier 1
        _complete = true;
        Thread.MemoryBarrier(); // Barrier 2
    }

    void B()
    {
        Thread.MemoryBarrier(); // Barrier 3
        if (_complete)
        {
            Thread.MemoryBarrier(); // Barrier 4
            Console.WriteLine(_answer);
        }
    }
}
They mention that Barriers 1 & 4 prevent this example from writing 0, and that Barriers 2 & 3 provide a freshness guarantee: they ensure that if B ran after A, reading _complete would evaluate to true.
I'm not really getting it. I think I understand why Barriers 1 & 4 are necessary: we don't want the write to _answer to be optimized and placed after the write to _complete (Barrier 1) and we need to make sure that _answer is not cached (Barrier 4). I also think I understand why Barrier 3 is necessary: if A ran until just after writing _complete = true, B would still need to refresh _complete to read the right value.
I don't understand though why we need Barrier 2! Part of me says that it's because perhaps Thread 2 (running B) already ran until (but not including) if(_complete), and so we need to ensure that _complete is refreshed.
However, I don't see how this helps. Isn't it still possible that _complete will be set to true in A and yet the B method will see a cached (false) version of _complete? I.e., if Thread 2 ran method B until just after the first MemoryBarrier, and then Thread 1 ran method A until _complete = true but no further, and then Thread 2 resumed and tested if(_complete) -- couldn't that if result in false?
Barrier #2 guarantees that the write to _complete gets committed immediately. Otherwise it could remain in a queued state, meaning that the read of _complete in B would not see the change caused by A even though B effectively used a volatile read.
Of course, this example does not quite do justice to the problem because A does nothing more after writing to _complete, which means that the write will be committed immediately anyway since the thread terminates early.
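To make that concrete, here is a contrived variant of A (my own sketch, reusing the fields from the question's Foo class) where the thread keeps running after the write. In a variant like this, on a weakly ordered machine, the store to _complete could otherwise sit in the store buffer for the whole duration of the busy work:

void A()
{
    _answer = 123;
    Thread.MemoryBarrier(); // Barrier 1
    _complete = true;
    Thread.MemoryBarrier(); // Barrier 2: commit the store now, not "eventually"
    for (int i = 0; i < 100000000; i++)
    {
        // unrelated busy work; without Barrier 2, nothing here forces the
        // pending store of _complete out of the store buffer
    }
}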
The answer to your question of whether the if could still evaluate to false is yes for exactly the reasons you stated. But, notice what the author says regarding this point.
Barriers 1 and 4 prevent this example from writing “0”. Barriers 2 and 3 provide a freshness guarantee: they ensure that if B ran after A, reading _complete would evaluate to true.
The emphasis on "if B ran after A" is mine. It certainly could be the case that the two threads interleave. But the author was ignoring this scenario, presumably to keep his point about how Thread.MemoryBarrier works simple.
By the way, I had a hard time contriving an example on my machine where barriers #1 and #2 would have altered the behavior of the program. This is because the memory model regarding writes was strong in my environment. Perhaps, if I had a multiprocessor machine, was using Mono, or had some other different setup I could have demonstrated it. Of course, it was easy to demonstrate that removing barriers #3 and #4 had an impact.
The example is unclear for two reasons:
It is too simple to fully show what's happening with the fences.
Albahari is including requirements for non-x86 architectures. See MSDN: "MemoryBarrier is required only on multiprocessor systems with weak memory ordering (for example, a system employing multiple Intel Itanium processors [which Microsoft no longer supports]).".
If you consider the following, it becomes clearer:
A memory barrier (a full barrier here - .NET doesn't provide a half barrier) prevents read / write instructions from jumping the fence (due to various optimisations). This guarantees that the code after the fence will execute after the code before the fence.
"This serializing operation guarantees that every load and store instruction that precedes in program order the MFENCE instruction is globally visible before any load or store instruction that follows the MFENCE instruction is globally visible." See here.
x86 CPUs have a strong memory model and guarantee writes appear consistent to all threads / cores (therefore barriers #2 & #3 are unneeded on x86). But, we are not guaranteed that reads and writes will remain in coded sequence, hence the need for barriers #1 and #4.
Memory barriers are inefficient and often needn't be used (see the same MSDN article). I personally use Interlocked and volatile (make sure you know how to use them correctly!!), which work efficiently and are easy to understand.
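For illustration, here is a minimal sketch of the volatile variant of the same Foo example. The volatile write to _complete has release semantics and the volatile read has acquire semantics, which covers the ordering that barriers 1 and 4 provided (the freshness aspect is the implementation-specific extra discussed elsewhere on this page):

using System;

class Foo
{
    int _answer;
    volatile bool _complete;

    void A()
    {
        _answer = 123;
        _complete = true; // volatile write (release): _answer can't move below it
    }

    void B()
    {
        if (_complete) // volatile read (acquire): _answer can't move above it
            Console.WriteLine(_answer);
    }
}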
P.S. This article explains the inner workings of x86 nicely.
Loop hoisting a volatile read
I have read in many places that a volatile variable cannot be hoisted out of a loop or if, but I cannot find this mentioned anywhere in the C# spec. Is this a hidden feature?
All writes are volatile in C#
Does this mean that all writes have the same properties as with the volatile keyword, even without it? E.g., do ordinary writes in C# have release semantics? And do all writes flush the store buffer of the processor?
Release semantics
Is this a formal way of saying that the store buffer of a processor is emptied when a volatile write is done?
Acquire semantics
Is this a formal way of saying that it should not load a variable into a register, but fetch it from memory every time?
In this article, Igoro speaks of "thread cache". I perfectly understand that this is imaginary, but is he in fact referring to:
The processor's store buffer
Loading variables into registers instead of fetching from memory every time
Some sort of processor cache (is this L1, L2, etc.?)
Or is this just my imagination?
Delayed writing
I have read many places that writes can be delayed. Is this because of the reordering, and the store buffer?
Thread.MemoryBarrier
I understand that a side effect is the emission of a "lock or" instruction when the JIT transforms IL to asm, and that this is why a call to Thread.MemoryBarrier can solve the delayed write to main memory (in the while loop) in, e.g., this example:
static void Main()
{
    bool complete = false;
    var t = new Thread(() =>
    {
        bool toggle = false;
        while (!complete) toggle = !toggle;
    });
    t.Start();
    Thread.Sleep(1000);
    complete = true;
    t.Join(); // Blocks indefinitely
}
But is this always the case? Will a call to Thread.MemoryBarrier always flush the store buffer and fetch updated values into the processor cache? I understand that the complete variable is not hoisted into a register and is fetched from the processor cache every time, but the processor cache is updated because of the call to Thread.MemoryBarrier.
Am I on thin ice here, or do I have some sort of understanding of volatile and Thread.MemoryBarrier?
That's a mouthful..
I'm gonna start with a few of your questions, and update my answer.
Loop hoisting a volatile
I have read in many places that a volatile variable cannot be hoisted out of a loop or if, but I cannot find this mentioned anywhere in the C# spec. Is this a hidden feature?
MSDN says "Fields that are declared volatile are not subject to compiler optimizations that assume access by a single thread". This is kind of a broad statement, but it includes hoisting or "lifting" variables out of a loop.
All writes are volatile in C#
Does this mean that all writes have the same properties as with the volatile keyword, even without it? E.g., do ordinary writes in C# have release semantics? And do all writes flush the store buffer of the processor?
Regular writes are not volatile. They do have release semantics, but they don't flush the CPU's write-buffer. At least, not according to the spec.
From Joe Duffy's CLR 2.0 Memory Model
Rule 2: All stores have release semantics, i.e. no load or store may move after one.
I've read a few articles stating that all writes are volatile in C# (like the one you linked to), but this is a common misconception. From the horse's mouth (The C# Memory Model in Theory and Practice, Part 2):
Consequently, the author might say something like, “In the .NET 2.0 memory model, all writes are volatile—even those to non-volatile fields.” (...) This behavior isn’t guaranteed by the ECMA C# spec, and, consequently, might not hold in future versions of the .NET Framework and on future architectures (and, in fact, does not hold in the .NET Framework 4.5 on ARM).
Release semantics
Is this a formal way of saying that the store buffer of a processor is emptied when a volatile write is done?
No, those are two different things. If an instruction has "release semantics", then no store/load instruction will ever be moved below said instruction. The definition says nothing regarding flushing the write-buffer. It only concerns instruction re-ordering.
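A small illustration of my own of what release semantics do and do not promise:

class ReleaseDemo
{
    int _data;
    volatile bool _flag;

    void Writer()
    {
        _data = 42;   // ordinary store
        _flag = true; // volatile store = release semantics: the _data store
                      // cannot be moved below this line. Nothing here promises
                      // that the store buffer is drained before Writer returns;
                      // that would be a separate (implementation) property.
    }
}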
Delayed writing
I have read many places that writes can be delayed. Is this because of the reordering, and the store buffer?
Yes. Write instructions can be delayed/reordered by either the compiler, the jitter or the CPU itself.
So a volatile write has two properties: release semantics, and store buffer flushing.
Sort of. I prefer to think of it this way:
The C# Specification of the volatile keyword guarantees one property: that reads have acquire-semantics and writes have release-semantics. This is done by emitting the necessary release/acquire fences.
Microsoft's actual C# implementation adds another property: reads will be fresh, and writes will be flushed to memory immediately and made visible to other processors. To accomplish this, the compiler emits an OpCodes.Volatile, and the jitter picks this up and tells the processor not to keep this variable in its registers.
This means that a different C# implementation that doesn't guarantee immediacy will be a perfectly valid implementation.
Memory Barrier
bool complete = false;
var t = new Thread(() =>
{
    bool toggle = false;
    while (!complete) toggle = !toggle;
});
t.Start();
Thread.Sleep(1000);
complete = true;
t.Join(); // blocks
But is this always the case? Will a call to Thread.MemoryBarrier always flush the store buffer and fetch updated values into the processor cache?
Here's a tip: try to abstract yourself away from concepts like flushing the store buffer, or reading straight from memory. The concept of a memory barrier (or a full-fence) is in no way related to the two former concepts.
A memory barrier has one sole purpose: ensure that store/load instructions below the fence are not moved above the fence, and vice-versa. If C#'s Thread.MemoryBarrier just so happens to flush pending writes, you should think about it as a side-effect, not the main intent.
Now, let's get to the point. The code you posted (which blocks when compiled in Release mode and run without a debugger) could be solved by introducing a full fence anywhere inside the while block. Why? Let's first unroll the loop. Here's what the first few iterations would look like:
if(complete) return;
toggle = !toggle;
if(complete) return;
toggle = !toggle;
if(complete) return;
toggle = !toggle;
...
Because complete is not marked as volatile and there are no fences, the compiler and the CPU are allowed to move the reads of the complete field.
In fact, the CLR's Memory Model (see rule 6) allows loads to be deleted (!) when coalescing adjacent loads. So, this could happen:
if(complete) return;
toggle = !toggle;
toggle = !toggle;
toggle = !toggle;
...
Notice that this is logically equivalent to hoisting the read out of the loop, and that's exactly what the compiler may do.
By introducing a full-fence either before or after toggle = !toggle, you'd prevent the compiler from moving the reads up and merging them together.
if(complete) return;
toggle = !toggle;
#FENCE
if(complete) return;
toggle = !toggle;
#FENCE
if(complete) return;
toggle = !toggle;
#FENCE
...
In conclusion, the key to solving these issues is ensuring that the instructions will be executed in the correct order. It has nothing to do with how long it takes for other processors to see one processor's writes.
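For reference, here is a sketch of the fix described above: a full fence inside the loop body keeps the reads of complete from being hoisted and coalesced (hoisting complete into a volatile field would work as well):

static void Main()
{
    bool complete = false;
    var t = new Thread(() =>
    {
        bool toggle = false;
        while (!complete)
        {
            toggle = !toggle;
            Thread.MemoryBarrier(); // full fence: the next read of complete
                                    // cannot be merged with the previous one
        }
    });
    t.Start();
    Thread.Sleep(1000);
    complete = true;
    t.Join(); // now terminates
}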
In this threading online book: http://www.albahari.com/threading/part4.aspx there's an example of Thread.MemoryBarrier():
class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        Thread.MemoryBarrier(); // Barrier 1
        _complete = true;
        Thread.MemoryBarrier(); // Barrier 2
    }

    void B()
    {
        Thread.MemoryBarrier(); // Barrier 3
        if (_complete)
        {
            Thread.MemoryBarrier(); // Barrier 4
            Console.WriteLine(_answer);
        }
    }
}
We got into a discussion about whether there is any thread blocking going on or not.
I'm thinking there is some, especially given that
A full fence takes around ten nanoseconds on a 2010-era desktop.
On the other hand, a full fence is only supposed to disable instruction reordering and caching, which by the sound of it doesn't qualify as thread blocking (unlike lock, where it's clear that a thread waits for another to release the lock before it continues, and is blocked during that time).
About that thread 'blocked state': I'm talking not in terms of whether a thread is put into a blocked state or not, but whether there is some thread synchronization happening, meaning one thread is not able to run while the other isn't letting it do so, by means of MemoryBarrier in this case.
Also I'd like to get a clear understanding of what each barrier achieves. For example, Barrier 2 - how exactly does it provide a freshness guarantee, and how is it connected to Barrier 3? If someone would explain in detail what each barrier's purpose is here (what could possibly go wrong if 1 or 2 or 3 or 4 weren't there), I think I'd improve my understanding of this greatly.
EDIT: it's mostly clear now what 1, 2, and 3 do. However, what 4 does that 3 doesn't is still unclear.
The fact that instructions take time to execute does not imply that a thread is blocked. A thread is blocked when it is specifically put into a blocked state, which MemoryBarrier() does not do.
The processor instructions that actually prevent instruction reordering and flush the caches take time, because they must wait for the caches to become coherent again. During that time, the thread is still considered running.
Update: So let's take a look at what's actually happening in the example, and what each memory barrier actually does.
As the link says, 1 and 4 ensure that the correct answers are produced. That's because 1 ensures that the answers are flushed into memory, and 4 ensures that the read caches are flushed prior to retrieving the variables.
2 and 3 ensure that if A runs first, then B will always print the answers. Barrier 2 ensures that the write of true is flushed to memory, and barrier 3 ensures that the read caches are flushed before testing _complete's value.
The cache and memory flushing should be clear enough, so let's look at instruction reordering. The way the compiler, CLR and CPU know they can reorder instructions is by analyzing a set of instructions in sequence. When they see the barrier instruction in the middle of a sequence, they know that instructions can't move across that boundary. That ensures that in addition to cache freshness, the instructions occur in the correct order.
I'm looking at the implementation of the VolatileRead/VolatileWrite methods (using Reflector), and I'm puzzled by something.
This is the implementation for VolatileRead:
[MethodImpl(MethodImplOptions.NoInlining)]
public static int VolatileRead(ref int address)
{
    int num = address;
    MemoryBarrier();
    return num;
}
How come the memory barrier is placed after reading the value of "address"? Isn't it supposed to be the opposite (placed before reading the value, so any pending writes to "address" will be completed by the time we make the actual read)?
The same thing goes for VolatileWrite, where the memory barrier is placed before the assignment of the value. Why is that?
Also, why do these methods have the NoInlining attribute? What could happen if they were inlined?
I thought that until recently. Volatile reads aren't what you think they are - they're not about guaranteeing that they get the most recent value; they're about making sure that no read which is later in the program code is moved to before this read. That's what the spec guarantees - and likewise for volatile writes, it guarantees that no earlier write is moved to after the volatile one.
You're not alone in suspecting this code, but Joe Duffy explains it better than I can :)
My answer to this is to give up on lock-free coding other than by using things like PFX which are designed to insulate me from it. The memory model is just too hard for me - I'll leave it to the experts, and stick with things that I know are safe.
One day I'll update my threading article to reflect this, but I think I need to be able to discuss it more sensibly first...
(I don't know about the no-inlining part, btw. I suspect that inlining could introduce some other optimizations which aren't meant to happen around volatile reads/writes, but I could easily be wrong...)
Maybe I am oversimplifying, but I think the explanations about reordering and cache coherency and so on give too many details.
So, why the MemoryBarrier comes after the actual read?
I will try to explain this with an example that uses object instead of int.
One may think the correct sequence is:
Thread 1 creates the object (initializes its inner data).
Thread 1 then puts the object into a variable.
Then it "does a fence" and all threads see the new value.
Then, the read is something like this:
Thread 2 "does a fence".
Thread 2 reads the object instance.
Thread 2 is sure that it has all the inner data of that instance (as it started with a fence).
The biggest problem with this is:
Thread 1 creates the object and initializes it.
Thread 1 then puts the object into a variable.
Before the Thread flushes the cache, the CPU itself flushes part of the cache... it commits only the address of the variable (not the contents of that variable).
At that moment, Thread 2 had already flushed its cache. So it is going to read everything from the main memory.
So, it reads the variable (it is there).
Then it reads the content (it is not there).
Finally, after all this, CPU 1 runs the part of Thread 1 that does the fence.
So, what happens with the volatile write and read?
The volatile write makes the contents of the object go to memory immediately (starting with the fence), then it sets the variable (which may not go immediately to real memory).
Then, the volatile read will first clear the cache. Then it reads the field. If it receives a value when reading the field, it is certain that the contents pointed by that reference are really there.
Because of those little things, yes, it is possible that you do a VolatileWrite(1) and another thread still sees the value of zero. But as soon as other threads see the value of 1 (using a volatile read), all the other items that may be referenced are already there. You can't really tell otherwise, since when reading the old value (0 or null) you may simply not progress, considering that you still don't have everything you need.
I have already seen discussions saying that, even if that flushes the caches twice, the right pattern would be:
MemoryBarrier - will flush other variables changed before this call
Write
MemoryBarrier - will guarantee that the write was flushed
The Read will then need the same:
MemoryBarrier
Read - Guarantees that we see the latest info... maybe one that was put AFTER our memory barrier.
As something may have appeared after our MemoryBarrier and was already read, we must put another MemoryBarrier to access the contents.
Those could be two write fences or two read fences, if that existed in .NET.
I am not sure about everything I said... it is a "compilation" of a lot of information I came across, and it really explains why VolatileRead and VolatileWrite appear to be reversed, but also how they guarantee that no invalid values are read when using them.
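In modern C# (.NET 4.5 and later), this write-release / read-acquire pairing is what Volatile.Write and Volatile.Read give you. A sketch of the publish pattern described above, with names of my own choosing:

class Publication
{
    class Payload { public int Value; }

    static Payload _shared; // the variable holding the published reference

    static void Publisher()
    {
        var p = new Payload { Value = 42 }; // initialize the inner data first
        Volatile.Write(ref _shared, p);     // release: the inner data is visible
                                            // no later than the reference itself
    }

    static void Consumer()
    {
        var p = Volatile.Read(ref _shared); // acquire: if we see the reference,
        if (p != null)                      // we also see its inner data
            Console.WriteLine(p.Value);
    }
}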
I have a question about the following code sample (taken from: http://www.albahari.com/threading/part4.aspx#_NonBlockingSynch)
class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        Thread.MemoryBarrier(); // Barrier 1
        _complete = true;
        Thread.MemoryBarrier(); // Barrier 2
    }

    void B()
    {
        Thread.MemoryBarrier(); // Barrier 3
        if (_complete)
        {
            Thread.MemoryBarrier(); // Barrier 4
            Console.WriteLine(_answer);
        }
    }
}
This is followed by the following explanation:
"Barriers 1 and 4 prevent this example from writing “0”. Barriers 2 and 3 provide a freshness guarantee: they ensure that if B ran after A, reading _complete would evaluate to true."
I understand how using the memory barriers affects the instruction reordering, but what is this "freshness guarantee" that is mentioned?
Later in the article, the following example is also used:
static void Main()
{
    bool complete = false;
    var t = new Thread(() =>
    {
        bool toggle = false;
        while (!complete)
        {
            toggle = !toggle;
            // adding a call to Thread.MemoryBarrier() here fixes the problem
        }
    });
    t.Start();
    Thread.Sleep(1000);
    complete = true;
    t.Join(); // Blocks indefinitely
}
This example is followed by this explanation:
"This program never terminates because the complete variable is cached in a CPU register. Inserting a call to Thread.MemoryBarrier inside the while-loop (or locking around reading complete) fixes the error."
So again ... what happens here?
In the first case, Barrier 1 ensures _answer is written BEFORE _complete. Regardless of how the code is written, or how the compiler or CLR instructs the CPU, the memory bus read/write queues can reorder the requests. The Barrier basically says "flush the queue before continuing". Similarly, Barrier 4 makes sure _answer is read AFTER _complete. Otherwise CPU2 could reorder things and see an old _answer with a "new" _complete.
Barriers 2 and 3 are, in some sense, useless. Note that the explanation contains the word "after": ie "... if B ran after A, ...". What's it mean for B to run after A? If B and A are on the same CPU, then sure, B can be after. But in that case, same CPU means no memory barrier problems.
So consider B and A running on different CPUs. Now, very much like Einstein's relativity, the concept of comparing times at different locations/CPUs doesn't really make sense.
Another way of thinking about it - can you write code that can tell whether B ran after A? If so, well, you probably used memory barriers to do that. Otherwise, you can't tell, and it doesn't make sense to ask. It's also similar to Heisenberg's uncertainty principle - if you can observe it, you've modified the experiment.
But leaving physics aside, let's say you could open the hood of your machine and see that the actual memory location of _complete was true (because A had run). Now run B. Without Barrier 3, CPU2 might STILL NOT see _complete as true. ie not "fresh".
But you probably can't open your machine and look at _complete. Nor communicate your findings to B on CPU2. Your only communication is what the CPUs themselves are doing. So if they can't determine BEFORE/AFTER without barriers, asking "what happens to B if it runs after A, without barriers" makes no sense.
By the way, I'm not sure what you have available in C#, but what is typically done, and what is really needed for Code sample # 1 is a single release barrier on write, and a single acquire barrier on read:
void A()
{
    _answer = 123;
    WriteWithReleaseBarrier(_complete, true); // "publish" values
}

void B()
{
    if (ReadWithAcquire(_complete)) // subscribe
    {
        Console.WriteLine(_answer);
    }
}
The word "subscribe" isn't often used to describe the situation, but "publish" is. I suggest you read Herb Sutter's articles on threading.
This puts the barriers in exactly the right places.
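In C#, those hypothetical helpers correspond to Volatile.Write and Volatile.Read (available since .NET 4.5; Thread.VolatileRead/VolatileWrite are the older equivalents). A sketch against the Foo fields from the question:

void A()
{
    _answer = 123;
    Volatile.Write(ref _complete, true); // release barrier: "publish"
}

void B()
{
    if (Volatile.Read(ref _complete)) // acquire barrier: "subscribe"
        Console.WriteLine(_answer);
}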
For Code sample #2, this isn't really a memory barrier problem, it is a compiler optimization issue - it is keeping complete in a register. A memory barrier would force it out, as would volatile, but probably so would calling an external function - if the compiler can't tell whether that external function modified complete or not, it will re-read it from memory. ie maybe pass the address of complete to some function (defined somewhere where the compiler can't examine its details):
while (!complete)
{
some_external_function(&complete);
}
even if the function doesn't modify complete, if the compiler isn't sure, it will need to reload its registers.
ie the difference between code 1 and code 2 is that code 1 only has problems when A and B are running on separate threads. code 2 could have problems even on a single threaded machine.
Actually, the other question would be - can the compiler completely remove the while loop? If it thinks complete is unreachable by other code, why not? ie if it decided to move complete into a register, it might as well remove the loop completely.
EDIT: To answer the comment from opc (my answer is too big for comment block):
Barrier 3 forces the CPU to flush any pending read (and write) requests.
So imagine there were some other reads before reading _complete:
void B()
{
    int x = a * b + c * d; // read a, b, c, d
    Thread.MemoryBarrier(); // Barrier 3
    if (_complete)
        ...
Without the barrier, the CPU might have all of these 5 read requests 'pending':
a,b,c,d,_complete
Without the barrier, the processor could reorder these requests to optimize memory access (ie if _complete and 'a' were on the same cache line or something).
With the barrier, the CPU gets a,b,c,d back from memory BEFORE _complete is even put in as a request. ENSURING 'b' (for example) is read BEFORE _complete - ie no reordering.
The question is - what difference does it make?
If a,b,c,d are independent from _complete, then it doesn't matter. All the barrier does is SLOW THINGS DOWN. So yeah, _complete is read later. So the data is fresher. Putting a sleep(100) or some busy-wait for-loop in there before the read would make it 'fresher' as well! :-)
So the point is - keep it relative. Does the data need to be read/written BEFORE/AFTER relative to some other data or not? That's the question.
And not to put down the author of the article - he does mention "if B ran after A...". It just isn't exactly clear whether he is imagining that B running after A is crucial to the code, observable by the code, or just inconsequential.
Code sample #1:
Each processor core contains a cache with a copy of a portion of memory. It may take a bit of time for the cache to be updated. The memory barriers guarantee that the caches are synchronized with main memory. For example, if you didn't have barriers 2 and 3 here, consider this situation:
Processor 1 runs A(). It writes the new value of _complete to its cache (but not necessarily to main memory yet).
Processor 2 runs B(). It reads the value of _complete. If this value was previously in its cache, it may not be fresh (i.e., not synchronized with main memory), so it would not get the updated value.
Code sample #2:
Normally, variables are stored in memory. However, suppose a value is read multiple times in a single function: As an optimization, the compiler may decide to read it into a CPU register once, and then access the register each time it is needed. This is much faster, but prevents the function from detecting changes to the variable from another thread.
The memory barrier here forces the function to re-read the variable value from memory.
Calling Thread.MemoryBarrier() immediately refreshes the register caches with the actual values for variables.
In the first example, the "freshness" for _complete is provided by calling the method right after setting it and right before using it. In the second example, the initial false value for the variable complete will be cached in the thread's own space and needs to be resynchronized in order to immediately see the actual "outside" value from "inside" the running thread.
The "freshness" guarantee simply means that Barriers 2 and 3 force the values of _complete to be visible as soon as possible as opposed to whenever they happen to be written to memory.
It's actually unnecessary from a consistency point of view, since Barriers 1 and 4 ensure that _answer will be read after reading _complete.
Suppose I have a variable "counter", and there are several threads accessing and setting the value of "counter" by using Interlocked, i.e.:
int value = Interlocked.Increment(ref counter);
and
int value = Interlocked.Decrement(ref counter);
Can I assume that, the change made by Interlocked will be visible in all threads?
If not, what should I do to make all threads synchronize the variable?
EDIT: someone suggested me to use volatile. But when I set the "counter" as volatile, there is compiler warning "reference to volatile field will not be treated as volatile".
When I read online help, it said, "A volatile field should not normally be passed using a ref or out parameter".
Interlocked.Increment/Decrement on x86 CPUs (x86's lock add/dec) automatically create a memory barrier, which gives visibility to all threads (i.e., all threads can see the update in order, as in sequential memory consistency). A memory barrier forces all pending memory loads/stores to complete. volatile is not related to this question, although C# and Java (and some C/C++ compilers) implement volatile with a memory barrier. But the interlocked operation already has a memory barrier provided by the CPU.
Please also take a look at my other answer on Stack Overflow.
Note that I have assumed that C#'s Interlocked.Increment/Decrement are intrinsics mapping to x86's lock add/dec.
Can I assume that, the change made by Interlocked will be visible in all threads?
This depends on how you read the value. If you "just" read it, then no, this won't always be visible in other threads unless you mark it as volatile. That causes an annoying warning though.
As an alternative (and much preferred IMO), read it using another Interlocked instruction. This will always see the updated value on all threads:
int readvalue = Interlocked.CompareExchange(ref counter, 0, 0);
which returns the value read, and if it was 0 swaps it with 0.
Motivation: the warning hints that something isn't right; combining the two techniques (volatile & interlocked) wasn't the intended way to do this.
Update: it seems that another approach to reliable 32-bit reads without using "volatile" is by using Thread.VolatileRead as suggested in this answer. There is also some evidence that I am completely wrong about using Interlocked for 32-bit reads, for example this Connect issue, though I wonder if the distinction is a bit pedantic in nature.
What I really mean is: don't use this answer as your only source; I'm having my doubts about this.
Actually, they aren't. If you want to safely modify counter, then you are doing the correct thing. But if you want to read counter directly you need to declare it as volatile. Otherwise, the compiler has no reason to believe that counter will change because the Interlocked operations are in code that it might not see.
Interlocked ensures that only 1 thread at a time can update the value. To ensure that other threads can read the correct value (and not a cached value) mark it as volatile.
public volatile int Counter;
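If you would rather avoid the volatile-plus-ref warning entirely, here is a sketch of a counter that goes through Interlocked for both writes and reads, using the CompareExchange(x, 0, 0) trick from the other answer:

class SafeCounter
{
    int _counter; // deliberately not volatile: every access goes through Interlocked

    public int Increment() { return Interlocked.Increment(ref _counter); }
    public int Decrement() { return Interlocked.Decrement(ref _counter); }

    // Reads the current value: the exchange happens only if the value is
    // already 0, and then it writes 0 again, so the counter never changes,
    // but the read still carries a full fence.
    public int Read() { return Interlocked.CompareExchange(ref _counter, 0, 0); }
}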
No; an Interlocked-at-write-only alone does not ensure that variable reads in the code are actually fresh; a program that does not also read correctly from the field might not be thread-safe, even under a "strong memory model". This applies to any form of assigning to a field shared between threads.
Here is an example of code that will never terminate due to the JIT. (It was modified from Memory Barriers in .NET to be a runnable LINQPad program updated for the question).
// Run this as a LINQPad program in "Release Mode".
// ~ It will never terminate on .NET 4.5.2 / x64. ~
// The program will terminate in "Debug Mode" and may terminate
// in other CLR runtimes and architecture targets.
class X
{
    // Adding {volatile} would 'fix the problem', as it prevents the JIT
    // optimization that results in the non-terminating code.
    public int terminate = 0;
    public int y;

    public void Run()
    {
        var r = new ManualResetEvent(false);
        var t = new Thread(() =>
        {
            int x = 0;
            r.Set();
            // Using Volatile.Read or otherwise establishing
            // an Acquire Barrier would disable the 'bad' optimization.
            while (terminate == 0) { x = x * 2; }
            y = x;
        });
        t.Start();
        r.WaitOne();
        Interlocked.Increment(ref terminate);
        t.Join();
        Console.WriteLine("Done: " + y);
    }
}

void Main()
{
    new X().Run();
}
The explanation from Memory Barriers in .NET:
This time it is JIT, not the hardware. It’s clear that JIT has cached the value of the variable terminate [in the EAX register and the] program is now stuck in the loop highlighted above ..
Either using a lock or adding a Thread.MemoryBarrier inside the while loop will fix the problem. Or you can even use Volatile.Read [or a volatile field]. The purpose of the memory barrier here is only to suppress JIT optimizations. Now that we have seen how software and hardware can reorder memory operations, it’s time to discuss memory barriers ..
That is, an additional barrier construct is required on the read side to prevent issues with Compilation and JIT re-ordering / optimizations: this is a different issue than memory coherency!
Adding volatile here would prevent the JIT optimization, and thus 'fix the problem', even if doing so results in a warning. This program can also be corrected through the use of Volatile.Read or one of the various other operations that cause a barrier: these barriers are as much a part of CLR/JIT program correctness as the underlying hardware memory fences.
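Concretely, the read-side fix looks like this (a sketch of just the thread body from the X class above; Volatile.Read is the acquire-barrier read the code comments refer to):

var t = new Thread(() =>
{
    int x = 0;
    r.Set();
    // Volatile.Read is an acquire read: the JIT may not hoist it out of
    // the loop or cache terminate in a register.
    while (Volatile.Read(ref terminate) == 0) { x = x * 2; }
    y = x;
});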