Recently I've peeked into the Linux kernel implementation of an atomic read and write and a few questions came up.
First the relevant code from the ia64 architecture:
typedef struct {
    int counter;
} atomic_t;
#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic64_read(v) (*(volatile long *)&(v)->counter)
#define atomic_set(v,i) (((v)->counter) = (i))
#define atomic64_set(v,i) (((v)->counter) = (i))
For both read and write operations, it seems that the direct approach was taken to read from or write to the variable. Unless there is another trick somewhere, I do not understand what guarantees that this operation will be atomic at the assembly level. I guess an obvious answer is that such an operation translates to one assembly opcode, but even so, how is that guaranteed when taking into account the different memory cache levels (or other optimizations)?
On the read macros, the volatile type is used in a casting trick. Does anyone have a clue how this affects atomicity here? (Note that it is not used in the write operation.)
I think you are misunderstanding the (admittedly vague) usage of the words "atomic" and "volatile" here. Atomic only really means that the word will be read or written atomically (in one step, guaranteeing that the contents of that memory position will always be one write or the other, and never something in between). And the volatile keyword tells the compiler never to assume it already knows the data at that location from an earlier read/write (basically, never optimize away the read).
What the words "atomic" and "volatile" do NOT mean here is that there's any form of memory synchronization. Neither implies ANY read/write barriers or fences. Nothing is guaranteed with regards to memory and cache coherence. These functions are basically atomic only at the software level, and the hardware can optimize/lie however it deems fit.
Now as to why simply reading is enough: the memory models for each architecture are different. Many architectures can guarantee atomic reads or writes for data aligned to a certain byte offset, or x words in length, etc., and this varies from CPU to CPU. The Linux kernel contains many defines for the different architectures that let it do without any atomic calls (CMPXCHG, basically) on platforms that guarantee atomic reads/writes (sometimes only in practice, even if their spec doesn't actually guarantee it).
As for the volatile: while there is no need for it in general unless you're accessing memory-mapped IO, it all depends on when/where/why the atomic_read and atomic_write macros are being called. Many compilers will (though it is not required by the C spec) generate memory barriers/fences for volatile variables (GCC, off the top of my head, is one; MSVC does for sure). While this would normally mean that all reads/writes to this variable are officially exempt from just about any compiler optimizations, in this case, by creating a "virtual" volatile variable, only this particular instance of a read/write is off-limits for optimization and reordering.
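As a rough illustration of that last point, here is a sketch of the same cast-to-volatile idea (hypothetical names, not the kernel's own definitions): each use of the macro compiles to a real load, while ordinary accesses to the same object can still be optimized.
/* Sketch of the cast-to-volatile trick used by atomic_read(); hypothetical names. */
typedef struct { int counter; } my_atomic_t;

/* Each expansion of this macro performs the access through a volatile lvalue,
   so the compiler must emit an actual memory load for it. */
#define MY_ATOMIC_READ(v)  (*(volatile int *)&(v)->counter)

static int spin_until_nonzero(my_atomic_t *v)
{
    int val;
    do {
        val = MY_ATOMIC_READ(v);   /* re-read from memory on every iteration */
    } while (val == 0);
    return val;                    /* plain uses of 'val' can still live in a register */
}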
The reads are atomic on most major architectures, so long as they are aligned to a multiple of their size (and aren't bigger than the read size of a given type); see the Intel Architecture manuals. Writes, on the other hand, may be different: Intel states that under x86, single-byte writes and aligned writes may be atomic, while under IPF (IA64) everything uses acquire and release semantics, which makes it guaranteed atomic; see this.
The volatile prevents the compiler from caching the value locally, forcing it to be retrieved from memory wherever it is accessed.
If you write for a specific architecture, you can make assumptions specific to it.
I guess IA-64 does compile these things to a single instruction.
The cache shouldn't be an issue, unless the counter crosses a cache line boundary. But if 4/8-byte alignment is required, this can't happen.
A "real" atomic instruction is required when an operation translates into two memory accesses. This is the case for increments (read, increment, write) or compare-and-swap.
volatile affects the optimizations the compiler can do.
For example, it prevents the compiler from converting multiple reads into one read.
But on the machine instruction level, it does nothing.
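A minimal sketch of that effect (illustrative only; the exact code generation depends on the compiler and target): with volatile, each source-level read below must become a separate load instruction, whereas a non-volatile global could legally be read once and reused.
/* In real code this would typically be a memory-mapped status register;
   a plain global is used here only to keep the sketch self-contained. */
volatile unsigned int status;

unsigned int read_twice(void)
{
    unsigned int a = status;   /* first load from memory */
    unsigned int b = status;   /* second load: not merged with the first */
    return a ^ b;              /* may be nonzero if the value changed in between */
}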
Related
https://www.gnu.org/software/libc/manual/html_node/Atomic-Types.html#Atomic-Types says - In practice, you can assume that int is atomic. You can also assume that pointer types are atomic; that is very convenient. Both of these assumptions are true on all of the machines that the GNU C Library supports and on all POSIX systems we know of.
My question is whether pointer assignment can be considered atomic on the x86_64 architecture for a C program compiled with gcc's -m64 flag. The OS is 64-bit Linux and the CPU is an Intel(R) Xeon(R) CPU D-1548. One thread will be setting a pointer and another thread accessing the pointer. There is only one writer thread and one reader thread. The reader should get either the previous value of the pointer or the latest value, and no garbage value in between.
If it is not considered atomic, please let me know how I can use the gcc atomic builtins or maybe a memory barrier like __sync_synchronize to achieve the same without using locks. Interested only in a C solution and not C++. Thanks!
Bear in mind that atomicity alone is not enough for communicating between threads. Nothing prevents the compiler and CPU from reordering previous/subsequent load and store instructions with that "atomic" store. In the old days people used volatile to prevent that reordering, but it was never intended for use with threads and doesn't provide a means to specify a less or more restrictive memory order (see "Relationship with volatile" in there).
You should use C11 atomics because they guarantee both atomicity and memory order.
For almost all architectures, pointer load and store are atomic. A once notable exception was 8086/80286 where pointers could be seg:offset; there was an l[des]s instruction which could make an atomic load; but no corresponding atomic store.
The integrity of the pointer is only a small concern; your bigger issue revolves around synchronization: the pointer was at value Y, you set it to X; how will you know when nobody is using the (old) Y value?
A somewhat related problem is that you may have stored things at X which the other thread expects to find. Without synchronization, the other thread might see the new pointer value, but what it points to might not be up to date yet.
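One way to address both concerns is C11 atomics with release/acquire ordering. A sketch (made-up names, for illustration only): the writer initializes the pointed-to data, then publishes the pointer with a release store; if the reader's acquire load sees the new pointer, it is also guaranteed to see the initialized contents.
#include <stdatomic.h>
#include <stdlib.h>

struct msg { int a, b; };

/* shared pointer: written by one thread, read by another */
static _Atomic(struct msg *) shared_msg = NULL;

void publisher(void)
{
    struct msg *m = malloc(sizeof *m);
    m->a = 1;                       /* fill in the pointed-to data first */
    m->b = 2;
    /* release: the stores above become visible to any thread whose
       acquire load of shared_msg observes this new value */
    atomic_store_explicit(&shared_msg, m, memory_order_release);
}

void consumer(void)
{
    struct msg *m = atomic_load_explicit(&shared_msg, memory_order_acquire);
    if (m) {
        int sum = m->a + m->b;      /* safe: fields are fully initialized here */
        (void)sum;
    }
}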
A plain global char *ptr should not be considered atomic. It might work sometimes, especially with optimization disabled, but you can get the compiler to make safe and efficient optimized asm by using modern language features to tell it you want atomicity.
Use C11 stdatomic.h or GNU C __atomic builtins. And see Why is integer assignment on a naturally aligned variable atomic on x86? - yes the underlying asm operations are atomic "for free", but you need to control the compiler's code-gen to get sane behaviour for multithreading.
See also LWN: Who's afraid of a big bad optimizing compiler? - weird effects of using plain vars include several really bad well-known things, but also more obscure stuff like invented loads, reading a variable more than once if the compiler decides to optimize away a local tmp and load the shared var twice, instead of loading it into a register. Using asm("" ::: "memory") compiler barriers may not be sufficient to defeat that depending on where you put them.
So use proper atomic stores and loads that tell the compiler what you want: You should generally use atomic loads to read them, too.
#include <stdatomic.h> // C11 way
_Atomic char *c11_shared_var; // all access to this is atomic, functions needed only if you want weaker ordering
void foo(){
    atomic_store_explicit(&c11_shared_var, newval, memory_order_relaxed);
}
char *plain_shared_var; // GNU C
// This is a plain C var. Only specific accesses to it are atomic; be careful!
void foo() {
    __atomic_store_n(&plain_shared_var, newval, __ATOMIC_RELAXED);
}
Using __atomic_store_n on a plain var is the functionality that C++20 atomic_ref exposes. If multiple threads access a variable for the entire time that it needs to exist, you might as well just use C11 stdatomic because every access needs to be atomic (not optimized into a register or whatever). When you want to let the compiler load once and reuse that value, do char *tmp = c11_shared_var; (or atomic_load_explicit if you only want acquire instead of seq_cst; cheaper on a few non-x86 ISAs).
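For completeness, the matching reader-side loads might look like this (a sketch reusing the variables from the snippets above; relaxed ordering shown, use acquire if the pointed-to data must also be up to date):
char *read_c11(void)
{
    /* a plain read of c11_shared_var would also be atomic (seq_cst);
       the _explicit form lets you ask for weaker ordering */
    return atomic_load_explicit(&c11_shared_var, memory_order_relaxed);
}

char *read_gnu(void)
{
    return __atomic_load_n(&plain_shared_var, __ATOMIC_RELAXED);
}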
Besides lack of tearing (atomicity of asm load or store), the other key parts of _Atomic foo * are:
The compiler will assume that other threads may have changed memory contents (like volatile effectively implies), otherwise the assumption of no data-race UB will let the compiler hoist loads out of loops. Without this, dead-store elimination might only do one store at the end of a loop, not updating the value multiple times.
The read side of the problem is usually what bites people in practice, see Multithreading program stuck in optimized mode but runs normally in -O0 - e.g. while(!flag){} becomes if(!flag) infinite_loop; with optimization enabled.
Ordering wrt. other code. e.g. you can use memory_order_release to make sure that other threads that see the pointer update also see all changes to the pointed-to data. (On x86 that's as simple as compile-time ordering, no extra barriers needed for acquire/release, only for seq_cst. Avoid seq_cst if you can; mfence or locked operations are slow.)
Guarantee that the store will compile to a single asm instruction. You'd be depending on this. It does happen in practice with sane compilers, although it's conceivable that a compiler might decide to use rep movsb to copy a few contiguous pointers, and that some machine somewhere might have a microcoded implementation that does some stores narrower than 8 bytes.
(This failure mode is highly unlikely; the Linux kernel relies on volatile load/store compiling to a single instruction with GCC / clang for its hand-rolled intrinsics. But if you just used asm("" ::: "memory") to make sure a store happened on a non-volatile variable, there's a chance.)
Also, something like ptr++ will compile to an atomic RMW operation like lock add qword [mem], 4, rather than separate load and store like volatile would. (See Can num++ be atomic for 'int num'? for more about atomic RMWs). Avoid that if you don't need it, it's slower. e.g. atomic_store_explicit(&ptr, ptr + 1, mo_release); - seq_cst loads are cheap on x86-64 but seq_cst stores aren't.
Also note that memory barriers can't create atomicity (lack of tearing), they can only create ordering wrt other ops.
In practice x86-64 ABIs do have alignof(void*) = 8, so all pointer objects should be naturally aligned (except in a __attribute__((packed)) struct, which violates the ABI), so you can use __atomic_store_n on them. It should compile to what you want (a plain store, no overhead), and meet the asm requirements to be atomic.
See also When to use volatile with multi threading? - you can roll your own atomics with volatile and asm memory barriers, but don't. The Linux kernel does that, but it's a lot of effort for basically no gain, especially for a user-space program.
Side note: an often repeated misconception is that volatile or _Atomic are needed to avoid reading stale values from cache. This is not the case.
All machines that run C11 threads across multiple cores have coherent caches, not needing explicit flush instructions in the reader or writer. Just ordinary load or store instructions, like x86 mov. The key is to not let the compiler keep values of shared variable in CPU registers (which are thread-private). It normally can do this optimization because of the assumption of no data-race Undefined Behaviour. Registers are very much not the same thing as L1d CPU cache; managing what's in registers vs. memory is done by the compiler, while hardware keeps cache in sync. See When to use volatile with multi threading? for more details about why coherent caches is sufficient to make volatile work like memory_order_relaxed.
See Multithreading program stuck in optimized mode but runs normally in -O0 for an example.
"Atomic" is treated as this quantum state where something can be both atomic and not atomic at the same time because "it's possible" that "some machines" "somewhere" "might not" write "a certain value" atomically. Maybe.
That is not the case. Atomicity has a very specific meaning, and it solves a very specific problem: threads being pre-empted by the OS to schedule another thread in its place on that core. And you cannot stop a thread from executing mid-assembly instruction.
What that means is that any single assembly instruction is "atomic" by definition. And since you have register move instructions, any register-sized copy is atomic by definition. That means a 32-bit integer on a 32-bit CPU, and a 64-bit integer on a 64-bit CPU, are all atomic -- and of course that includes pointers (ignore all the people who will tell you "some architectures" have pointers of "different size" than registers; that hasn't been the case since the 386).
You should however be careful not to hit variable caching problems (ie one thread writing a pointer, and another trying to read it but getting an old value from the cache), use volatile as needed to prevent this.
If there are two threads accessing a global variable then many tutorials say make the variable volatile to prevent the compiler caching the variable in a register and it thus not getting updated correctly.
However two threads both accessing a shared variable is something which calls for protection via a mutex isn't it?
But in that case, between the thread locking and releasing the mutex the code is in a critical section where only that one thread can access the variable, in which case the variable doesn't need to be volatile?
So therefore what is the use/purpose of volatile in a multi-threaded program?
Short & quick answer: volatile is (nearly) useless for platform-agnostic, multithreaded application programming. It does not provide any synchronization, it does not create memory fences, nor does it ensure the order of execution of operations. It does not make operations atomic. It does not make your code magically thread safe. volatile may be the single-most misunderstood facility in all of C++. See this, this and this for more information about volatile
On the other hand, volatile does have some use that may not be so obvious. It can be used much in the same way one would use const to help the compiler show you where you might be making a mistake in accessing some shared resource in a non-protected way. This use is discussed by Alexandrescu in this article. However, this is basically using the C++ type system in a way that is often viewed as a contrivance and can evoke Undefined Behavior.
volatile was specifically intended to be used when interfacing with memory-mapped hardware, signal handlers, and setjmp. This makes volatile directly applicable to systems-level programming rather than normal applications-level programming.
The 2003 C++ Standard does not say that volatile applies any kind of Acquire or Release semantics on variables. In fact, the Standard is completely silent on all matters of multithreading. However, specific platforms do apply Acquire and Release semantics on volatile variables.
[Update for C++11]
The C++11 Standard now does acknowledge multithreading directly in the memory model and the language, and it provides library facilities to deal with it in a platform-independent way. However the semantics of volatile still have not changed. volatile is still not a synchronization mechanism. Bjarne Stroustrup says as much in TCPPPL4E:
Do not use volatile except in low-level code that deals directly with hardware.
Do not assume volatile has special meaning in the memory model. It does not. It is not -- as in some later languages -- a synchronization mechanism. To get synchronization, use atomic, a mutex, or a condition_variable.
[/End update]
The above all applies to the C++ language itself, as defined by the 2003 Standard (and now the 2011 Standard). Some specific platforms however do add additional functionality or restrictions to what volatile does. For example, in MSVC 2010 (at least) Acquire and Release semantics do apply to certain operations on volatile variables. From the MSDN:
When optimizing, the compiler must maintain ordering among references to volatile objects as well as references to other global objects. In particular,
A write to a volatile object (volatile write) has Release semantics; a reference to a global or static object that occurs before a write to a volatile object in the instruction sequence will occur before that volatile write in the compiled binary.
A read of a volatile object (volatile read) has Acquire semantics; a reference to a global or static object that occurs after a read of volatile memory in the instruction sequence will occur after that volatile read in the compiled binary.
However, you might take note of the fact that if you follow the above link, there is some debate in the comments as to whether or not acquire/release semantics actually apply in this case.
In C++11, don't use volatile for threading, only for MMIO
But TL:DR, it does "work" sort of like atomic with mo_relaxed on hardware with coherent caches (i.e. everything); it is sufficient to stop compilers keeping vars in registers. atomic doesn't need memory barriers to create atomicity or inter-thread visibility, only to make the current thread wait before/after an operation to create ordering between this thread's accesses to different variables. mo_relaxed never needs any barriers, just load, store, or RMW.
For roll-your-own atomics with volatile (and inline-asm for barriers) in the bad old days before C++11 std::atomic, volatile was the only good way to get some things to work. But it depended on a lot of assumptions about how implementations worked and was never guaranteed by any standard.
For example the Linux kernel still uses its own hand-rolled atomics with volatile, but only supports a few specific C implementations (GNU C, clang, and maybe ICC). Partly that's because of GNU C extensions and inline asm syntax and semantics, but also because it depends on some assumptions about how compilers work.
It's almost always the wrong choice for new projects; you can use std::atomic (with std::memory_order_relaxed) to get a compiler to emit the same efficient machine code you could with volatile. std::atomic with mo_relaxed obsoletes volatile for threading purposes. (except maybe to work around missed-optimization bugs with atomic<double> on some compilers.)
The internal implementation of std::atomic on mainstream compilers (like gcc and clang) does not just use volatile internally; compilers directly expose atomic load, store and RMW builtin functions. (e.g. GNU C __atomic builtins which operate on "plain" objects.)
Volatile is usable in practice (but don't do it)
That said, volatile is usable in practice for things like an exit_now flag on all(?) existing C++ implementations on real CPUs, because of how CPUs work (coherent caches) and shared assumptions about how volatile should work. But not much else, and is not recommended. The purpose of this answer is to explain how existing CPUs and C++ implementations actually work. If you don't care about that, all you need to know is that std::atomic with mo_relaxed obsoletes volatile for threading.
(The ISO C++ standard is pretty vague on it, just saying that volatile accesses should be evaluated strictly according to the rules of the C++ abstract machine, not optimized away. Given that real implementations use the machine's memory address-space to model C++ address space, this means volatile reads and assignments have to compile to load/store instructions to access the object-representation in memory.)
As another answer points out, an exit_now flag is a simple case of inter-thread communication that doesn't need any synchronization: it's not publishing that array contents are ready or anything like that. Just a store that's noticed promptly by a not-optimized-away load in another thread.
// global
bool exit_now = false;
// in one thread
while (!exit_now) { do_stuff; }
// in another thread, or signal handler in this thread
exit_now = true;
Without volatile or atomic, the as-if rule and assumption of no data-race UB allows a compiler to optimize it into asm that only checks the flag once, before entering (or not) an infinite loop. This is exactly what happens in real life for real compilers. (And usually optimize away much of do_stuff because the loop never exits, so any later code that might have used the result is not reachable if we enter the loop).
// Optimizing compilers transform the loop into asm like this
if (!exit_now) { // check once before entering loop
while(1) do_stuff; // infinite loop
}
Multithreading program stuck in optimized mode but runs normally in -O0 is an example (with description of GCC's asm output) of how exactly this happens with GCC on x86-64. Also MCU programming - C++ O2 optimization breaks while loop on electronics.SE shows another example.
We normally want aggressive optimizations that CSE and hoist loads out of loops, including for global variables.
Before C++11, volatile bool exit_now was one way to make this work as intended (on normal C++ implementations). But in C++11, data-race UB still applies to volatile so it's not actually guaranteed by the ISO standard to work everywhere, even assuming HW coherent caches.
Note that for wider types, volatile gives no guarantee of lack of tearing. I ignored that distinction here for bool because it's a non-issue on normal implementations. But that's also part of why volatile is still subject to data-race UB instead of being equivalent to relaxed atomic.
Note that "as intended" doesn't mean the thread doing exit_now waits for the other thread to actually exit. Or even that it waits for the volatile exit_now=true store to even be globally visible before continuing to later operations in this thread. (atomic<bool> with the default mo_seq_cst would make it wait before any later seq_cst loads at least. On many ISAs you'd just get a full barrier after the store).
C++11 provides a non-UB way that compiles the same
A "keep running" or "exit now" flag should use std::atomic<bool> flag with mo_relaxed
Using
flag.store(true, std::memory_order_relaxed)
while( !flag.load(std::memory_order_relaxed) ) { ... }
will give you the exact same asm (with no expensive barrier instructions) that you'd get from volatile flag.
As well as no-tearing, atomic also gives you the ability to store in one thread and load in another without UB, so the compiler can't hoist the load out of a loop. (The assumption of no data-race UB is what allows the aggressive optimizations we want for non-atomic non-volatile objects.) This feature of atomic<T> is pretty much the same as what volatile does for pure loads and pure stores.
atomic<T> also makes += and so on into atomic RMW operations (significantly more expensive than an atomic load into a temporary, operate, then a separate atomic store). If you don't want an atomic RMW, write your code with a local temporary.
With the default seq_cst ordering you'd get from while(!flag), it also adds ordering guarantees wrt. non-atomic accesses, and to other atomic accesses.
(In theory, the ISO C++ standard doesn't rule out compile-time optimization of atomics. But in practice compilers don't because there's no way to control when that wouldn't be ok. There are a few cases where even volatile atomic<T> might not be enough control over optimization of atomics if compilers did optimize, so for now compilers don't. See Why don't compilers merge redundant std::atomic writes? Note that wg21/p0062 recommends against using volatile atomic in current code to guard against optimization of atomics.)
volatile does actually work for this on real CPUs (but still don't use it)
even with weakly-ordered memory models (non-x86). But don't actually use it, use atomic<T> with mo_relaxed instead!! The point of this section is to address misconceptions about how real CPUs work, not to justify volatile. If you're writing lockless code, you probably care about performance. Understanding caches and the costs of inter-thread communication is usually important for good performance.
Real CPUs have coherent caches / shared memory: after a store from one core becomes globally visible, no other core can load a stale value. (See also Myths Programmers Believe about CPU Caches which talks some about Java volatiles, equivalent to C++ atomic<T> with seq_cst memory order.)
When I say load, I mean an asm instruction that accesses memory. That's what a volatile access ensures, and is not the same thing as lvalue-to-rvalue conversion of a non-atomic / non-volatile C++ variable. (e.g. local_tmp = flag or while(!flag)).
The only thing you need to defeat is compile-time optimizations that don't reload at all after the first check. Any load+check on each iteration is sufficient, without any ordering. Without synchronization between this thread and the main thread, it's not meaningful to talk about when exactly the store happened, or ordering of the load wrt. other operations in the loop. Only when it's visible to this thread is what matters. When you see the exit_now flag set, you exit. Inter-core latency on a typical x86 Xeon can be something like 40ns between separate physical cores.
In theory: C++ threads on hardware without coherent caches
I don't see any way this could be remotely efficient, with just pure ISO C++ without requiring the programmer to do explicit flushes in the source code.
In theory you could have a C++ implementation on a machine that wasn't like this, requiring compiler-generated explicit flushes to make things visible to other threads on other cores. (Or for reads to not use a maybe-stale copy). The C++ standard doesn't make this impossible, but C++'s memory model is designed around being efficient on coherent shared-memory machines. E.g. the C++ standard even talks about "read-read coherence", "write-read coherence", etc. One note in the standard even points out the connection to hardware:
http://eel.is/c++draft/intro.races#19
[ Note: The four preceding coherence requirements effectively disallow compiler reordering of atomic operations to a single object, even if both operations are relaxed loads. This effectively makes the cache coherence guarantee provided by most hardware available to C++ atomic operations. — end note ]
There's no mechanism for a release store to only flush itself and a few select address-ranges: it would have to sync everything because it wouldn't know what other threads might want to read if their acquire-load saw this release-store (forming a release-sequence that establishes a happens-before relationship across threads, guaranteeing that earlier non-atomic operations done by the writing thread are now safe to read. Unless it did further writes to them after the release store...) Or compilers would have to be really smart to prove that only a few cache lines needed flushing.
Related: my answer on Is mov + mfence safe on NUMA? goes into detail about the non-existence of x86 systems without coherent shared memory. Also related: Loads and stores reordering on ARM for more about loads/stores to the same location.
There are I think clusters with non-coherent shared memory, but they're not single-system-image machines. Each coherency domain runs a separate kernel, so you can't run threads of a single C++ program across it. Instead you run separate instances of the program (each with their own address space: pointers in one instance aren't valid in the other).
To get them to communicate with each other via explicit flushes, you'd typically use MPI or other message-passing API to make the program specify which address ranges need flushing.
Real hardware doesn't run std::thread across cache coherency boundaries:
Some asymmetric ARM chips exist, with a shared physical address space but not inner-shareable cache domains, so not coherent (e.g. an ARM Cortex-A8 core paired with a Cortex-M3, like the TI Sitara AM335x).
But different kernels would run on those cores, not a single system image that could run threads across both cores. I'm not aware of any C++ implementations that run std::thread threads across CPU cores without coherent caches.
For ARM specifically, GCC and clang generate code assuming all threads run in the same inner-shareable domain. In fact, the ARMv7 ISA manual says
This architecture (ARMv7) is written with an expectation that all processors using the same operating system or hypervisor are in the same Inner Shareable shareability domain
So non-coherent shared memory between separate domains is only a thing for explicit system-specific use of shared memory regions for communication between different processes under different kernels.
See also this CoreCLR discussion about code-gen using dmb ish (Inner Shareable barrier) vs. dmb sy (System) memory barriers in that compiler.
I make the assertion that no C++ implementation for any other ISA runs std::thread across cores with non-coherent caches. I don't have proof that no such implementation exists, but it seems highly unlikely. Unless you're targeting a specific exotic piece of HW that works that way, your thinking about performance should assume MESI-like cache coherency between all threads. (Preferably use atomic<T> in ways that guarantee correctness, though!)
Coherent caches makes it simple
But on a multi-core system with coherent caches, implementing a release-store just means ordering commit into cache for this thread's stores, not doing any explicit flushing. (https://preshing.com/20120913/acquire-and-release-semantics/ and https://preshing.com/20120710/memory-barriers-are-like-source-control-operations/). (And an acquire-load means ordering access to cache in the other core).
A memory barrier instruction just blocks the current thread's loads and/or stores until the store buffer drains; that always happens as fast as possible on its own. (Or for LoadLoad / LoadStore barriers, block until previous loads have completed.) (Does a memory barrier ensure that the cache coherence has been completed? addresses this misconception). So if you don't need ordering, just prompt visibility in other threads, mo_relaxed is fine. (And so is volatile, but don't do that.)
See also C/C++11 mappings to processors
Fun fact: on x86, every asm store is a release-store because the x86 memory model is basically seq-cst plus a store buffer (with store forwarding).
Semi-related re: store buffer, global visibility, and coherency: C++11 guarantees very little. Most real ISAs (except PowerPC) do guarantee that all threads can agree on the order of appearance of two stores by two other threads. (In formal computer-architecture memory model terminology, they're "multi-copy atomic".)
Will two atomic writes to different locations in different threads always be seen in the same order by other threads?
Concurrent stores seen in a consistent order
Another misconception is that memory fence asm instructions are needed to flush the store buffer for other cores to see our stores at all. Actually the store buffer is always trying to drain itself (commit to L1d cache) as fast as possible, otherwise it would fill up and stall execution. What a full barrier / fence does is stall the current thread until the store buffer is drained, so our later loads appear in the global order after our earlier stores.
Are loads and stores the only instructions that gets reordered?
x86 mfence and C++ memory barrier
Globally Invisible load instructions
(x86's strongly ordered asm memory model means that volatile on x86 may end up giving you closer to mo_acq_rel, except that compile-time reordering with non-atomic variables can still happen. But most non-x86 have weakly-ordered memory models so volatile and relaxed are about as weak as mo_relaxed allows.)
(Editor's note: in C++11 volatile is not the right tool for this job and still has data-race UB. Use std::atomic<bool> with std::memory_order_relaxed loads/stores to do this without UB. On real implementations it will compile to the same asm as volatile. I added an answer with more detail, and also addressing the misconceptions in comments that weakly-ordered memory might be a problem for this use-case: all real-world CPUs have coherent shared memory so volatile will work for this on real C++ implementations. But still don't do it.
Some discussion in comments seems to be talking about other use-cases where you would need something stronger than relaxed atomics. This answer already points out that volatile gives you no ordering.)
Volatile is occasionally useful for the following reason: this code:
/* global */ bool flag = false;
while (!flag) {}
is optimized by gcc to:
if (!flag) { while (true) {} }
Which is obviously incorrect if the flag is written to by the other thread. Note that without this optimization the synchronization mechanism probably works (depending on the other code, some memory barriers may be needed) - there is no need for a mutex in a 1-producer / 1-consumer scenario.
Otherwise the volatile keyword is too weird to be usable - it does not provide any memory ordering guarantees wrt both volatile and non-volatile accesses, and does not provide any atomic operations - i.e. you get no help from the compiler with the volatile keyword except disabled register caching.
You need volatile and possibly locking.
volatile tells the optimiser that the value can change asynchronously, thus
volatile bool flag = false;
while (!flag) {
/*do something*/
}
will read flag every time around the loop.
If you turn optimisation off, or make every variable volatile, a program will behave the same but slower. volatile just means 'I know you may have just read it and know what it says, but if I say read it then read it.'
Locking is a part of the program. So, by the way, if you are implementing semaphores then among other things they must be volatile. (Don't try it, it is hard, will probably need a little assembler or the new atomic stuff, and it has already been done.)
#include <iostream>
#include <thread>
#include <unistd.h>
using namespace std;
bool checkValue = false;
int main()
{
    std::thread writer([&](){
        sleep(2);
        checkValue = true;
        std::cout << "Value of checkValue set to " << checkValue << std::endl;
    });
    std::thread reader([&](){
        while(!checkValue);
    });
    writer.join();
    reader.join();
}
Once an interviewer who also believed that volatile is useless argued with me that optimisation wouldn't cause any issues, and was referring to different cores having separate cache lines and all that (didn't really understand what he was exactly referring to). But this piece of code, when compiled with -O3 on g++ (g++ -O3 thread.cpp -lpthread), shows undefined behaviour. Basically, if the value gets set before the while check it works fine, and if not it goes into a loop without bothering to fetch the value (which was actually changed by the other thread). Basically I believe the value of checkValue only gets fetched once into a register and never gets checked again under the highest level of optimisation. If it's set to true before the fetch, it works fine, and if not it goes into a loop. Please correct me if I am wrong.
How does the compiler or OS distinguish between a sig_atomic_t variable and a normal int variable, and ensure that the operation will be atomic? Programs using both produce the same assembler code. How is extra care taken to make the operation atomic?
sig_atomic_t is not an atomic data type. It is just the data type that you are allowed to use in the context of a signal handler, that is all. So better read the name as "atomic relative to signal handling".
To guarantee communication with and from a signal handler, only one of the properties of atomic data types is needed, namely the fact that read and update will always see a consistent value. Other data types (such as perhaps long long) could be written with several assembler instructions for the lower and higher parts, whereas sig_atomic_t is guaranteed to be read and written in one go.
So a platform may choose any integer base type as sig_atomic_t for which it can make the guarantee that volatile sig_atomic_t can be safely used in signal handlers. Many platforms chose int for this, because they know that for them int is written with a single instruction.
The latest C standard, C11, has atomic types, but those are a completely different thing. Some of them (those that are "lock-free") may also be used in signal handlers, but that again is a completely different story.
Note that sig_atomic_t is not thread-safe, only async-signal safe.
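To make the pattern concrete, here is a minimal sketch of the canonical signal-handler flag (hypothetical program; error handling omitted):
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* async-signal-safe flag: safe between a signal handler and the thread
   it interrupts, but not a thread-to-thread synchronization mechanism */
static volatile sig_atomic_t got_signal = 0;

static void handler(int signo)
{
    (void)signo;
    got_signal = 1;              /* single store, atomic w.r.t. signals */
}

int main(void)
{
    signal(SIGINT, handler);
    while (!got_signal)          /* volatile forces a fresh read each iteration */
        pause();                 /* sleep until a signal arrives */
    puts("got signal, exiting");
    return 0;
}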
Atomics involve two types of barriers:
Compiler barrier. It makes sure that the compiler does not reorder reads/writes from/to an atomic variable relative to reads and writes to other variables. This is what the volatile keyword does.
CPU barrier and visibility. It makes sure that the CPU does not reorder reads and writes. On x86 all loads and stores to aligned 1,2,4,8-byte storage are atomic. Visibility makes sure that stores become visible to other threads. Again, on Intel CPUs, stores are visible immediately to other threads due to cache coherence and memory coherence protocol MESI. But that may change in the future. See §8.1 LOCKED ATOMIC OPERATIONS in Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3A for more details.
For comprehensive treatment of the subject watch atomic Weapons: The C++ Memory Model and Modern Hardware.
sig_atomic_t is often just a typedef (to some system specific integral type, generally int or long). And it is very important to use volatile sig_atomic_t (not just sig_atomic_t alone).
When you add the volatile keyword, the compiler has to avoid a lot of optimizations.
The recent C11 standard added _Atomic and <stdatomic.h>. You need a very recent GCC (e.g. 4.9) to have it supported.
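For reference, a C11 sketch of the same idea using an atomic type instead of volatile sig_atomic_t (this assumes atomic int is always lock-free, which the #if check verifies at compile time; lock-free atomics are async-signal-safe per C11):
#include <signal.h>
#include <stdatomic.h>

#if ATOMIC_INT_LOCK_FREE < 2
#error "atomic int is not always lock-free on this target"
#endif

static atomic_int done;          /* static storage duration: zero-initialized */

static void handler(int signo)
{
    (void)signo;
    atomic_store_explicit(&done, 1, memory_order_relaxed);
}

int keep_running(void)
{
    return !atomic_load_explicit(&done, memory_order_relaxed);
}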
Programs using both produce the same assembler code. How is extra care taken to make the operation atomic?
Although this is an old question, I think it's still worth addressing this part of the question specifically. On Linux, sig_atomic_t is provided by glibc. sig_atomic_t in glibc is a typedef for int and has no special treatment (as of this post). The glibc docs address this:
In practice, you can assume that int is atomic. You can also assume that pointer types are atomic; that is very convenient. Both of these assumptions are true on all of the machines that the GNU C Library supports and on all POSIX systems we know of.
In other words, it just so happens that regular int already satisfies the requirements of sig_atomic_t on all the platforms that glibc supports and no special support is needed. Nonetheless, the C and POSIX standards mandate sig_atomic_t because there could be some exotic machine on which we want to implement C and POSIX for which int does not fulfill the requirements of sig_atomic_t.
This data type seems to be atomic.
From here:
24.4.7.2 Atomic Types
To avoid uncertainty about interrupting access to a variable, you can use a particular data type for which access is always atomic: sig_atomic_t. Reading and writing this data type is guaranteed to happen in a single instruction, so there's no way for a handler to run "in the middle" of an access.
The type sig_atomic_t is always an integer data type, but which one it is, and how many bits it contains, may vary from machine to machine.
Data Type: sig_atomic_t
This is an integer data type. Objects of this type are always accessed atomically.
In practice, you can assume that int is atomic. You can also assume that pointer types are atomic; that is very convenient. Both of these assumptions are true on all of the machines that the GNU C Library supports and on all POSIX systems we know of.
It pays off to have studied some kernel-development-level memory models...
Anyway, sig_atomic_t is atomic. The normal definition of atomic is that you can't get a "partial" result, e.g. due to concurrent writes, or concurrent read and write. Attaching any other properties to "atomic" is dangerous, and causes the type of confusion seen here.
So, when you do any sort of sig_atomic_t store, you are guaranteed to either get the old value, or the new value when something reads it back -- be it before, during, or after that store.
Answering your direct question about "how that works": the compiler will use an underlying type size and issue extra machine instructions where required, to signal the CPU that it must do an atomic store and atomic read.
All that said, it is important to note that you really can't say much about whether you will get the old or the new value when you try to read an atomic variable like sig_atomic_t. All you know is that you will not get a mix of two different stores that raced each other, nor a mix of the old and the new value while a store is happening concurrently with your read.
In C, you also normally need to declare variables as "volatile sig_atomic_t" because otherwise the compiler has no reason to not cache it, and you could be using an older value for longer than expected: the compiler has no reason to force a fresh memory read if it already has an old value in a register from a previous read. "volatile" tells the compiler to always do a fresh memory read when it needs to get the value of the variable.
Note that neither "volatile" nor "sig_atomic_t" are strong enough "compiler barriers" to ensure it is not reordered around by the compiler optimizer, let alone by the CPU itself (which would require a memory barrier, not just a compiler barrier). If you need any visibility constraints re. other threads, processors, and even hardware when doing MMIO, you need "extra stuff" (compiler barriers, and memory barriers).
And that's where C11 _Atomic and the C11 memory models come into play. They're not about "atomic" reads and stores only, they also include a lot of visibility rules and constraints re. other entities (MMIO devices, other execution threads, other processors).
I'd like to be able to use something like this to make access to my ports clearer:
typedef struct {
    unsigned rfid_en: 1;
    unsigned lcd_en: 1;
    unsigned lcd_rs: 1;
    unsigned lcd_color: 3;
    unsigned unused: 2;
} portc_t;
extern volatile portc_t *portc;
But is it safe? It works for me, but...
1) Is there a chance of race conditions?
2) Does gcc generate read-modify-write cycles for code that modifies a single field?
3) Is there a safe way to update multiple fields?
4) Is the bit packing and order guaranteed? (I don't care about portability in this case, so gcc-specific options to make it Do What I Mean are fine.)
Handling race conditions must be done by operating-system-level calls (which will indeed use read-modify-writes); GCC won't do that.
Likewise, and no, GCC does not generate read-modify-write instructions for volatile. However, a CPU will normally do the write atomically (simply because it's one instruction). This holds true if the bit-field stays within an int, for example, but it is CPU/implementation dependent; some may guarantee this up to 8-byte values, while others only up to 4-byte values. So under that condition, bits can't be mixed up (i.e. a few written from one thread, and others from another thread, won't occur).
The only way to set multiple fields at the same time is to set these values in an intermediate variable, and then assign this variable to the volatile (see the sketch after this answer).
The C standard specifies that bits are packed together (it seems that there might be exceptions when you start mixing types, but I've never seen that; everyone always uses unsigned ...).
Note: Defining something volatile does not cause a compiler to generate read-modify-writes. What volatile does is tell the compiler that an assignment to that pointer/address must always be made, and may not be optimised away.
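A sketch of the intermediate-variable approach from point 3, using the portc_t type from the question (whether the final assignment really is a single store still depends on the compiler and the port width, and this is still a read-modify-write sequence at the C level, so it does not protect against interrupts or other threads):
/* Build the whole value in a non-volatile temporary, then write it
   to the volatile port in a single assignment. */
void set_lcd_mode(volatile portc_t *port, unsigned color)
{
    portc_t tmp = *port;         /* one read of the port */
    tmp.lcd_en = 1;
    tmp.lcd_rs = 0;
    tmp.lcd_color = color & 0x7;
    *port = tmp;                 /* one write of the port */
}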
Here's another post about the same subject matter. I found there to be quite a few other places where you can find more details.
The keyword volatile has nothing to do with race conditions or which thread is accessing code. The keyword tells the compiler not to cache the value in registers. It tells the compiler to generate code so that every access goes to the location allocated to the variable, because each access may see a different value. This is the case with memory-mapped peripherals. This doesn't help if your MPU has its own cache. There are usually special instructions or un-cached areas of the memory map to ensure that the location, and not a cached copy, is read.
As for being thread safe, just remember that even a memory access may not be thread safe if it is done in two instructions. E.g. in 8051 assembler, you have to get a 16-bit value one byte at a time. The instruction sequence can be interrupted by an IRQ or another thread, and the second byte read or written may then be inconsistent, potentially corrupting the value.
A quick question I've been wondering about for some time: does the CPU assign values atomically, or bit by bit (say, for example, a 32-bit integer)?
If it's bit by bit, could another thread accessing this exact location get a "part" of the to-be-assigned value?
Think of this:
I have two threads and one shared "unsigned int" variable (call it "g_uiVal").
Both threads loop.
One is printing "g_uiVal" with printf("%u\n", g_uiVal).
The second just increases this number.
Will the printing thread ever print something that is totally not or part of "g_uiVal"'s value?
In code:
unsigned int g_uiVal;

void thread_writer()
{
    g_uiVal++;
}

void thread_reader()
{
    while(1)
        printf("%u\n", g_uiVal);
}
Depends on the bus widths of the CPU and memory. In a PC context, with anything other than a really ancient CPU, accesses of up to 32 bits are atomic; 64-bit accesses may or may not be. In the embedded space, many (most?) CPUs are 32 bits wide and there is no provision for anything wider, so your int64_t is guaranteed to be non-atomic.
I believe the only correct answer is "it depends". On what you may ask?
Well for starters which CPU. But also some CPUs are atomic for writing word width values, but only when aligned. It really is not something you can guarantee at a C language level.
Many compilers offer "intrinsics" to emit correct atomic operations. These are extensions which act like functions, but emit the correct code for your target architecture to get the needed atomic operations. For example: http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html
You said "bit-by-bit" in your question. I don't think any architecture does operations a bit at a time, except with some specialized serial protocol busses. Standard memory read/writes are done with 8, 16, 32, or 64 bits of granularity. So it is POSSIBLE the operation in your example is atomic.
However, the answer is heavily platform dependent.
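As a sketch of what those intrinsics look like for the counter in the question (using the newer GCC __atomic builtins; the older __sync ones work similarly, and the function names here are made up):
unsigned int g_uiVal_atomic;     /* same counter, but accessed only via builtins */

void thread_writer_atomic(void)
{
    /* atomic read-modify-write increment (e.g. lock add on x86) */
    __atomic_fetch_add(&g_uiVal_atomic, 1, __ATOMIC_RELAXED);
}

unsigned int thread_reader_atomic(void)
{
    /* atomic load: never observes a torn value */
    return __atomic_load_n(&g_uiVal_atomic, __ATOMIC_RELAXED);
}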
It depends on the CPU's capabilities. Can the hardware do an atomic 32-bit operation? Here's a hint: if the variable you are working on is larger than the native register size (e.g. a 64-bit int on a 32-bit system), it's definitely NOT atomic.
It depends on how the compiler generates the machine code. It could have turned your 32-bit variable access into 4x 8-bit memory reads.
It gets tricky if the address of what you are accessing is not aligned across a machine's natural word boundary. You can hit a cache fault or page fault.
It is VERY POSSIBLE that you would see a corrupt or unexpected value using the code example that you posted.
Your platform probably provides some method of doing atomic operations. In the case of a Windows platform, it is via the Interlocked functions. In the case of Linux/Unix, look at the atomic_t type.
To add to what has been said so far - another potential concern is caching. CPUs tend to work with the local (on die) memory cache which may or may not be immediately flushed back to the main memory. If the box has more than one CPU, it is possible that another CPU will not see the changes for some time after the modifying CPU made them - unless there is some synchronization command informing all CPUs that they should synchronize their on-die caches. As you can imagine such synchronization can considerably slow the processing down.
Don't forget that the compiler assumes single-thread when optimizing, and this whole thing could just go away.
POSIX defines the special type sig_atomic_t which guarantees that writes to it are atomic with respect to signals, which will make it also atomic from the point of view of other threads, like you want. They don't specifically define an atomic cross-thread type like this, since thread communication is expected to be mediated by mutexes or other synchronization primitives.
Considering modern microprocessors (and ignoring microcontrollers), the 32-bit assignment is atomic, not bit-by-bit.
However, now completely off of your question's topic... the printing thread could still print something that is not expected because of the lack of synchronization in this example, of course, due to instruction reordering and multiple cores each with their own copy of g_uiVal in their caches.