What is the GCC builtin for an atomic set?

I see the list of builtins at https://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Atomic-Builtins.html. But for an atomic set, do you need to use the pair __sync_lock_test_and_set and __sync_lock_release?
I have seen the following example at https://attractivechaos.wordpress.com/2011/10/06/multi-threaded-programming-efficiency-of-locking/:
volatile int lock = 0;

void *worker(void *arg)
{
    while (__sync_lock_test_and_set(&lock, 1))
        ;   /* spin until we acquire the lock */
    // critical section
    __sync_lock_release(&lock);
    return 0;
}
But if I use this example, and do my atomic set inside the critical section, then atomic sets to different variables will be unnecessarily serialized.
Appreciate any input on how to do an atomic set where I have multiple atomic variables.

Per the definitions in the GCC manual, yes, you need to use both:
__sync_synchronize (...)
    This builtin issues a full memory barrier.

type __sync_lock_test_and_set (type *ptr, type value, ...)
    This builtin, as described by Intel, is not a traditional test-and-set
    operation, but rather an atomic exchange operation. It writes value
    into *ptr, and returns the previous contents of *ptr. Many targets
    have only minimal support for such locks, and do not support a full
    exchange operation. In this case, a target may support reduced
    functionality here by which the only valid value to store is the
    immediate constant 1. The exact value actually stored in *ptr is
    implementation defined.
    This builtin is not a full barrier, but rather an acquire barrier.
    This means that references after the builtin cannot move to (or be
    speculated to) before the builtin, but previous memory stores may not
    be globally visible yet, and previous memory loads may not yet be
    satisfied.

void __sync_lock_release (type *ptr, ...)
    This builtin releases the lock acquired by __sync_lock_test_and_set.
    Normally this means writing the constant 0 to *ptr. This builtin is
    not a full barrier, but rather a release barrier. This means that all
    previous memory stores are globally visible, and all previous memory
    loads have been satisfied, but following memory reads are not
    prevented from being speculated to before the barrier.

I came up with this solution. Please reply if you know a better one:
typedef struct {
    volatile int lock;   // must be initialized to 0 before 1st call to atomic64_set
    volatile long long counter;
} atomic64_t;

static inline void atomic64_set(atomic64_t *v, long long i)
{
    // see https://attractivechaos.wordpress.com/2011/10/06/multi-threaded-programming-efficiency-of-locking/
    // for an explanation of __sync_lock_test_and_set
    while (__sync_lock_test_and_set(&v->lock, 1)) { // we don't have the lock, so busy wait until
        while (v->lock);                            // it is released (i.e. lock is set to 0)
    }                                               // by the holder via __sync_lock_release()
    // critical section
    v->counter = i;
    __sync_lock_release(&v->lock);
}
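For what it's worth, with a newer GCC (4.7+) that provides the __atomic builtins, a 64-bit atomic set usually needs no lock at all. A minimal sketch, assuming the target supports lock-free 64-bit atomic loads and stores (on some 32-bit targets GCC will instead emit calls into libatomic, so you may need to link with -latomic):
typedef struct {
    long long counter;   /* no separate lock member needed */
} atomic64_t;

static inline void atomic64_set(atomic64_t *v, long long i)
{
    __atomic_store_n(&v->counter, i, __ATOMIC_SEQ_CST);
}

static inline long long atomic64_read(atomic64_t *v)
{
    return __atomic_load_n(&v->counter, __ATOMIC_SEQ_CST);
}
With this, sets to different variables are independent and never serialize against each other.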

Related

Testing lockless buffer copy in C using memory barriers

I have a few questions regarding memory barriers.
Say I have the following C code (it will be run from both C++ and C code, so atomics are not possible) that copies one array into another. Multiple threads may call thread_func(), and I want to make sure that my_str is returned only after it has been fully initialized. In this case, it is a given that the last byte of the buffer can't be 0, so checking that the last byte is not 0 should suffice.
Due to reordering by the compiler/CPU, this can be a problem, as the last byte might get written before the previous bytes, causing my_str to be returned with a partially copied buffer. To get around this, I want to use a memory barrier. A mutex would work of course, but would be too heavy for my uses.
Keep in mind that all threads will call thread_func() with the same input, so even if multiple threads call init() a couple of times, it's OK as long as, in the end, thread_func() returns a valid my_str and all subsequent calls after initialization return my_str directly.
Please tell me whether each of the following approaches works, or whether there could be issues in some scenarios; aside from solving the problem, I'd like to learn more about memory barriers.
1. __sync_bool_compare_and_swap on the last byte. If I understand correctly, no memory store/load would be reordered across it, not just the one for the particular variable passed to the builtin. Is that correct? If so, I would expect this to work, as all the writes of the previous bytes should be made before the barrier moves on.
#include <stdint.h>

#define STR_LEN 100
static uint8_t my_str[STR_LEN] = {0};

static void init(uint8_t input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN - 1; ++i) {
        my_str[i] = input_buf[i];
    }
    /* CAS on the last byte; per the __sync docs it acts as a full barrier */
    __sync_bool_compare_and_swap(&my_str[STR_LEN - 1], 0, input_buf[STR_LEN - 1]);
}

const char *thread_func(char input_buf[STR_LEN])
{
    if (my_str[STR_LEN - 1] == 0) {
        init((uint8_t *)input_buf);
    }
    return (const char *)my_str;
}
2. __sync_bool_compare_and_swap on each write. I would expect this to work as well, but to be slower than the first one.
static void init(char input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN; ++i) {
        __sync_bool_compare_and_swap(my_str + i, 0, input_buf[i]);
    }
}
3. __sync_synchronize before each byte copy. I would expect this to work as well, but is this slower or faster than (2)? __sync_bool_compare_and_swap is supposed to be a full barrier as well, so which would be preferable?
static void init(char input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN; ++i) {
        __sync_synchronize();
        my_str[i] = input_buf[i];
    }
}
4. __sync_synchronize by condition. As I understand it, __sync_synchronize is both a HW and SW memory barrier. As such, since the compiler can't tell the value of use_sync, it shouldn't reorder. And the HW reordering will be done only if use_sync is true. Is that correct?
static void init(char input_buf[STR_LEN], bool use_sync)
{
    for (int i = 0; i < STR_LEN; ++i) {
        if (use_sync) {
            __sync_synchronize();
        }
        my_str[i] = input_buf[i];
    }
}
GNU C legacy __sync builtins are not recommended for new code, as the manual says.
Use the __atomic builtins, which can take a memory-order parameter like C11 stdatomic does. But they're still builtins and still work on plain types not declared _Atomic, so using them is like C++20 std::atomic_ref. (In C++20 you could use std::atomic_ref<unsigned char>(my_str[STR_LEN - 1]); C doesn't provide an equivalent, so you'd have to use the compiler builtins to hand-roll it.)
Just do the last store separately, as a release store in the writer: not an RMW, and definitely not a full memory barrier (__sync_synchronize()) between every byte! That's far slower than necessary and defeats any optimization into a memcpy. Also, the store of the final byte needs to be at least RELEASE, not a plain store, so readers can synchronize with it. See also Who's afraid of a big bad optimizing compiler? regarding how compilers can break your code if you try to hand-roll lockless code with just barriers instead of atomic loads and stores. (It's written for Linux kernel code, where a macro would use *(volatile char*) to hand-roll something close to __atomic_store_n with __ATOMIC_RELAXED.)
So something like
__atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1], __ATOMIC_RELEASE);
The if (my_str[STR_LEN - 1] == 0) load in thread_func is of course data-race UB when there are concurrent writers.
For safety it needs to be an acquire load, like __atomic_load_n(&my_str[STR_LEN - 1], __ATOMIC_ACQUIRE) == 0, since you need a thread that loads a non-0 value to also see all other stores by another thread that ran init(). (Which did a release-store to that location, creating acquire/release synchronization and guaranteeing a happens-before relationship between these threads.)
See https://preshing.com/20120913/acquire-and-release-semantics/
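Putting those pieces together, a sketch of the suggested writer/reader pair (assuming the GCC/Clang __atomic builtins; the const qualifiers and the init_release name are mine):
#include <stdint.h>
#include <string.h>

#define STR_LEN 100
static uint8_t my_str[STR_LEN] = {0};

static void init_release(const uint8_t input_buf[STR_LEN])
{
    memcpy(my_str, input_buf, STR_LEN - 1);   /* plain copies of all but the last byte */
    __atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1],
                     __ATOMIC_RELEASE);       /* publish with a release store */
}

const uint8_t *thread_func(const uint8_t input_buf[STR_LEN])
{
    /* the acquire load pairs with the release store, so a reader that sees a
       non-zero last byte also sees all of the earlier plain stores */
    if (__atomic_load_n(&my_str[STR_LEN - 1], __ATOMIC_ACQUIRE) == 0)
        init_release(input_buf);
    return my_str;
}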
Writing the same value non-atomically is also UB in ISO C and ISO C++. See Race Condition with writing same value in C++? and others.
But in practice it should be fine, except with clang -fsanitize=thread. In theory a DeathStation9000 could implement non-atomic stores by storing value+1 and then subtracting 1, so there'd temporarily be a different value in memory. But AFAIK there aren't real compilers that do that. I'd have a look at the generated asm on any new compiler / ISA combination you're trying, just to make sure.
It would be hard to test; the init stuff can only race once per program invocation. But there's no fully safe way to do it that doesn't totally suck for performance, AFAIK. Perhaps doing the init with a cast to _Atomic unsigned char* or typedef _Atomic unsigned long __attribute__((may_alias)) aliasing_atomic_ulong; as a building block for a manual copy loop?
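A byte-at-a-time sketch of that last idea, written with the __atomic builtins rather than an _Atomic cast (the function name is made up; a word-at-a-time version would need the may_alias typedef mentioned above):
/* my_str and STR_LEN as declared earlier */
static void init_atomic_bytes(const uint8_t input_buf[STR_LEN])
{
    /* relaxed atomic stores keep racing initializers from being a formal data
       race, and the final release store still publishes the whole buffer */
    for (int i = 0; i < STR_LEN - 1; ++i)
        __atomic_store_n(&my_str[i], input_buf[i], __ATOMIC_RELAXED);
    __atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1],
                     __ATOMIC_RELEASE);
}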
Bonus question: if(use_sync) __sync_synchronize() inside the loop.
Since the compiler can't tell the value of use_sync it shouldn't reorder.
Optimization is possible to asm that works something like if(use_sync) { slow barrier loop } else { no-barrier loop }. This is called "loop unswitching": making two loops and branching once to decide which to run, instead of branching every iteration. GCC has been able to do that optimization (in some cases) since 3.4. So it defeats your attempt to trick the compiler into doing more ordering than the source actually requires.
And the HW reordering will be done only if use_sync is true.
Yes, that part is correct.
Also, inlining and constant-propagation of use_sync could easily defeat this, unless use_sync was a volatile global or something. At that point you might as well just make a separate _Atomic unsigned char array_init_done flag / guard variable.
And you can use it for mutual exclusion by having threads try to set it to 1 with int old = guard.exchange(1), with the winner of the race being the one to run init, while the others spin-wait (or use C++20 .wait(1)) for the guard variable to become 2 or -1 or something, which the winner of the race will set after finishing init.
Have a look at the asm GCC makes for non-constant-initialized static local vars; they check a guard variable with an acquire load, only doing locking to have one thread do the run_once init stuff and the others wait for that result. IIRC there's a Q&A about doing that yourself with atomics.
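A sketch of that guard-variable pattern with C11 <stdatomic.h> (the names and the 0/1/2 state encoding are illustrative, and a compare-exchange is used instead of a plain exchange so a late caller can't overwrite the "done" state):
#include <stdatomic.h>

static _Atomic unsigned char init_state;   /* 0 = not started, 1 = running, 2 = done */

void run_once(void (*do_init)(void))
{
    unsigned char expected = 0;
    if (atomic_load_explicit(&init_state, memory_order_acquire) == 2)
        return;                             /* fast path: already initialized */
    if (atomic_compare_exchange_strong_explicit(&init_state, &expected, 1,
            memory_order_acquire, memory_order_acquire)) {
        do_init();                          /* this thread won the race */
        atomic_store_explicit(&init_state, 2, memory_order_release);
    } else {
        while (atomic_load_explicit(&init_state, memory_order_acquire) != 2)
            ;                               /* losers spin until init is published */
    }
}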

Read optimizations on shared memory

Suppose you have a function that makes several read accesses to a shared variable whose access is atomic, all running in the same process. Imagine them as threads of a process, or as software running on a bare-metal platform with no MMU.
As a requirement you must ensure that the value read is consistent for the whole length of the function, so the code must not re-read the memory location; it has to keep the value in a local variable or a register. How can we ensure that this behaviour is respected?
As an example (shared is the only shared variable):
extern uint32_t a, b, shared;

void useless_function()
{
    __ASM volatile ("":::"memory");
    uint32_t value = shared;
    a = value * 2;
    b = value << 3;
}
Can value be optimized out in favour of direct reads of the shared variable in some contexts? If yes, how can I make sure this cannot happen?
As a requirement you must ensure that the value read is consistent for the whole length of the function, so the code must not re-read the memory location; it has to keep the value in a local variable or a register. How can we ensure that this behaviour is respected?
You can do that with READ_ONCE macro from Linux kernel:
/*
* Prevent the compiler from merging or refetching reads or writes. The
* compiler is also forbidden from reordering successive instances of
* READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
* particular ordering. One way to make the compiler aware of ordering is to
* put the two invocations of READ_ONCE or WRITE_ONCE in different C
* statements.
*
* These two macros will also work on aggregate data types like structs or
* unions. If the size of the accessed data type exceeds the word size of
* the machine (e.g., 32 bits or 64 bits) READ_ONCE() and WRITE_ONCE() will
* fall back to memcpy(). There's at least two memcpy()s: one for the
* __builtin_memcpy() and then one for the macro doing the copy of variable
* - '__u' allocated on the stack.
*
* Their two major use cases are: (1) Mediating communication between
* process-level code and irq/NMI handlers, all running on the same CPU,
* and (2) Ensuring that the compiler does not fold, spindle, or otherwise
* mutilate accesses that either do not require ordering or that interact
* with an explicit memory barrier or atomic instruction that provides the
* required ordering.
*/
E.g.:
uint32_t value = READ_ONCE(shared);
The READ_ONCE macro essentially casts the object you read to volatile, because the compiler cannot emit extra reads or writes for volatile objects.
The above is equivalent to:
uint32_t value = *(uint32_t volatile*)&shared;
Alternatively:
uint32_t value;
memcpy(&value, &shared, sizeof value);
memcpy breaks the dependency between shared and value, so that the compiler cannot re-load shared instead of loading value.
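For reference, a minimal sketch of such macros (simplified; the real kernel versions also handle aggregates, and __typeof__ is a GCC extension):
#include <stdint.h>

#define READ_ONCE(x)        (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val)  (*(volatile __typeof__(x) *)&(x) = (val))

extern uint32_t a, b, shared;

void useless_function(void)
{
    uint32_t value = READ_ONCE(shared);   /* exactly one load of shared */
    a = value * 2;
    b = value << 3;
}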
In the example as originally given, the variable value was not used in the function at all, so it would definitely be optimised out.
Also, as mentioned in comments, in a multitasking system, the value of shared can be changed within the function.
What I need is that shared is read only once and its local value kept for the whole length of the function, not re-evaluated.
I would suggest something like this below.
extern uint32_t a, b, shared;

void useless_function()
{
    __ASM volatile ("":::"memory");
    uint32_t value = shared;
    a = value * 2;
    b = value << 3;
}
Here shared is read only once in the function. It will be read again on the next call of the function.

volatile and non-volatile bitfields

I'm writing code for a Cortex-M0 CPU with gcc. I have the following structure:
struct {
    volatile unsigned flag1: 1;
    unsigned flag2: 1;
    unsigned foo; // something else accessed in main loop
} flags;
flag1 is read and written from both the GPIO interrupt handler and the main loop. flag2 is only read and written in the main loop.
The ISR looks like this:
void handleIRQ(void) {
    if (!flags.flag1) {
        flags.flag1 = 1;
        // enable some hw timer
    }
}
The main loop looks like this:
for (;;) {
    // disable IRQ
    if (flags.flag1) {
        // handle IRQ
        flags.flag1 = 0;
        // access (rw) flag2 many times
    }
    // wait for interrupt, enable IRQ
}
When accessing flag2 in the main loop, will the compiler optimize access to it so that it isn't fetched from or stored to memory every time it is read or written in the code?
It's not clear to me, because to set flag1 in the ISR the compiler will need to load the whole byte, set one bit, and store it back.
It is my reading of the C11 standard that it is not proper to use a bitfield for this - even if both of them were declared as volatile. The following excerpt is from 3.14 Memory location:
Memory location
Either an object of scalar type, or a maximal sequence of adjacent bit-fields all having nonzero width
NOTE 1 Two threads of execution can update and access separate memory locations without interfering with each other.
NOTE 2 It is not safe to concurrently update two non-atomic bit-fields in the same structure if all
members declared between them are also (non-zero-length) bit-fields, no matter what the sizes of those
intervening bit-fields happen to be.
There is no exception given for volatile. Thus it wouldn't be safe to use the above bit-field if the two threads of execution (i.e. the main loop and the ISR) each update one of the flags: the ISR updating one flag and the main loop updating the other. The solution given in the standard is to place an unnamed bit-field of width 0 between them to force them into different memory locations. But then both flags would consume at least one byte of memory anyway, so it is simpler to just use a non-bit-field unsigned char or bool for them:
struct {
    volatile bool flag1;
    bool flag2;
    unsigned foo; // something else accessed in main loop
} flags;
Now they will be placed in different memory locations and they can be updated without them interfering with each other.
However, volatile on flag1 is still strictly necessary: otherwise, updates to flag1 would not count as observable side effects in the main thread, and the compiler could deduce that it can keep that field in a register only, or that nothing needs to be updated at all.
However, one needs to note that under C11, even the guarantees of volatile might not be enough: 5.1.2.3p5:
When the processing of the abstract machine is interrupted by receipt of a signal, the values of objects that are neither lock-free atomic objects nor of type volatile sig_atomic_t are unspecified, as is the state of the floating-point environment. The value of any object modified by the handler that is neither a lock-free atomic object nor of type volatile sig_atomic_t becomes indeterminate when the handler exits, as does the state of the floating-point environment if it is modified by the handler and not restored to its original state.
Thus, if full compatibility is required, flag1 ought to be for example of type volatile _Atomic bool; it might even be possible to use an _Atomic bitfield. Both of these require a C11 compiler, however.
Then again, you can check your compiler's manual to see whether it guarantees that an access to such volatile objects is also guaranteed to be atomic.
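A sketch of that C11 variant, using only plain atomic loads and stores (no read-modify-write operations, which may not be lock-free on a Cortex-M0); whether a given toolchain inlines these is worth verifying:
#include <stdatomic.h>
#include <stdbool.h>

static volatile atomic_bool flag1;   /* shared between the ISR and the main loop */
static bool flag2;                   /* main loop only */

void handleIRQ(void)
{
    if (!atomic_load_explicit(&flag1, memory_order_relaxed)) {
        atomic_store_explicit(&flag1, true, memory_order_relaxed);
        /* enable some hw timer */
    }
}

/* main loop, mirroring the original fragment */
for (;;) {
    /* disable IRQ */
    if (atomic_load_explicit(&flag1, memory_order_relaxed)) {
        atomic_store_explicit(&flag1, false, memory_order_relaxed);
        /* access (rw) flag2 many times; it is never touched by the ISR */
    }
    /* wait for interrupt, enable IRQ */
}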
The volatile qualifier on just one bit isn't all that meaningful; it is possibly even harmful. What the compiler might do in practice is allocate two chunks of memory, possibly each 32 bits wide, because the volatile qualifier blocks it from combining the two bits inside the same allocated area when there is no bit-level access instruction available.
When accessing flag2 in the main loop, will the compiler optimize access to it so that it isn't fetched from or stored to memory every time it is read or written in the code?
That's hard to tell; it depends on how many data registers are available. Disassemble the code and see.
Overall, bit-fields are not recommended since they are so poorly defined by the standard. And in this case, the individual volatile bit might lead to extra memory getting allocated.
Instead, you should do this:
volatile bool flag1;
bool flag2;
Assuming those flags aren't part of a hardware register, in which case the code was incorrect from the start and they should both be volatile.

Can I read an atomic variable without atomic_load?

I have a single-writer, multiple-reader situation. There's a counter that one thread is writing to, and any thread may read this counter. Since the single writing thread doesn't have to worry about contending with other threads for data access, is the following code safe?
#include <stdatomic.h>
#include <stdint.h>
_Atomic uint32_t counter;
// Only 1 thread calls this function. No other thread is allowed to.
uint32_t increment_counter() {
    atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    return counter; // This is the line in question.
}
// Any thread may call this function.
uint32_t load_counter() {
    return atomic_load_explicit(&counter, memory_order_relaxed);
}
The writer thread just reads the counter directly without calling any atomic_load* function. This should be safe (since it's safe for multiple threads to read a value), but I don't know if declaring a variable _Atomic restricts you from using that variable directly, or if you're required to always read it using one of the atomic_load* functions.
Yes. All operations that you do on _Atomic objects are guaranteed to behave as if you had issued the corresponding call with sequential consistency, and in your particular case a plain evaluation is equivalent to an atomic_load.
But the algorithm as used there is wrong, because between the atomic_fetch_add and the separate evaluation the value may already have been changed by another thread. Correct would be:
uint32_t ret = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
return ret + 1;
This looks a bit suboptimal because the addition is done twice, but a good optimizer will sort this out.
If you rewrite the function, this question goes away:
uint32_t increment_counter() {
    return 1 + atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
}

Clear variable on the stack

Code Snippet:
int secret_foo(void)
{
    int key = get_secret();
    /* use the key to do highly privileged stuff */
    ....
    /* Need to clear the value of key on the stack before exit */
    key = 0;
    /* Any half decent compiler would probably optimize out the statement above */
    /* How can I convince it not to do that? */
    return result;
}
I need to clear the value of a variable key from the stack before returning (as shown in the code).
In case you are curious, this was an actual customer requirement (embedded domain).
You can use volatile (emphasis mine):
Every access (both read and write) made through an lvalue expression of volatile-qualified type is considered an observable side effect for the purpose of optimization and is evaluated strictly according to the rules of the abstract machine (that is, all writes are completed at some time before the next sequence point). This means that within a single thread of execution, a volatile access cannot be optimized out or reordered relative to another visible side effect that is separated by a sequence point from the volatile access.
volatile int key = get_secret();
volatile might be overkill sometimes as it would also affect all the other uses of a variable.
Use memset_s (since C11): http://en.cppreference.com/w/c/string/byte/memset
memset may be optimized away (under the as-if rules) if the object modified by this function is not accessed again for the rest of its lifetime. For that reason, this function cannot be used to scrub memory (e.g. to fill an array that stored a password with zeroes). This optimization is prohibited for memset_s: it is guaranteed to perform the memory write.
int secret_foo(void)
{
    int key = get_secret();
    /* use the key to do highly privileged stuff */
    ....
    memset_s(&key, sizeof(int), 0, sizeof(int));
    return result;
}
You can find other solutions for various platforms/C standards here: https://www.securecoding.cert.org/confluence/display/c/MSC06-C.+Beware+of+compiler+optimizations
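Where memset_s isn't available, a commonly used fallback is to write the zeros through a volatile-qualified pointer so the stores can't be discarded as dead. A sketch, with a made-up function name:
#include <stddef.h>

static void secure_memzero(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (n--)
        *vp++ = 0;   /* volatile accesses must be performed, so these stores stay */
}
Note this still doesn't address copies of the secret left in registers or spilled stack slots, which is the point of the article quoted below.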
Addendum: have a look at this article Zeroing buffer is insufficient which points out other problems (besides zeroing the actual buffer):
With a bit of care and a cooperative compiler, we can zero a buffer — but that's not what we need. What we need to do is zero every location where sensitive data might be stored. Remember, the whole reason we had sensitive information in memory in the first place was so that we could use it; and that usage almost certainly resulted in sensitive data being copied onto the stack and into registers.
Your key value might have been copied into another location (like a register or temporary stack/memory location) by the compiler and you don't have any control to clear that location.
If you go with dynamic allocation you can control wiping that memory and not be bound by what the system does with the stack.
int secret_foo(void)
{
    int *key = malloc(sizeof(int));
    *key = get_secret();
    /* use the key to do highly privileged stuff */
    // other magical things...
    memset(key, 0, sizeof(int)); /* wipe the secret before releasing the memory */
    free(key);
    return result;
}
One solution is to disable compiler optimizations for the section of the code that you don't want optimized:
int secret_foo(void) {
    int key = get_secret();
#pragma GCC push_options
#pragma GCC optimize ("O0")
    key = 0;
#pragma GCC pop_options
    return result;
}
