builtin gcc spinlock - c

How can I be sure that data written by multiple CPU cores while a mutex is held is synchronized across the L1 caches of all cores? I am not talking about the variable that represents the lock; I am talking about the memory locations that are accessed while the lock is held.
This is for Linux, x86_64, and my code is:
#include <sys/types.h>
#include "dlog.h"

uint *dlog_line;
volatile int dlog_lock;

char *dlog_get_new_line(void) {
    uint val;

    /* spin until the lock is acquired */
    while (!__sync_bool_compare_and_swap(&dlog_lock, 0, 1))
        ;
    /* critical section: update the shared line index */
    val = *dlog_line;
    if (val == DT_DLOG_MAX_LINES)
        val = 0;
    *dlog_line = val;
    /* release the lock */
    dlog_lock = 0;
}
Here, inside the dlog_get_new_line() function, I use a gcc builtin function, so there shouldn't be any problem with acquiring the lock. But how can I ensure that, when the lock is released, the value pointed to by dlog_line propagates to the L1 caches of all the other CPU cores in the system?
I do not use pthreads; each process runs on a different CPU core.

What you're interested in is called cache coherence. This is done automatically by the hardware.
So in short, you don't have to do anything if you are correctly using __sync_bool_compare_and_swap() (or any other locking intrinsic).
As an oversimplified explanation, the thread will not return from the call to __sync_bool_compare_and_swap() until all the other processors are able to see the new value or are aware that their local copy is out-of-date.
If you're interested in what happens underneath (in the hardware), there are various cache coherence algorithms that are used to ensure that a core doesn't read an outdated copy of data.
Here's a partial list of commonly taught protocols:
MSI
MESI
Firefly
Modern hardware typically uses much more complicated protocols than these.

GCC has two other builtins that were invented for exactly the purpose you describe: __sync_lock_test_and_set and __sync_lock_release. They have so-called acquire/release semantics, which guarantee that stores to other variables are visible as you need them while you hold the spinlock. These requirements are a bit weaker than what __sync_bool_compare_and_swap provides, so it is better to use the tools that are tailored for the job.
They also adapt well to the capabilities of different hardware. E.g. on my x86_64 this puts an mfence instruction before the final atomic store into dlog_lock, but on other hardware this will be adapted to the available instruction set.
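For reference, a minimal sketch of the spinlock from the question rewritten with these builtins (keeping the dlog_lock and dlog_line names from the question) could look like this:

/* acquire: __sync_lock_test_and_set writes 1 and returns the previous
   value, with acquire semantics */
while (__sync_lock_test_and_set(&dlog_lock, 1))
    ;   /* spin while the previous value was 1, i.e. someone else held the lock */

/* critical section: updates to *dlog_line made here are guaranteed to be
   visible to whichever core acquires the lock next */

/* release: writes 0 to dlog_lock with release semantics */
__sync_lock_release(&dlog_lock);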

Related

MultiCore Shared Memory Atomic Operations

Would it be necessary to use a mutex for atomic operations on shared memory, in a multicore environment, where one CPU is only ever reading and the other CPU is only ever writing? I am guessing that this may depend on architecture, so if an example is needed then ARM (Cortex) and/or ESP32?
I already know that a mutex is not needed for atomic operations in a single-core environment where one thread is only ever reading and the other thread only ever writing (https://www.freertos.org/FreeRTOS_Support_Forum_Archive/May_2019/freertos_Shared_variable_between_a_write_thred_and_a_read_thread_a0408decbaj.html).
One solution that has been around for decades (I already used this 30 years ago) is the concept of mailboxes.
The simplest mailbox is just a structure or buffer with a flag. This flag should be of the minimum size that can be accessed in an atomic operation from both processors sharing the memory. It should also be located at a memory address that both processors see as "aligned" to ensure single-cycle read/write accesses, e.g. 32-bit word boundaries in the case of 32-bit ARM processors. This might be tricky to implement on non-RISC-like architectures.
The flag usage is very simple. The processor that writes the data waits for the flag to be signalled as "buffer empty", maybe a simple null value, then writes the data to the mailbox's buffer and signals "buffer not empty" by setting a magic number in the flag, maybe a non-null value.
The processor receiving the data just has to wait for the flag to be signalled as "buffer not empty" before reading the data, and then set the flag back to "buffer empty".
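As a concrete sketch of that flag protocol in C (the mailbox_t layout, the MAILBOX_FULL magic value and the 64-byte buffer are illustrative choices, and the explicit barriers are only needed on cores with weakly ordered memory):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    volatile uint32_t flag;    /* 0 = buffer empty, MAILBOX_FULL = buffer full;
                                  32 bits wide and naturally aligned so every
                                  access is a single atomic bus cycle */
    uint8_t data[64];
} mailbox_t;

#define MAILBOX_FULL 0xCAFEBABEu   /* arbitrary non-null "buffer not empty" magic */

/* writer side: wait for "empty", fill the buffer, then mark it "full" */
void mailbox_send(mailbox_t *mb, const uint8_t *src, size_t len) {
    while (mb->flag != 0)
        ;                          /* poll until the reader has drained the mailbox */
    memcpy(mb->data, src, len);
    __sync_synchronize();          /* make sure the data lands before the flag */
    mb->flag = MAILBOX_FULL;       /* single aligned 32-bit store */
}

/* reader side: wait for "full", copy the data out, then mark it "empty" */
void mailbox_recv(mailbox_t *mb, uint8_t *dst, size_t len) {
    while (mb->flag != MAILBOX_FULL)
        ;                          /* poll until the writer has filled the mailbox */
    __sync_synchronize();          /* make sure the data is read after the flag */
    memcpy(dst, mb->data, len);
    mb->flag = 0;                  /* single aligned 32-bit store */
}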
Whether or not you have primitives supporting this mechanism without relying on constant polling of the flag is tightly dependent on your hardware and operating system.
I've used this mechanism in heterogeneous architectures (processor + co-processor of different architectures/capabilities running different applications), but homogeneous multicore processors are well supported by many RTOSes today, including freeRTOS, and other mechanisms such as queues and semaphores/mutexes are probably more appropriate for the synchronization part. Some current SoCs support hardware semaphores and memory-access interrupts that can improve performance greatly.
EDIT:
There is one freeRTOS feature that can assist you here: message buffers. There is an example using ST's STM32H745 dual-core SoC [here] that comes with a companion article [here] written by freeRTOS's Richard Barry.

Is it true that "volatile" in a userspace program tends to indicate a bug?

While googling about "volatile" and its user-space usage, I found mails between Theodore Tso and Linus Torvalds. According to these great masters, use of "volatile" in userspace is probably a bug? Check the discussion here.
They give some explanations, but I really couldn't understand them. Could anyone explain in simple language why they said so? Are we not supposed to use volatile in user space?
volatile tells the compiler that every read and write has an observable side effect; thus, the compiler can't make any assumptions about two reads or two writes in a row having the same effect.
For instance, normally, the following code:
int a = *x;
int b = *x;
if (a == b)
    printf("Hi!\n");
Could be optimized into:
printf("Hi!\n");
What volatile does is tell the compiler that those values might be coming from somewhere outside of the program's control, so it has to actually read those values and perform the comparison.
A lot of people have made the mistake of thinking that they could use volatile to build lock-free data structures, which would allow multiple threads to share values, and they could observe the effects of those values in other threads.
However, volatile says nothing about how different threads interact, and could be applied to values that could be cached with different values on different cores, or could be applied to values that can't be atomically written in a single operation, and so if you try to write multi-threaded or multi-core code using volatile, you can run into a lot of problems.
Instead, you need to either use locks or some other standard concurrency mechanism to communicate between threads, or use memory barriers, or use C11/C++11 atomic types and atomic operations. Locks ensure that an entire region of code has exclusive access to a variable, which can work if you have a value that is too large, too small, or not aligned to be atomically written in a single operation, while memory barriers and the atomic types and operations provide guarantees about how they work with the CPU to ensure that caches are synchronized or reads and writes happen in particular orders.
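As a minimal sketch of the C11 approach (ready and payload are illustrative names, not anything from the linked discussion):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool ready = false;   /* an atomic flag, not merely a volatile one */
int payload;                 /* plain data published under the flag */

/* producer thread */
void produce(void) {
    payload = 42;
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* consumer thread */
void consume(void) {
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;   /* spin until the producer publishes */
    /* the release/acquire pair guarantees that payload reads 42 here */
}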
Basically, volatile winds up mostly being useful when you're interfacing with a single hardware register, which can vary outside the program's control but may not require any special atomic operations to access. Or it can be used in signal handlers: because a thread can be interrupted, the handler run, and control then returned within the same thread, you need a volatile value if you want to communicate a flag to the interrupted code.
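The signal-handler case usually looks something like this sketch (got_sigint and on_sigint are illustrative names):

#include <signal.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int sig) {
    (void)sig;
    got_sigint = 1;   /* setting a volatile sig_atomic_t flag is one of the few
                         things a handler may safely do */
}

int main(void) {
    signal(SIGINT, on_sigint);
    while (!got_sigint) {
        /* ... normal work of the program ... */
    }
    return 0;
}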
But if you're doing any kind of synchronization between threads, you should be using locks or some other concurrency primitives provided by a standard library, or really know what you're doing with regard to memory ordering and use memory barriers or atomic operations.

C volatile, and issues with hardware caching

I've read similar answers on this site, and elsewhere, but am still confused in a few circumstances.
I'm aware of what the standard actually guarantees us, I understand the intended use of the keyword, and I'm well aware of the difference between compiler caching and L1/L2/etc. caching; it's more for curiosity's sake that I want to understand the other cases.
Say I have a variable declared volatile in C. Four scenarios:
1. Signal handlers, single-threaded (as intended): This is the problem the keyword was meant to solve. My process gets a signal callback from the OS, and I modify some volatile variable outside the normal execution of my process. Since it was declared volatile, the normal process won't keep this value in a CPU register and will always do a load from memory. When the signal handler writes to the volatile variable, since the handler shares the same address space as the normal process, even if the variable was previously cached in hardware (i.e. L1, L2) we are guaranteed that the main process will load the correct, updated value. Perfect, everyone is happy.
2. DMA transfers, single-threaded: Say the volatile variable is mapped to a region of memory into which a DMA write is taking place. As before, the compiler won't keep the volatile variable in a CPU register and will always do a load from memory; however, if that variable exists in the hardware cache, then the load request will never reach main memory. If the DMA controller updates main memory behind our backs, we'll never get the up-to-date value. In a preemptive OS, we are saved by the fact that eventually we'll probably be context-switched out, and the next time our process resumes, the cache will be cold and we'll actually have to reload from main memory, so we'll get the correct functionality... eventually (our own process could potentially evict that cache line too, but again, we might waste valuable cycles before that happens). Is there standardized HW support or OS support that notifies the hardware caches when main memory is updated via the DMA controller? Or do we have to explicitly flush the cache to guarantee we aren't reading a stale value? (Is this even possible in the architectures listed?)
3. Memory-mapped registers, single-threaded: Same as #2, except the volatile variable is mapped to a memory-mapped register (or an explicit IO port). I would imagine this is a more difficult problem than #2, since at least the DMA controller will signal the CPU when it's done transferring, which gives the OS or HW a chance to do something.
4. Multithreaded: If I have a volatile variable, is there any guarantee of cache coherency between multiple threads running on separate physical cores? Like sure, again, the compiler is still issuing load instructions from memory, but if the value is cached in one core's cache, is there any guarantee the same value must exist in the other cores' caches? (I would imagine it's not an issue at all for hyperthreading threads on different logical cores on the same physical core, since they share physical cache memory.) My overwhelming intuition says no, but I thought I'd list the case here anyway.
If possible, differentiate between x64 and ARMv6/7/8 architectures, and kernel vs user land solutions.
For #2 and #3, no, there's no standardized way this would work.
Normally when doing DMA transfers one would flush the cache in a platform-dependent manner. There are normally quite straightforward instructions for doing that (since nowadays the caches are integrated in the CPU).
When accessing memory-mapped registers on the other hand, often the behavior is dependent on the order of writes. For example, suppose you have a UART port and write characters to it — you'll need to make sure that there is an actual write to the port each time you write to it from C.
While it might work with flushing the cache between each write, it's not what one normally does. The normal way (for ARM at least) is to set up the MMU so that writes to certain regions of address space happen uncached and in correct sequence.
This approach can also be used for memory used for DMA transfers; one could for example set up dedicated regions for use as DMA buffers and set up the MMU so that reads and writes to that region happen uncached.
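On Linux, for example, the kernel-side way to obtain such a coherence-managed DMA buffer is dma_alloc_coherent(); a rough sketch, where dev, BUF_SIZE and the surrounding driver context are assumptions:

#include <linux/dma-mapping.h>

#define BUF_SIZE 4096              /* illustrative buffer size */

static void *cpu_buf;              /* CPU-visible address of the DMA buffer */
static dma_addr_t dma_handle;      /* bus address to program into the device */

static int my_driver_setup_dma(struct device *dev)
{
    /* returns memory that the CPU and the device see coherently, either
       mapped uncached or kept coherent by hardware */
    cpu_buf = dma_alloc_coherent(dev, BUF_SIZE, &dma_handle, GFP_KERNEL);
    if (!cpu_buf)
        return -ENOMEM;
    return 0;
}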
On the other hand, the language guarantees that all memory (well, what you get from declaring variables or allocating memory using new) will behave in certain ways. There should be no difference whether multiple threads or signals are involved. Note that the C90 and C99 standards don't mention threads (C11 does), but they are supposed to work this way. The implementation has to make sure that the CPUs and caches are used in a way that is consistent with this (as a consequence, the OS might not be able to schedule different threads on different cores if this can't be accomplished). Consequently you should not need to flush caches in order to share data between threads, but you do need to synchronize threads and of course use volatile-qualified data. The same is true for signal handlers even if the implementation happens to schedule them on a different core.

Usage of Volatile in case of Memory mapped Devices?

The following link says that "Access to device registers is always uncached":
http://techpubs.sgi.com/library/dynaweb_docs/hdwr/SGI_Developer/books/DevDrvrO2_PG/sgi_html/ch01.html
My question is: do we ever need volatile when accessing device registers that are memory mapped?
The confusion here comes from two mechanisms which have similarities in their goals, but quite distinct mechanisms and levels of implementation.
The link refers to memory-mapped I/O regions being configured as ineligible for hardware caching (the fast intermediate memory used to speed up accesses relative to the slower main memory banks). This is traditionally nearly transparent to software (exceptions being things like modifying code on a machine with distinct instruction and data caches).
In contrast, volatile is used to prohibit an optimizing compiler from performing "software" caching of values by strategically holding them in registers, delaying calculating them until needed, or perhaps never calculating them if unneeded. The basic effect is to inform the compiler that the value may be produced or consumed by a mechanism invisible to its analysis, be that hardware beyond the present processor core, or a distinct thread or context of execution.
This question is a more processor-specific version of Why is volatile needed in C?
This is one of the two situations where volatile is mandatory (and it would be nice if compilers could know that).
Any memory location which can change either without your code initiating it (i.e. a memory-mapped device register) or without your thread initiating it (i.e. it is changed by another thread or by an interrupt handler) absolutely must be declared volatile to prevent the compiler from optimizing away memory-fetch operations.
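The usual idiom looks like this sketch (the register addresses and bit value are made up for illustration):

#include <stdint.h>

/* hypothetical UART registers at a made-up base address */
#define UART_BASE   ((uintptr_t)0x40001000u)
#define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x00))
#define UART_DATA   (*(volatile uint32_t *)(UART_BASE + 0x04))
#define TX_READY    (1u << 0)

static void uart_putc(char c)
{
    /* without volatile the compiler could hoist this load out of the loop
       and spin forever on a stale value */
    while (!(UART_STATUS & TX_READY))
        ;
    UART_DATA = (uint32_t)c;   /* and each write must actually reach the device */
}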

Should I mutex lock a single variable?

If a single 32-bit variable is shared between multiple threads, should I put a mutex lock around the variable? For example, suppose 1 thread writes to a 32-bit counter and a 2nd thread reads it. Is there any chance the 2nd thread could read a corrupted value?
I'm working on a 32-bit ARM embedded system. The compiler always seems to align 32-bit variables so they can be read or written with a single instruction. If the 32-bit variable was not aligned, then the read or write would be broken down into multiple instructions and the 2nd thread could read a corrupted value.
Does the answer to this question change if I move to a multiple-core system in the future and the variable is shared between cores? (assuming a shared cache between cores)
Thanks!
A mutex protects you from more than just tearing - for example some ARM implementations use out-of-order execution, and a mutex will include memory (and compiler) barriers that may be necessary for your algorithm's correctness.
It is safer to include the mutex, then figure out a way to optimise it later if it shows as a performance problem.
Note also that if your compiler is GCC-based, you may have access to the GCC atomic builtins.
If all the writing is done from one thread (i.e. other threads are only reading), then no, you don't need a mutex. If more than one thread may be writing, then you do.
You don't need a mutex.
On 32-bit ARM, a single aligned write or read is an atomic operation (regardless of the number of cores).
Of course, you should declare that variable as volatile.
On a 32-bit system, reads and writes of 32-bit variables are atomic. However, it depends what else you are doing with the variable. E.g. if you manipulate it somehow (e.g. add a value), then this requires a read, a manipulation, and a write. If the CPU and compiler do not support an atomic operation for this, then you will need to use a mutex to protect this multi-operation sequence.
There are other, lock-free techniques which can reduce the need for mutexes.
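For the read-modify-write case, a sketch using the GCC atomic builtins mentioned above (counter is just an illustrative name):

#include <stdint.h>

static volatile uint32_t counter;      /* shared between threads/cores */

/* writer: an atomic read-modify-write, so no mutex is needed */
static void bump_counter(void)
{
    __sync_fetch_and_add(&counter, 1); /* implies a full memory barrier */
}

/* reader: a single aligned 32-bit load, which is atomic on 32-bit ARM */
static uint32_t read_counter(void)
{
    return counter;
}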
