Testing lockless buffer copy in C using memory barriers

I have a few questions regarding memory barriers.
Say I have the following C code (it will be run both from C++ and C code, so atomics are not possible) that writes an array into another one. Multiple threads may call thread_func(), and I want to make sure that my_str is returned only after it was fully initialized. In this case, it is a given that the last byte of the buffer can't be 0. As such, checking that the last byte is not 0 should suffice.
Due to reordering by compiler/CPU, this can be a problem as the last byte might get written before previous bytes, causing my_str to be returned with a partially copied buffer. So to get around this, I want to use a memory barrier. A mutex will work of course, but would be too heavy for my uses.
Keep in mind that all threads will call thread_func() with the same input, so even if multiple threads call init() a couple of times, that's OK as long as, in the end, thread_func() returns a valid my_str and all subsequent calls after initialization return my_str directly.
Please tell me if all of the following code approaches work, or if there could be issues in some scenarios. Aside from getting a solution to the problem, I'd like to get some more information regarding memory barriers.
__sync_bool_compare_and_swap on the last byte. If I understand correctly, no memory store/load would be reordered across it, not just the ones for the particular variable passed to the builtin. Is that correct? If so, I would expect this to work, as all writes of the previous bytes should be completed before the barrier lets execution move on.
#define STR_LEN 100
static uint8_t my_str[STR_LEN] = {0};

static void init(const uint8_t input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN - 1; ++i) {
        my_str[i] = input_buf[i];
    }
    __sync_bool_compare_and_swap(my_str + STR_LEN - 1, 0, input_buf[STR_LEN - 1]);
}

const uint8_t *thread_func(const uint8_t input_buf[STR_LEN])
{
    if (my_str[STR_LEN - 1] == 0) {
        init(input_buf);
    }
    return my_str;
}
__sync_bool_compare_and_swap on each write. I would expect this to work as well, but to be slower than the first one.
static void init(const uint8_t input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN; ++i) {
        __sync_bool_compare_and_swap(my_str + i, 0, input_buf[i]);
    }
}
__sync_synchronize before each byte copy. I would expect this to work as well, but is this slower or faster than (2)? __sync_bool_compare_and_swap is supposed to be a full barrier as well, so which would be preferable?
static void init(const uint8_t input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN; ++i) {
        __sync_synchronize();
        my_str[i] = input_buf[i];
    }
}
__sync_synchronize by condition. As I understand it, __sync_synchronize is both a HW and SW memory barrier. As such, since the compiler can't tell the value of use_sync, it shouldn't reorder. And the HW reordering will be done only if use_sync is true. Is that correct?
static void init(const uint8_t input_buf[STR_LEN], bool use_sync)
{
    for (int i = 0; i < STR_LEN; ++i) {
        if (use_sync) {
            __sync_synchronize();
        }
        my_str[i] = input_buf[i];
    }
}

GNU C legacy __sync builtins are not recommended for new code, as the manual says.
Use the __atomic builtins which can take a memory-order parameter like C11 stdatomic. But they're still builtins and still work on plain types not declared _Atomic, so using them is like C++20 std::atomic_ref. In C++20, use std::atomic_ref<unsigned char>(my_str[STR_LEN - 1]), but C doesn't provide an equivalent so you'd have to use compiler builtins to hand-roll it.
Just do the last store separately with a release store in the writer, not an RMW, and definitely not a full memory barrier (__sync_synchronize()) between every byte! That's way slower than necessary, and defeats any optimization to use memcpy. Also, you need the store of the final byte to be at least RELEASE, not a plain store, so readers can synchronize with it. See also Who's afraid of a big bad optimizing compiler? re: how exactly compilers can break your code if you try to hand-roll lockless code with just barriers, not atomic loads or stores. (It's written for Linux kernel code, where a macro would use *(volatile char*) to hand-roll something close to __atomic_store_n with __ATOMIC_RELAXED.)
So something like
__atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1], __ATOMIC_RELEASE);
The if (my_str[STR_LEN - 1] == 0) load in thread_func is of course data-race UB when there are concurrent writers.
For safety it needs to be an acquire load, like __atomic_load_n(&my_str[STR_LEN - 1], __ATOMIC_ACQUIRE) == 0, since you need a thread that loads a non-0 value to also see all other stores by another thread that ran init(). (Which did a release-store to that location, creating acquire/release synchronization and guaranteeing a happens-before relationship between these threads.)
See https://preshing.com/20120913/acquire-and-release-semantics/
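Putting those two pieces together, a minimal sketch of what the writer/reader pair could look like (STR_LEN and my_str are from the question; using memcpy for the leading bytes is my choice, and it still has the same-value-race caveat discussed below):
#include <stdint.h>
#include <string.h>

#define STR_LEN 100
static uint8_t my_str[STR_LEN] = {0};

static void init(const uint8_t input_buf[STR_LEN])
{
    /* Plain copy of everything except the final byte. */
    memcpy(my_str, input_buf, STR_LEN - 1);
    /* Release store of the final (non-zero) byte: a thread that
       acquire-loads this byte and sees it non-zero is guaranteed to
       also see all of the stores above. */
    __atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1], __ATOMIC_RELEASE);
}

const uint8_t *thread_func(const uint8_t input_buf[STR_LEN])
{
    /* Acquire load pairs with the release store in init(). */
    if (__atomic_load_n(&my_str[STR_LEN - 1], __ATOMIC_ACQUIRE) == 0) {
        init(input_buf);
    }
    return my_str;
}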
Writing the same value non-atomically is also UB in ISO C and ISO C++. See Race Condition with writing same value in C++? and others.
But in practice it should be fine, except with clang -fsanitize=thread. In theory a DeathStation9000 could implement non-atomic stores by storing value+1 and then subtracting 1, so temporarily there'd be a different value in memory. But AFAIK there aren't real compilers that do that. I'd have a look at the generated asm on any new compiler / ISA combination you're trying, just to make sure.
It would be hard to test; the init stuff can only race once per program invocation. But there's no fully safe way to do it that doesn't totally suck for performance, AFAIK. Perhaps doing the init with a cast to _Atomic unsigned char* or typedef _Atomic unsigned long __attribute__((may_alias)) aliasing_atomic_ulong; as a building block for a manual copy loop?
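For what it's worth, here's a sketch of that idea using the __atomic builtins on plain bytes instead of an actual cast to _Atomic (the helper name is made up):
/* Hypothetical variant of init(): every store is an atomic access, so
   even a racy duplicate init() isn't a plain-store data race.  The last
   byte is still the release store that publishes the whole array. */
static void init_atomic_bytes(const uint8_t input_buf[STR_LEN])
{
    for (int i = 0; i < STR_LEN - 1; ++i) {
        __atomic_store_n(&my_str[i], input_buf[i], __ATOMIC_RELAXED);
    }
    __atomic_store_n(&my_str[STR_LEN - 1], input_buf[STR_LEN - 1], __ATOMIC_RELEASE);
}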
Bonus question: if(use_sync) __sync_synchronize() inside the loop.
Since the compiler can't tell the value of use_sync it shouldn't reorder.
The compiler can optimize this to asm that works something like if(use_sync) { slow barrier loop } else { no-barrier loop }. This is called "loop unswitching": making two loops and branching once to decide which to run, instead of branching every iteration. GCC has been able to do that optimization (in some cases) since 3.4. So that defeats your attempt to use the way the compiler would have to compile it to trick it into doing more ordering than the source actually requires; see the sketch below.
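In source terms, the unswitched code the compiler is allowed to generate looks roughly like this (a sketch of the transformation, not what any particular GCC version is guaranteed to emit):
static void init(const uint8_t input_buf[STR_LEN], bool use_sync)
{
    if (use_sync) {
        for (int i = 0; i < STR_LEN; ++i) {
            __sync_synchronize();              /* barrier every iteration */
            my_str[i] = input_buf[i];
        }
    } else {
        for (int i = 0; i < STR_LEN; ++i) {
            my_str[i] = input_buf[i];          /* free to reorder / vectorize */
        }
    }
}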
And the HW reordering will be done only if use_sync is true.
Yes, that part is correct.
Also, inlining and constant-propagation of use_sync could easily defeat this, unless use_sync was a volatile global or something. At that point you might as well just make a separate _Atomic unsigned char array_init_done flag / guard variable.
And you can use it for mutual exclusion by having threads try to set it to 1 with int old = guard.exchange(1); the winner of the race (the one that saw 0) runs init while the losers spin-wait (or C++20 .wait(1)) for the guard variable to become 2 (or -1, or whatever you pick), which the winner sets after finishing init.
Have a look at the asm GCC makes for non-constant-initialized static local vars; they check a guard variable with an acquire load, only doing locking to have one thread do the run_once init stuff and the others wait for that result. IIRC there's a Q&A about doing that yourself with atomics.
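A rough C sketch of that run-once pattern with the __atomic builtins (the 0/1/2 guard encoding and the function name are my choices, and I'm using a compare-exchange rather than a plain exchange so a late arrival can't overwrite the "done" state):
/* Guard states: 0 = not initialized, 1 = init in progress, 2 = done. */
static unsigned char init_guard = 0;

const uint8_t *thread_func_guarded(const uint8_t input_buf[STR_LEN])
{
    if (__atomic_load_n(&init_guard, __ATOMIC_ACQUIRE) != 2) {
        unsigned char expected = 0;
        if (__atomic_compare_exchange_n(&init_guard, &expected, 1, 0,
                                        __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE)) {
            init(input_buf);                           /* we won the race */
            __atomic_store_n(&init_guard, 2, __ATOMIC_RELEASE);
        } else {
            while (__atomic_load_n(&init_guard, __ATOMIC_ACQUIRE) != 2) {
                /* spin; a pause / yield / futex wait would go here */
            }
        }
    }
    return my_str;
}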

Related

Clear variable on the stack

Code Snippet:
int secret_foo(void)
{
    int key = get_secret();
    /* use the key to do highly privileged stuff */
    ....
    /* Need to clear the value of key on the stack before exit */
    key = 0;
    /* Any half decent compiler would probably optimize out the statement above */
    /* How can I convince it not to do that? */
    return result;
}
I need to clear the value of a variable key from the stack before returning (as shown in the code).
In case you are curious, this was an actual customer requirement (embedded domain).
You can use volatile (emphasis mine):
Every access (both read and write) made through an lvalue expression of volatile-qualified type is considered an observable side effect for the purpose of optimization and is evaluated strictly according to the rules of the abstract machine (that is, all writes are completed at some time before the next sequence point). This means that within a single thread of execution, a volatile access cannot be optimized out or reordered relative to another visible side effect that is separated by a sequence point from the volatile access.
volatile int key = get_secret();
volatile might be overkill sometimes as it would also affect all the other uses of a variable.
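If you don't want every use of key to pay the volatile cost, a common variant is to make only the final clearing write volatile (a sketch; strictly speaking the standard leaves accesses to a non-volatile object through a volatile lvalue implementation-defined, but mainstream compilers honour it):
int secret_foo(void)
{
    int key = get_secret();
    /* use the key to do highly privileged stuff */
    /* ... */
    *(volatile int *)&key = 0;   /* clearing write can't be optimized out */
    return result;
}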
Use memset_s (since C11): http://en.cppreference.com/w/c/string/byte/memset
memset may be optimized away (under the as-if rules) if the object modified by this function is not accessed again for the rest of its lifetime. For that reason, this function cannot be used to scrub memory (e.g. to fill an array that stored a password with zeroes). This optimization is prohibited for memset_s: it is guaranteed to perform the memory write.
int secret_foo(void)
{
    int key = get_secret();
    /* use the key to do highly privileged stuff */
    ....
    memset_s(&key, sizeof(int), 0, sizeof(int));
    return result;
}
You can find other solutions for various platforms/C standards here: https://www.securecoding.cert.org/confluence/display/c/MSC06-C.+Beware+of+compiler+optimizations
Addendum: have a look at this article Zeroing buffer is insufficient which points out other problems (besides zeroing the actual buffer):
With a bit of care and a cooperative compiler, we can zero a buffer — but that's not what we need. What we need to do is zero every location where sensitive data might be stored. Remember, the whole reason we had sensitive information in memory in the first place was so that we could use it; and that usage almost certainly resulted in sensitive data being copied onto the stack and into registers.
Your key value might have been copied into another location (like a register or temporary stack/memory location) by the compiler and you don't have any control to clear that location.
If you go with dynamic allocation you can control wiping that memory and not be bound by what the system does with the stack.
int secret_foo(void)
{
    int *key = malloc(sizeof(int));
    *key = get_secret();
    /* use the key to do highly privileged stuff */
    // other magical things...
    memset(key, 0, sizeof(int));   /* wipe the heap copy before returning */
    return result;
}
One solution is to disable compiler optimizations for the section of the code that you don't want optimized:
int secret_foo(void) {
    int key = get_secret();
#pragma GCC push_options
#pragma GCC optimize ("O0")
    key = 0;
#pragma GCC pop_options
    return result;
}

Comparing a volatile array to a non-volatile array

Recently I needed to compare two uint arrays (one volatile and the other non-volatile) and the results were confusing; there must be something I misunderstood about volatile arrays.
I need to read an array from an input device and write it to a local variable before comparing this array to a global volatile array. If there is any difference, I need to copy the new one onto the global one and publish the new array to other platforms. The code is something like below:
#define ARRAYLENGTH 30
volatile uint8 myArray[ARRAYLENGTH];

void myFunc(void){
    uint8 shadow_array[ARRAYLENGTH], change = 0;
    readInput(shadow_array);
    for (int i = 0; i < ARRAYLENGTH; i++) {
        if (myArray[i] != shadow_array[i]) {
            change = 1;
            myArray[i] = shadow_array[i];
        }
    }
    if (change) {
        char arrayStr[ARRAYLENGTH*4];
        array2String(arrayStr, myArray);
        publish(arrayStr);
    }
}
However, this didn't work: every time myFunc runs, a new message is published, mostly identical to the earlier one.
So I inserted a log line into code:
for (int i = 0; i < ARRAYLENGTH; i++) {
    if (myArray[i] != shadow_array[i]) {
        change = 1;
        log("old:%d,new:%d\r\n", myArray[i], shadow_array[i]);
        myArray[i] = shadow_array[i];
    }
}
Logs I got was as below:
old:0,new:0
old:8,new:8
old:87,new:87
...
Since fixing the bug was time critical, I worked around the issue as below:
char arrayStr[ARRAYLENGTH*4];
char arrayStr1[ARRAYLENGTH*4];
array2String(arrayStr, myArray);
array2String(arrayStr1, shadow_array);
if (strCompare(arrayStr, arrayStr1)) {
    publish(arrayStr1);
}
But this approach is far from efficient. If anyone has a reasonable explanation, I would like to hear it.
Thank you.
[updated from comments:]
For the volatile part, global array has to be volatile, since other threads are accessing it.
If the global array is volatile, your tracing code could be inaccurate:
for (int i = 0; i < ARRAYLENGTH; i++) {
    if (myArray[i] != shadow_array[i]) {
        change = 1;
        log("old:%d,new:%d\r\n", myArray[i], shadow_array[i]);
        myArray[i] = shadow_array[i];
    }
}
The trouble is that the comparison line reads myArray[i] once, but the logging message reads it again, and since it is volatile, there's no guarantee that the two reads will give the same value. An accurate logging technique would be:
for (int i = 0; i < ARRAYLENGTH; i++)
{
    uint8 value;
    if ((value = myArray[i]) != shadow_array[i])
    {
        change = 1;
        log("old:%d,new:%d\r\n", value, shadow_array[i]);
        myArray[i] = shadow_array[i];
    }
}
This copies the value used in the comparison and reports that. My gut feel is it is not going to show a difference, but in theory it could.
global array has to be volatile, since other threads are accessing it
As you "nicely" observe, declaring an array volatile is not the way to protect it against concurrent read/write access by different threads.
Use a mutex for this, for example by wrapping access to the "global array" in a function which locks and unlocks this mutex, and then only using this function to access the "global array".
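A minimal sketch of that wrapper using POSIX threads (uint8 and ARRAYLENGTH are from the question; the function names, and the assumption that pthreads is available rather than some RTOS mutex API, are mine):
#include <pthread.h>
#include <string.h>

static uint8 myArray[ARRAYLENGTH];                  /* no longer volatile */
static pthread_mutex_t myArray_lock = PTHREAD_MUTEX_INITIALIZER;

/* Copy the shared array into 'out' under the lock. */
void myArray_read(uint8 out[ARRAYLENGTH])
{
    pthread_mutex_lock(&myArray_lock);
    memcpy(out, myArray, ARRAYLENGTH);
    pthread_mutex_unlock(&myArray_lock);
}

/* Replace the shared array; returns 1 if the contents actually changed. */
int myArray_update(const uint8 in[ARRAYLENGTH])
{
    int changed;
    pthread_mutex_lock(&myArray_lock);
    changed = memcmp(myArray, in, ARRAYLENGTH) != 0;
    if (changed)
        memcpy(myArray, in, ARRAYLENGTH);
    pthread_mutex_unlock(&myArray_lock);
    return changed;
}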
References:
Why is volatile not considered useful in multithreaded C or C++ programming?
https://www.kernel.org/doc/Documentation/volatile-considered-harmful.txt
Also, for printf()ing unsigned integers use the conversion specifier %u, not %d.
A variable (or array) should be declared volatile when it may change outside the current program execution flow. This may happen due to concurrent threads or an ISR.
If, however, there is only one writer and all others are just readers, then the writing code may treat it as non-volatile (even though there is no way to tell the compiler so).
So if the comparison function is the only point in the project where the global array is actually changed (updated), then there is no problem with multiple reads. The code can be designed with the (external) knowledge that there will be no change by an external source, despite the volatile declaration.
The "readers", however, do know that the variable (or the array content) may change and won't buffer their reads (e.g. by storing the read value in a register for further use), but the array content may still change while they are reading it, and the whole information might be inconsistent.
So the suggested use of a mutex is a good idea.
It does not, however, help with the original problem that the comparison loop fails even though nobody is messing with the array from outside.
Also, I wonder why myArray is declared volatile if it is only used locally and the publishing is done by sending out a pointer to arrayStr (which is a non-volatile char array).
There is no reason why myArray should be volatile. Actually, there is no reason for its existence at all:
Just read in the data, create a temporary string, and if it differs from the original one, replace the old string and publish it. It may be less efficient to always build the string, but it makes the code much shorter and apparently works.
static char arrayStr[ARRAYLENGTH*4] = {0};
char tempStr[ARRAYLENGTH*4];
array2String(tempStr, shadow_array);
if (strCompare(arrayStr, tempStr)) {
    strCopy(arrayStr, tempStr);
    publish(arrayStr);
}

Volatile keyword in C [duplicate]

This question already has answers here: Why is volatile needed in C?
I am writing a program for ARM in a Linux environment. It's not a low-level program; say, app level.
Can you clarify for me what the difference is between
int iData;
vs
volatile int iData;
Does it have a hardware-specific impact?
Basically, volatile tells the compiler "the value here might be changed by something external to this program".
It's useful when you're (for instance) dealing with hardware registers, that often change "on their own", or when passing data to/from interrupts.
The point is that it tells the compiler that each access of the variable in the C code must generate a "real" access to the relevant address, it can't be buffered or held in a register since then you wouldn't "see" changes done by external parties.
For regular application-level code, volatile should never be needed unless (of course) you're interacting with something a lot lower-level.
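For example (a sketch; the register name, address, and bit mask here are all made up):
/* Hypothetical memory-mapped status register. */
#define UART_STATUS (*(volatile unsigned int *)0x4000C018)
#define TX_READY    0x01u

/* Without volatile the compiler could read the register once and spin
   forever on that cached value; with volatile every test is a real load. */
void wait_for_tx_ready(void)
{
    while ((UART_STATUS & TX_READY) == 0) {
        /* busy-wait until the hardware sets the bit */
    }
}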
The volatile keyword specifies that the variable can be modified at any moment by something other than the program itself.
If we are talking about embedded systems, that can be e.g. a hardware status register, whose value may be modified by the hardware at any unpredictable moment.
From the compiler's point of view this means it must not apply optimizations that assume the variable's value, since any such assumption may be wrong and can cause unpredictable results during program execution.
By making a variable volatile, every access to it forces the compiler to re-read or re-write it in memory rather than reuse a copy held in a register. Some multithreaded programs rely on this so that one thread doesn't keep spinning on a stale register-held copy while another thread updates the variable; note, though, that volatile by itself gives no atomicity or ordering guarantees between threads.
Generally speaking, the volatile keyword is intended to prevent the compiler from applying any optimizations on the code that assume values of variables cannot change "on their own."
(from Wikipedia)
Now, what does this mean?
If you have a variable whose contents could change at any time, usually because another thread acts on it while you are referencing it in your main thread, then you may wish to mark it as volatile. Normally a variable's contents can be depended on within the scope and context in which it is used; but if code outside your scope or thread is also affecting the variable, your program needs to expect this and query the variable's true contents whenever necessary, rather than relying on a cached copy.
This is a simplification of what is going on, of course, but I doubt you will need to use volatile in most programming tasks.
In the following example, global_data is not explicitly modified, so when optimizing the compiler thinks it is never going to be modified. It therefore treats global_data as staying 0 and uses the constant 0 wherever global_data is read.
But actually global_data is updated through some other process/method (say through ptrace). By using volatile you force it to always be read from memory, so you get the updated value.
#include <stdio.h>

volatile int global_data = 0;

int main(void)
{
    printf("\n Address of global_data: %p \n", (void *)&global_data);
    while (1)
    {
        if (global_data == 0)
        {
            continue;
        }
        else if (global_data == 2)
        {
            break;
        }
    }
    return 0;
}
The volatile keyword can be used when the object is a memory-mapped I/O port.
An 8-bit memory-mapped I/O port at physical address 0x15 can be declared as
char *const ptr = (char *) 0x15;
Suppose that we want to change the value at that port at periodic intervals.
*ptr = 0;
while (*ptr) {
    *ptr = 4;   // setting a value
    *ptr = 0;   // clearing after setting
}
It may get optimized to
*ptr = 0;
while (0) {
}
volatile suppresses this optimization: the compiler must assume that the value can change at any time, even if no explicit code modifies it.
volatile char *const ptr = (volatile char *) 0x15;
It is also used when the object is modified by an ISR.
Sometimes an ISR may change values used in the mainline code:
static int num;

void interrupt(void){
    ++num;
}

int main(){
    int val;
    val = num;
    while (val != num)
        val = num;
    return val;
}
Here the compiler may optimize the while statement, i.e. produce code in such a way that the value of num is always read from a CPU register instead of from memory, so the while condition is always false. But in the actual scenario the value of num may get changed in the ISR, and that change is reflected in memory. So if the variable is declared volatile, the compiler will know that the value must always be read from memory.
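In other words, the fix for the sketch above is simply declaring the shared variable volatile:
static volatile int num;   /* every read of num in main() is now a real load */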
volatile means that the variable's value could be changed at any time by an external source. In GCC, if we don't use volatile, the code gets optimized in a way that sometimes gives unwanted behaviour.
For example, if we try to get the time from an external real-time clock and we don't use volatile, the compiler may keep reusing the value already held in a CPU register, so it will not work the way we want. If we use the volatile keyword, the value is re-read from the real-time clock every time, so it serves our purpose.
But as you said you are not dealing with any low-level hardware programming, I don't think you need to use volatile anywhere.
Thanks.

Lock-free buffer

In my code I have a buffer, and my code to add data to it is:
bool push_string(file_buffer *cb, const char* message, const unsigned short msglen)
{
    unsigned int size = msglen;
    if (cb->head >= (cb->size - size))
    {
        size = cb->size - cb->head - 1;
    }
    if (size < 1) return false;
    char* dest = cb->head += size;
    memcpy(dest, message, size);
    return (size == msglen);
}
Since I add data from multiple interrupts (which can preempt each other), I was wondering if this code is thread-safe? I marked 'cb->head' as volatile, but if another interrupt preempts exactly between the increase of 'head' and the assignment to 'dest', things could go wrong.
How can I improve this code to make it safer?
EDIT: Maybe I shouldn't have used the term 'thread-safe' because there are no threads running in parallel, just the possibility of interrupts.
C99 has no concept of threads and thus none of thread-safety either. Only C11 has.
In C99 the only data type that is interrupt safe is sig_atomic_t, but evidently this says nothing about threads either.
Generally you are completely mistaken in attempting to access data structures concurrently; volatile is no guarantee at all that you receive sensible data. There is no guarantee of atomicity for any of the operations, even in C11, so you could, e.g., be in a situation where the lower half of a pointer value is already written but not the upper half. This would give you a completely bogus result. Since such a thing would perhaps happen only once in a million runs, or only under special circumstances (heavy load, e.g.), this can lead to bugs that are very difficult to trace.
Don't do that.
C11 gives you new tools to handle such things, in particular atomic operations. It is not completely implemented but many compilers already have extensions that could help you. I have wrapped some of these in the P99 macro package, so with certain compilers you could start to use these features as of today.
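For illustration, a sketch of how the head index could be reserved with C11 atomics (the file_buffer layout, the member names, and the handling of a full buffer are all assumptions for illustration, not your actual structure):
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

typedef struct {
    char *data;
    unsigned int size;
    _Atomic unsigned int head;   /* next free offset */
} file_buffer;

bool push_string(file_buffer *cb, const char *message, unsigned short msglen)
{
    /* Atomically claim msglen bytes; each caller gets a disjoint region,
       so the memcpy afterwards cannot be clobbered by another push. */
    unsigned int start = atomic_fetch_add(&cb->head, msglen);
    if (start + msglen > cb->size) {
        return false;            /* buffer full (the claimed space is simply lost in this sketch) */
    }
    memcpy(cb->data + start, message, msglen);
    return true;
}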
Think about signals interrupting signals... if you really need that:
You could block all relevant signals while in push_string().
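A sketch of that approach (the wrapper name and the choice of SIGUSR1/SIGUSR2 are placeholders; pick whichever signals' handlers touch the buffer):
#include <signal.h>
#include <stdbool.h>

/* push_string() and file_buffer are the ones from the question. */
bool push_string(file_buffer *cb, const char *message, unsigned short msglen);

bool push_string_protected(file_buffer *cb, const char *message,
                           unsigned short msglen)
{
    sigset_t block, old;
    bool ok;

    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);   /* whichever signals' handlers push data */
    sigaddset(&block, SIGUSR2);

    sigprocmask(SIG_BLOCK, &block, &old);
    ok = push_string(cb, message, msglen);
    sigprocmask(SIG_SETMASK, &old, NULL);
    return ok;
}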
Another, application-dependent possibility might be moving the signal handler code into the main 'thread' (the signal handler code just generates 'events' that wake up the main thread of execution). I don't have enough information about your app to say whether it is a good choice or not.

Is function call a memory barrier?

Consider this C code:
extern volatile int hardware_reg;

void f(const void *src, size_t len)
{
    void *dst = <something>;
    hardware_reg = 1;
    memcpy(dst, src, len);
    hardware_reg = 0;
}
The memcpy() call must occur between the two assignments. In general, since the compiler probably doesn't know what will the called function do, it can't reorder the call to the function to be before or after the assignments. However, in this case the compiler knows what the function will do (and could even insert an inline built-in substitute), and it can deduce that memcpy() could never access hardware_reg. Here it appears to me that the compiler would see no trouble in moving the memcpy() call, if it wanted to do so.
So, the question: is a function call alone enough to issue a memory barrier that would prevent reordering, or is, otherwise, an explicit memory barrier needed in this case before and after the call to memcpy()?
Please correct me if I am misunderstanding things.
The compiler cannot reorder the memcpy() operation before the hardware_reg = 1 or after the hardware_reg = 0 - that's what volatile will ensure - at least as far as the instruction stream the compiler emits. A function call is not necessarily a 'memory barrier', but it is a sequence point.
The C99 standard says this about volatile (5.1.2.3/5 "Program execution"):
At sequence points, volatile objects are stable in the sense that previous accesses are complete and subsequent accesses have not yet occurred.
So at the sequence point represented by the memcpy() call, the volatile access writing 1 has to have occurred, and the volatile access writing 0 cannot have occurred yet.
However, there are 2 things I'd like to point out:
Depending on what <something> is, if nothing else is done with the destination buffer, the compiler might be able to completely remove the memcpy() operation. This is the reason Microsoft came up with the SecureZeroMemory() function: SecureZeroMemory() operates on volatile-qualified pointers to prevent the writes being optimized away.
volatile doesn't necessarily imply a memory barrier (which is a hardware thing, not just a code ordering thing), so if you're running on a multi-proc machine or certain types of hardware you may need to explicitly invoke a memory barrier (perhaps wmb() on Linux).
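If you do want to be explicit about it in GNU C, here's a sketch (the "memory" asm clobber and __sync_synchronize() are real GCC constructs; whether you actually need the hardware barrier depends on your platform and device):
#include <stddef.h>
#include <string.h>

extern volatile int hardware_reg;

/* Compiler-only barrier: prevents the compiler from moving memory
   accesses across this point, but emits no instruction by itself. */
#define compiler_barrier() __asm__ __volatile__("" ::: "memory")

void f(void *dst, const void *src, size_t len)
{
    hardware_reg = 1;
    compiler_barrier();            /* keep the copy after the first write */
    memcpy(dst, src, len);
    compiler_barrier();            /* ...and before the second write */
    hardware_reg = 0;
    /* On SMP, or if the device observes memory ordering (DMA etc.), you
       may additionally need a hardware barrier, e.g. __sync_synchronize()
       in GCC or the Linux kernel's wmb(). */
}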
Starting with MSVC 8 (VS 2005), Microsoft documents that the volatile keyword implies the appropriate memory barrier, so a separate specific memory barrier call may not be necessary:
http://msdn.microsoft.com/en-us/library/12a04hfd.aspx
Also, when optimizing, the compiler must maintain ordering among references to volatile objects as well as references to other global objects. In particular,
A write to a volatile object (volatile write) has Release semantics; a reference to a global or static object that occurs before a write to a volatile object in the instruction sequence will occur before that volatile write in the compiled binary.
A read of a volatile object (volatile read) has Acquire semantics; a reference to a global or static object that occurs after a read of volatile memory in the instruction sequence will occur after that volatile read in the compiled binary.
As far as I can see your reasoning leading to
the compiler would see no trouble in moving the memcpy call
is correct. Your question is not answered by the language definition, and can only be addressed with reference to specific compilers.
Sorry to not have any more-useful information.
My assumption would be that the compiler never re-orders volatile assignments since it has to assume they must be executed at exactly the position where they occur in the code.
It's probably going to get optimized, either because the compiler inlines the memcpy call and eliminates the first assignment, or because it gets compiled to RISC code or machine code and gets optimized there.
Here is a slightly modified example, compiled with gcc 7.2.1 on x86-64:
#include <string.h>

static int temp;
extern volatile int hardware_reg;

int foo (int x)
{
    hardware_reg = 0;
    memcpy(&temp, &x, sizeof(int));
    hardware_reg = 1;
    return temp;
}
gcc knows that the memcpy() is the same as an assignment, and knows that temp is not accessed anywhere else, so temp and the memcpy() disappear completely from the generated code:
foo:
    movl    $0, hardware_reg(%rip)
    movl    %edi, %eax
    movl    $1, hardware_reg(%rip)
    ret
