I have a multithreaded application with one producer thread (main) and multiple consumers.
Now from main I want to have some sort of percentage of how far into the work the consumers are. Implementing a counter is easy, as the work is done in a loop. However, since this loop repeats a few thousand times, maybe even more than a million times, I don't want to mutex this part. So I went looking into some atomic options for writing to an int.
As far as I understand I can use the builtin atomic functions from gcc:
https://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html
however, it doesn't have a function for just reading the variable I want to work on.
So basically my question is:
can I read the variable safely from my producer, as long as I use the atomic builtins for writing to that same variable in the consumers,
or
do I need some different function to read the variable, and if so, which one?
Define "safely".
If you just use a regular read, on x86, for naturally aligned 32-bit or smaller data, the read is atomic, so you will always read a valid value rather than one containing some bytes written by one thread and some by another. If any of those things are not true (not x86, not naturally aligned, larger than 32 bits...) all bets are off.
That said, you have no guarantee whatsoever that the value read will be particularly fresh, or that the sequence of values seen over multiple reads will be in any particular order. I have seen naive code (using volatile to stop the compiler optimising the read away entirely, but with no other synchronisation mechanism) literally never see an updated value, due to CPU caching.
If any of these things matter to you, and they really should, you should explicitly make the read atomic and use the appropriate memory barriers. The intrinsics you refer to take care of both of these things for you: you could call one of the atomic intrinsics in such a way that there is no side effect other than returning the value:
__sync_val_compare_and_swap(ptr, 0, 0)
or
__sync_add_and_fetch(ptr, 0)
or
__sync_sub_and_fetch(ptr, 0)
or whatever
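Putting the write and read sides together, here is a minimal sketch of the idea; the counter name, thread function and work total are my own illustrative assumptions, not from the question:
#include <stddef.h>

/* Hypothetical shared progress counter. */
static int progress = 0;
#define TOTAL_WORK 1000000

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < TOTAL_WORK; i++) {
        /* ... one unit of work ... */
        __sync_fetch_and_add(&progress, 1);   /* atomic write side */
    }
    return NULL;
}

/* In the producer (main): an atomic no-op read, so the builtin's
   full barrier also guarantees a reasonably fresh value. */
static int read_progress(void)
{
    return __sync_add_and_fetch(&progress, 0);   /* atomic read side */
}
As another answer below points out, an atomic increment on every iteration of a hot loop is not free; updating the shared counter only every N iterations is one common mitigation.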
If your compiler supports it, you can use C11 atomic types. They are introduced in section 7.17 of the standard, but they are unfortunately optional, so you will have to check whether __STDC_NO_ATOMICS__ is defined, to at least throw a meaningful error if they're not supported.
With gcc, you apparently need at least version 4.9, because otherwise the <stdatomic.h> header is missing (there is an SO question about this, but I can't verify because I don't have GCC 4.9).
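For illustration, a minimal sketch of the same progress counter using C11 atomics; the names are hypothetical, and relaxed ordering is sufficient for a standalone counter where only the approximate value matters:
#if defined(__STDC_NO_ATOMICS__)
#error "this compiler does not support C11 atomics"
#else
#include <stdatomic.h>

atomic_int progress;   /* zero-initialised at file scope */

void consumer_step(void)
{
    /* atomic increment; no ordering of other data is needed here */
    atomic_fetch_add_explicit(&progress, 1, memory_order_relaxed);
}

int producer_read(void)
{
    /* atomic read of the same variable from the producer */
    return atomic_load_explicit(&progress, memory_order_relaxed);
}
#endif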
I'll answer your question, but you should know upfront that atomics aren't cheap. The CPU has to synchronize between cores every time you use atomics, and you won't like the performance results if you use atomics in a tight loop.
The page you linked to lists atomic operations for the writer, but says nothing about how such variables should be read. The answer is that your other CPU cores will "see" the updated values, but your compiler may "cache" the old value in a register or on the stack. To prevent this behavior, I suggest you declare the variable volatile to force your compiler not to cache the old value.
The only safety issue you will encounter is stale data, as described above.
If you try to do anything more complex with atomics, you may run into subtle and random issues with the order atomics are written to by one thread versus the order you see those changes in another thread. Unfortunately you're not using a built-in language feature, and the compiler builtins aren't designed perfectly. If you choose to use these builtins, I suggest you keep your logic very simple.
If I understood the problem, I would not use any atomic variable for the counters. Each worker thread can have a separate counter that it updates locally; the master thread can read the whole array of counters for an approximate snapshot, so each counter becomes a one-producer, one-consumer problem. The counters can be made visible to the master thread, for example every 5 seconds, by using __sync_synchronize() or similar.
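A minimal sketch of that layout; the sizes and names are illustrative assumptions, and the padding avoids false sharing between the workers' cache lines:
#define NUM_WORKERS 4
#define CACHE_LINE  64

struct padded_counter {
    volatile unsigned long count;
    char pad[CACHE_LINE - sizeof(unsigned long)];
};

static struct padded_counter counters[NUM_WORKERS];

/* Worker i only ever touches its own slot: counters[i].count++; */

/* The master sums the array for an approximate snapshot: */
unsigned long snapshot_total(void)
{
    unsigned long total = 0;
    __sync_synchronize();   /* full barrier before sampling, as suggested above */
    for (int i = 0; i < NUM_WORKERS; i++)
        total += counters[i].count;
    return total;
}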
I see code like __CPROVER_fence("RRfence", "RWfence"); being used in projects like the Linux RCU testing and pthread wrappers for CBMC analysis. I looked at the online documentation, but found no text on the strings passed to this CBMC function.
What are the available parameters to __CPROVER_fence?
My take is that it is an annotation/function to denote barriers/fences performed by the real implementation, i.e. an analysis stub for the real functionality. Obviously the parameters denote types of barriers, but I have not found a reference to the actual parameters and the corresponding barrier types. E.g., "RRfence" would be a read fence, meaning reads before this point will complete before reads after this point (as a guess).
Use the source, Luke.
All the annotations are 'fences' saying that some CPU barrier primitive has prevented the given hazard type from occurring; that is, all prior reads/writes will be committed to cache before the subsequent reads/writes. The 'function' can accept any combination/permutation of these values.
The 'cumul' versions are documented in section 4.4.2 of Herding cats...; they are 'transitive' fences which ensure that all threads/cores will see the ordering.
Indeed, as Hasturkun has pointed out, there are eight annotations accepted by __CPROVER_fence(). See the Wikipedia article on hazards for details, as well as the other cited papers.
RRfence, not a specific hazard, but could cause events to cascade
RWfence, an anti-dependency which could be problematic for dependent items
WRfence, a specific hazard involving only one variable
WWfence, an output hazard which could result with only one 'variable'
WWcumul
RRcumul
RWcumul
WRcumul
The 'cumul' versions are similar to the normal 'fence', with the addition that the order is preserved for all cores. For instance, on ARM CPUs, all fences are of the 'cumul' type. The plain 'fence' only addresses out-of-order issues, such as pipelines and/or write buffers, on a single core.
Some things, like a counter that does not depend on other values being consistent, can be fine with only some fences. However, other values/tuples/structs are multi-address (non-atomic to load/store) and may require that multiple values be read/written in a consistent way. See the taxonomy of interdependent accesses in Table III, 'Glossary of litmus test names', of Herding cats....
The page Software Verification for Weak Memory at CProver is a Rosetta stone for this topic. It refers mainly to the tool 'musketeer', but further reading will show that many concepts are incorporated into the CBMC tool. Even the URL of the page contains 'wmm', which is also the name of the 'goto-instrument' subdirectory where this functionality is implemented.
Paper on weak memory - the end of section 1 (pg A:5) details the incorporation of these models into CBMC. The models are TSO, PSO, RMO, and Power; these can be found in 'cbmc/src/goto-instrument/wmm/wmm.h'.
Software Verification for Weak Memory via Program Transformation describes a fix to PostgreSQL and the instrumentation of the Linux RCU code; ergo, the cited open-source uses probably derive from the CBMC weak-memory implementors themselves, which would also explain why there is no online documentation.
The directories cbmc-concurrency and ansi-c have many examples of the use of __CPROVER_fence() and other memory-modelling primitives.
Sample CProver code implementing pthread_mutex_lock() for analysis:
inline int pthread_mutex_lock(pthread_mutex_t *mutex)
{
  __CPROVER_HIDE:;
#ifdef __CPROVER_CUSTOM_BITVECTOR_ANALYSIS
  __CPROVER_assert(__CPROVER_get_must(mutex, "mutex-init"),
                   "mutex must be initialized");
  __CPROVER_assert(!__CPROVER_get_may(mutex, "mutex-destroyed"),
                   "mutex must not be destroyed");
  __CPROVER_assert(__CPROVER_get_must(mutex, "mutex-recursive") ||
                   !__CPROVER_get_may(mutex, "mutex-locked"),
                   "attempt to lock non-recursive locked mutex");
  __CPROVER_set_must(mutex, "mutex-locked");
  __CPROVER_set_may(mutex, "mutex-locked");
  __CPROVER_assert(*((__CPROVER_mutex_t *)mutex)!=-1,
                   "mutex not initialised or destroyed");
#else
  __CPROVER_atomic_begin();
  __CPROVER_assume(!*((__CPROVER_mutex_t *)mutex));
  *((__CPROVER_mutex_t *)mutex)=1;
  __CPROVER_atomic_end();
  __CPROVER_fence("WWfence", "RRfence", "RWfence", "WRfence",
                  "WWcumul", "RRcumul", "RWcumul", "WRcumul");
#endif
  return 0; // we never fail
}
My understanding is that it is checking for,
only a single lock
lock is initialized before call
lock is not destroyed while held
lock is not re-initialized while held
all weak ordering memory items are resolved (all caches flushed) after the call.
Interestingly, man pthread_mutex_lock doesn't say anything about CPU synchronization or fences. We have priority inversion, deadlock, etc. with mutex/lock programming, but it also has a price in pipeline performance. Indeed, on ARM/Linux/glibc, kuser_cmpxchg32_fixup calls smp_dmb arm to fulfil this requirement. A similar regression test shows failures of pthread_create(), where a write buffer could leave a value in an indeterminate state on initial thread startup unless a barrier is inserted, as pthread_create() doesn't provide this synchronization.
It seems this work is fairly recent (by some standards), with papers dated 2013 and the Linux RCU commit in 2016. It is possible the authors wish to keep the API fluid. Probably they are concentrating on the more enjoyable task of proving the provers and didn't have time to document this interface.
Doug Lea's cookbook - helpful for memory-ordering types versus concrete CPU types.
C++ mapping to CPU - CPU instructions used by C++ memory order types.
Note: Earlier versions of this answer assumed that cache synchronization with non-cached bus masters was covered by this API. That is not the case. System programmers trying to use CBMC need to use other mechanisms, if this is a concern.
Let me explain what I mean by a data consistency issue. Take the following scenario, for example:
uint16_t x, y;
x = 0x01FF;
y = x;
Clearly, these variables are 16-bit, but if this code runs on an 8-bit CPU, the read and write operations will not be atomic. Thereby an interrupt can occur in between and change the value. This is one situation which MIGHT lead to data inconsistency.
Here's another example,
if(x>7) //x is global variable
{
switch(x)
{
case 8://do something
break;
case 10://do something
break;
default: //do default
}
}
In the above code excerpt, if an interrupt changes the value of x from 8 to 5 after the if statement but before the switch statement, we end up in the default case instead of case 8.
Please note, I'm looking for ways to detect such scenarios (but not solutions)
Are there any tools that can detect such issues in Embedded C?
It is possible for a static analysis tool that is context (thread/interrupt) aware to determine the use of shared data, and such a tool could recognise specific mechanisms that protect such data (or the lack thereof).
One such tool is Polyspace Code Prover; it is very expensive and very complex, and does a lot more besides what is described above. Specifically, to quote (elided) from the whitepaper here:
With abstract interpretation the following program elements are interpreted in new ways:
[...]
Any global shared data may change at any time in a multitask program, except when protection
mechanisms, such as memory locks or critical sections, have been applied
[...]
It may have improved in the long time since I used it, but one issue I had was that it worked on a lock-access-unlock idiom, where you specified to the tool what the lock/unlock calls or macros were. The problem with that was that the C++ project I worked on used a smarter method: a locking object (a mutex, scheduler lock or interrupt disable, for example) locked on instantiation (in the constructor) and unlocked in the destructor, so that it unlocked automatically when the object went out of scope (a lock-by-scope idiom). This meant that the unlock was implicit and invisible to Polyspace. It could, however, at least identify all the shared data.
Another issue with the tool is that you must specify all thread and interrupt entry points for concurrency analysis, and in my case these were private virtual functions in task and interrupt classes, again making them invisible to Polyspace. This was solved by conditionally making the entry points public for the abstract analysis only, but it meant that the code being tested did not have the exact semantics of the code to be run.
Of course these are non-problems for C code, and in my experience Polyspace is much more successfully applied to C in any case; you are far less likely to find yourself writing code in a style that suits the tool, rather than having the tool work with your existing code-base.
There are no such tools as far as I am aware. And that is probably because you can't detect them.
Pretty much every operation in your C code has the potential to be interrupted before it is finished. Less obvious than the 16-bit scenario, there is also this:
uint8_t a, b;
...
a = b;
There are no guarantees that this is atomic! The above assignment may well translate to multiple assembler instructions, such as 1) load b into a register, 2) store the register at the memory address of a. You can't know this unless you disassemble the compiled code and check.
This can create very subtle bugs. Any assumption of the kind "as long as I use 8-bit variables on my 8-bit CPU, I can't get interrupted" is naive. Even if such code results in atomic operations on a given CPU, the code is non-portable.
The only reliable, fully-portable solution is to use some manner of semaphore. On embedded systems, this could be as simple as a bool variable. Another solution is to use inline assembler, but that can't be ported cross platform.
To solve this, C11 introduced the qualifier _Atomic to the language. However, C11 support among embedded systems compilers is still mediocre.
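Where C11 is available, here is a sketch of how the earlier if/switch example could be made safe against an interrupt rewriting x mid-decision; it assumes the ISR also accesses x atomically:
#include <stdatomic.h>
#include <stdint.h>

_Atomic uint16_t x;   /* shared with an interrupt handler */

void poll(void)
{
    /* Read the shared variable exactly once into a local copy;
       the ISR may change x at any time, but not our copy. */
    uint16_t local = atomic_load(&x);

    if (local > 7) {
        switch (local) {   /* guaranteed to be the value the 'if' tested */
        case 8:  /* do something */ break;
        case 10: /* do something */ break;
        default: /* do default */   break;
        }
    }
}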
My reading of the C11 spec with regards to atomic operation ordering suggests that memory_order_seq_cst applies to operations on a specific atomic object.
Mostly, the descriptions are of the form "If operations A and B are applied to M, then the order is maintained on M".
My question is specifically what happens if we have two operations that apply to different atomic objects. Something like the following:
atomic_store(&a, 20);
atomic_store(&b, 30);
where a and b are atomic (volatile) types (and atomic_store implies memory_order_seq_cst).
This problem is relevant to a memory mapped situation where the memory map represents the registers of some peripheral.
It's perfectly normal to have requirements about the ordering of the writes. Let's say a = 20 is setting up the target for our missile peripheral and setting b = 30 is the launch command. Clearly, we don't want to launch until the missile is targeted properly.
If it makes a difference to anything, this is on ARM Linux with GCC.
Two memory accesses in the same thread are always sequenced, if they happen, and if there is a sequence point between them.
The part for "if they happen" is guaranteed here if the two objects are declared volatile. This forces the compiler to effectively emit the load or store to memory. How the compiler does this, and how he gives guarantees for that, is completely implementation dependent.Read your platform documentation for that.
The sequencing of statements has not much to do with volatile or atomics. It is implied by syntax. A good rule of thumb is that there is a sequence point at each ;, ,, {, }, ?, || and &&. (There are more than that, but things become complicated if you want to reason with them.)
None of that is about atomics. Atomics guarantee indivisibility of operations and data consistency between threads and with signal handlers. The big deal there is to have provable visibility of the side effects of operations. This is relatively involved, but it doesn't help you at all when you want to reason about things that happen in the same thread. On the contrary, the "happens before" relation between threads relies on the "sequenced before" relation within the individual threads.
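To make this concrete for the memory-mapped register case in the question, here is a sketch with made-up register addresses. The volatile accesses give the compile-time sequencing described above; the fence addresses CPU-level reordering. Strictly speaking, ordering for device memory is outside the C memory model, so the platform's documented barrier should be checked (in practice, GCC lowers this fence to a dmb instruction on ARM):
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical register addresses, for illustration only. */
#define TARGET_REG ((volatile uint32_t *)0x40000000u)
#define LAUNCH_REG ((volatile uint32_t *)0x40000004u)

void fire(void)
{
    *TARGET_REG = 20;                           /* set up the target       */
    atomic_thread_fence(memory_order_seq_cst);  /* keep the stores ordered */
    *LAUNCH_REG = 30;                           /* only then issue launch  */
}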
Context
The function BN_consttime_swap in OpenSSL is a thing of beauty. In this snippet, condition has been computed as 0 or (BN_ULONG)-1:
#define BN_CONSTTIME_SWAP(ind) \
do { \
t = (a->d[ind] ^ b->d[ind]) & condition; \
a->d[ind] ^= t; \
b->d[ind] ^= t; \
} while (0)
…
BN_CONSTTIME_SWAP(9);
…
BN_CONSTTIME_SWAP(8);
…
BN_CONSTTIME_SWAP(7);
The intention is that, so as to ensure that higher-level bignum operations take constant time, this function either swaps two bignums or leaves them in place, in constant time. When it leaves them in place, it actually reads each word of each bignum, computes a new word identical to the old word, and writes that result back to the original location.
The intention is that this will take the same time as if the bignums had effectively been swapped.
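For reference, a self-contained sketch of the same XOR-swap idiom; here 'word' and N are my stand-ins for BN_ULONG and the bignum length:
#include <stdint.h>

typedef uint64_t word;   /* stand-in for BN_ULONG */
#define N 10

/* condition must be 0 (leave in place) or ~(word)0 (swap). */
void consttime_swap(word a[N], word b[N], word condition)
{
    for (int i = 0; i < N; i++) {
        word t = (a[i] ^ b[i]) & condition;  /* 0, or a[i]^b[i] */
        a[i] ^= t;   /* unchanged, or becomes the old b[i] */
        b[i] ^= t;   /* unchanged, or becomes the old a[i] */
    }
}
Either way, every word of both arrays is read and written, which is the property the timing argument relies on.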
In this question, I assume a modern, widespread architecture such as those described by Agner Fog in his optimization manuals. Straightforward translation of the C code to assembly (without the C compiler undoing the efforts of the programmer) is also assumed.
Question
I am trying to understand whether the construct above qualifies as a "best effort" sort of constant-time execution, or as perfect constant-time execution.
In particular, I am concerned about the scenario where bignum a is already in the L1 data cache when the function BN_consttime_swap is called, and the code just after the function returns starts working on the bignum a right away. On a modern processor, enough instructions can be in flight at the same time for the copy not to be technically finished when the bignum a is used. The mechanism allowing the instructions after the call to BN_consttime_swap to work on a is memory dependence speculation. Let us assume naive memory dependence speculation for the sake of the argument.
What the question seems to boil down to is this:
When the processor finally detects that the code after BN_consttime_swap read from memory that had, contrary to speculation, been written to inside the function, does it cancel the speculative execution as soon as it detects that the address had been written to, or does it allow itself to keep it when it detects that the value that has been written is the same as the value that was already there?
In the first case, BN_consttime_swap looks like it implements perfect constant-time. In the second case, it is only best-effort constant-time: if the bignums were not swapped, execution of the code that comes after the call to BN_consttime_swap will be measurably faster than if they had been swapped.
Even in the second case, this is something that looks like it could be fixed for the foreseeable future (as long as processors remain naive enough) by, for each word of each of the two bignums, writing a value different from the two possible final values before writing either the old value again or the new value. The volatile type qualifier may need to be involved at some point to prevent an ordinary compiler from over-optimizing the sequence, but it still sounds possible.
NOTE: I know about store forwarding, but store forwarding is only a shortcut. It does not prevent a read being executed before the write it is supposed to come after. And in some circumstances it fails, although one would not expect it to in this case.
Straightforward translation of the C code to assembly (without the C compiler undoing the efforts of the programmer) is also assumed.
I know it's not the thrust of your question, and I know that you know this, but I need to rant for a minute. This does not even qualify as a "best effort" attempt to provide constant-time execution. A compiler is licensed to check the value of condition, and skip the whole thing if condition is zero. Obfuscating the setting of condition makes this less likely to happen, but is no guarantee.
Purportedly "constant-time" code should not be written in C, full stop. Even if it is constant time today, on the compilers that you test, a smarter compiler will come along and defeat you. One of your users will use this compiler before you do, and they will not be aware of the risk to which you have exposed them. There are exactly three ways to achieve constant time that I am aware of: dedicated hardware, assembly, or a DSL that generates machine code plus a proof of constant-time execution.
Rant aside, on to the actual architecture question at hand: assuming a stupidly naive compiler, this code is constant-time on the µarches with which I am familiar enough to evaluate the question, and I expect that to be broadly true for one simple reason: power. I expect that checking in a store queue or cache whether a value being stored matches the value already present, and conditionally short-circuiting the store or avoiding dirtying the cache line, would consume more energy on every store than would be saved on the rare occasions when you get to avoid some work. However, I am not a CPU designer, and do not presume to speak on their behalf, so take this with several tablespoons of salt, and please consult one before assuming this to be true.
This blog post, and the comments made by its author, Henry, on the subject of this question, should be considered as authoritative as anyone should be allowed to expect. I will reproduce the latter here for archival:
I didn’t think the case of overwriting a memory location with the same value had a practical use. I think the answer is that in current processors, the value of the store is irrelevant, only the address is important.
Out here in academia, I’ve heard of two approaches to doing memory disambiguation: Address-based, or value-based. As far as I know, current processors all do address-based disambiguation.
I think the current microbenchmark has some evidence that the value isn’t relevant. Many of the cases involve repeatedly storing the same value into the same location (particularly those with offset = 0). These were not abnormally fast.
Address-based schemes use a store queue and a load queue to track outstanding memory operations. Loads check the store queue for an address match (should this load do store-to-load forwarding instead of reading from cache?), while stores check the load queue (did this store clobber the location of a later load that was allowed to execute early?). These checks are based entirely on addresses (whether a store and load collided). One advantage of this scheme is that it's a fairly straightforward extension on top of store-to-load forwarding, since the store queue search is also used there.
Value-based schemes get rid of the associative search (i.e., faster, lower power, etc.), but require a better predictor to do store-to-load forwarding (now you have to guess whether and where to forward, rather than searching the SQ). These schemes check for ordering violations (and incorrect forwarding) by re-executing loads at commit time and checking whether their values are correct. In these schemes, if you have a conflicting store (or made some other mistake) that still resulted in the correct result value, it would not be detected as an ordering violation.
Could future processors move to value-based schemes? I suspect they might. They were proposed in the mid-2000s(?) to reduce the complexity of the memory execution hardware.
The idea behind constant-time implementation is not to actually perform everything in constant time. That will never happen on an out-of-order architecture.
The requirement is that no secret information can be revealed by timing analysis.
To prevent this there are basically two requirements:
a) Do not use anything secret as a stop condition for a loop, or as a predicate to a branch. Failing to do so will open you up to a branch-prediction attack: https://eprint.iacr.org/2006/351.pdf
b) Do not use anything secret as an index for memory access. This leads to cache-timing attacks: http://www.daemonology.net/papers/htt.pdf
As for your code: assuming that your secret is "condition", and possibly the contents of a and b, the code is perfectly constant-time in the sense that its execution does not depend on the actual contents of a, b and condition. Of course the locality of a and b in memory will affect the execution time of the loop, but not the CONTENTS, which are secret.
That is assuming, of course, that condition was computed in a constant-time manner.
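For completeness, one widespread branch-free way to derive such a 0 / all-ones mask from a secret bit (a sketch, not necessarily OpenSSL's exact code):
#include <stdint.h>

typedef uint64_t word;   /* stand-in for BN_ULONG, as before */

word mask_from_bit(unsigned bit)
{
    /* unsigned wraparound: 0 stays 0, 1 becomes 0xFF...FF */
    return (word)0 - (word)(bit & 1);
}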
As for C optimizations: the compiler can only optimize based on information it knows. If "condition" is truly secret, the compiler should not be able to deduce its contents and optimize. If it can be deduced from your code, then the compiler will most likely optimize for the zero case.
I am reviewing some source code and I was wondering if the following is thread-safe. I have heard of compiler and CPU instruction/read reordering (would it have something to do with branch prediction?), and the Data->unsafe_variable below can be modified at any time by another thread.
My question is: depending on how the compiler/CPU reorders reads/writes, would it be possible for the code below to fetch Data->unsafe_variable twice? (see the 2nd snippet)
Note: I do not worry about the first access; any data can be there as long as it does not pass the 'if'. I am just concerned by the possibility that the data would be fetched another time after the 'if'. I was also wondering if the cast to volatile here would help prevent a double fetch.
int function(void* Data) {
// Data is allocated on the heap
// What it contains at this point is not important
size_t _varSize = ((volatile DATA *)Data)->unsafe_variable;
if (_varSize > x * y)
{
return FALSE;
}
    // I do not want Data->unsafe_variable to be fetched once this point is reached;
    // I want to use the value "supposedly" stored in _varSize.
    // Would any compiler/CPU reordering allow it to be double-fetched?
size_t size = _varSize - t * q;
function_xy(size);
return TRUE;
}
Basically I do not want the program to behave like this for security reasons:
_varSize = ((volatile DATA *)Data)->unsafe_variable;
if (_varSize > x * y)
{
return FALSE;
}
size_t size = ((volatile DATA *)Data)->unsafe_variable - t * q;
function_xy(size);
I am simplifying here, and they cannot use a mutex. However, would it be safer to use _ReadWriteBarrier() or MemoryBarrier() after the first line instead of a volatile cast? (VS compiler)
Edit: Giving slightly more context to the code.
The code is broken for many reasons. I'll just point out one of the more subtle ones, as others have pointed out the more obvious ones. The object is not volatile. Casting a pointer to a pointer to a volatile object doesn't make the object volatile; it just lies to the compiler.
But there's a much bigger point -- you are going about this totally the wrong way. You are supposed to be checking whether the code is correct, that is, whether it is guaranteed to work. You aren't clever enough, nobody is, to think of every possible way the system might fail to do what you assume it will do. So instead, just don't make those assumptions.
Thinking about things like CPU read re-ordering is totally wrong. You should expect the CPU to do what, and only what, it is required to do. You should definitely not think about specific mechanisms by which it might fail, but only whether it is guaranteed to work.
What you are doing is like trying to figure out if an employee is guaranteed to show up for work by checking if he had his flu shot, checking if he is still alive, and so on. You can't check for, or even think of, every possible way he might fail to show up. So if you find that you have to check those kinds of things, then it's not guaranteed, and relying on it is broken. Period.
You cannot make reliable code by saying "the CPU doesn't do anything that can break this, so it's okay". You can make reliable code by saying "I make sure my code doesn't rely on anything that isn't guaranteed by the relevant standards."
You are provided with all the tools you need to do the job, including memory barriers, atomic operations, mutexes, and so on. Please use them.
You are not clever enough to think of every way something not guaranteed to work might fail. And you have a plethora of things that are guaranteed to work. Fix this code, and if possible, have a talk with the person who wrote it about using proper synchronization.
This sounds a bit ranty, and I apologize for that. But I've seen too much code that used "tricks" like this that worked perfectly on the test machines but then broke when a new CPU came out, a new compiler, or a new version of the OS. Fixing code like this can be an incredible pain because these hacks hide the actual synchronization requirements. The right answer is almost always to code clearly and precisely what you actually want, rather than to assume that you'll get it because you don't know of any reason you won't.
This is valuable advice from painful experience.
The standard(s) are clear. If any thread may be modifying the object, all accesses, in all threads, must be synchronized, or you have undefined behavior.
The only portable solution for C++ is C++11 atomics, which are available in the upcoming VS 2012.
As for C, I do not know whether recent C standards bring some portable facilities (I am not following that), but as you are using Visual Studio it does not matter anyway, as Microsoft is not implementing recent C standards.
Still, if you know you are developing for Visual Studio, you can rely on the guarantees provided by this compiler, which apply to both C and C++. Some of them are implicit (accessing volatile variables also implies some memory barriers), some are explicit, like using the MemoryBarrier intrinsic.
The whole topic of the memory model is discussed in depth in Lockless Programming Considerations for Xbox 360 and Microsoft Windows; this should give you a good overview. Beware: the area you are entering is full of hard topics and nasty surprises.
Note: Relying on volatile is not portable, but if you are using old C/C++ standards there is no portable solution anyway; therefore be prepared to face the need to reimplement this for a different platform should the need ever arise. When writing portable threaded code, volatile is considered almost useless:
For multi-threaded programming, there are two key issues that volatile is often mistakenly thought to address:
atomicity
memory consistency, i.e. the order of a thread's operations as seen by another thread.
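To make the contrast concrete, here is how the double-fetch concern from the question above might look with C11 atomics instead of a volatile cast. This is a sketch: the structure, parameters and function_xy prototype are filled in from the question, not from real code:
#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    _Atomic size_t unsafe_variable;   /* written by another thread */
    /* ... other fields ... */
} DATA;

extern void function_xy(size_t size);   /* assumed from the question */

int function(DATA *data, size_t x, size_t y, size_t t, size_t q)
{
    /* Exactly one load of the shared field; every later use goes
       through the local copy, so no second fetch can occur. */
    size_t var_size = atomic_load(&data->unsafe_variable);

    if (var_size > x * y)
        return 0;   /* FALSE */

    size_t size = var_size - t * q;
    function_xy(size);
    return 1;       /* TRUE */
}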