Volatile keyword in C [duplicate]

This question already has answers here:
Why is volatile needed in C?
(18 answers)
Closed 9 years ago.
I am writing a program for ARM in a Linux environment. It's not a low-level program; it's at the application level.
Can you clarify the difference between
int iData;
vs
volatile int iData;
Does it have a hardware-specific impact?

Basically, volatile tells the compiler "the value here might be changed by something external to this program".
It's useful when you're (for instance) dealing with hardware registers, that often change "on their own", or when passing data to/from interrupts.
The point is that it tells the compiler that each access of the variable in the C code must generate a "real" access to the relevant address; it can't be buffered or held in a register, since then you wouldn't "see" changes made by external parties.
For regular application-level code, volatile should never be needed unless (of course) you're interacting with something a lot lower-level.
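For instance, here is a minimal sketch of the kind of hardware-register polling loop where this matters; the register address and the STATUS_READY bit are invented for the example:
#include <stdint.h>

#define STATUS_READY 0x01u

/* Hypothetical memory-mapped status register; in real code the address
 * would come from the device's datasheet or a vendor header. */
#define STATUS_REG (*(volatile uint32_t *)0x40021000u)

void wait_until_ready(void)
{
    /* Because the access goes through a volatile-qualified lvalue, the
     * compiler must emit a real load on every iteration instead of
     * hoisting one read out of the loop and spinning on a stale value. */
    while ((STATUS_REG & STATUS_READY) == 0u) {
        /* busy-wait */
    }
}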

The volatile keyword specifies that a variable can be modified at any moment by something other than the program itself.
If we are talking about embedded systems, that could be, for example, a hardware status register whose value may be modified by the hardware at any unpredictable moment.
That is why, from the compiler's point of view, it is forbidden to apply optimizations to this variable: any assumption about its value would be wrong and could cause unpredictable results in the program's execution.

By making a variable volatile, you force the compiler to fetch it from memory on every access rather than reusing a copy held in a register. This can matter in multithreaded programs, where one thread might otherwise keep reusing a stale register copy of a value another thread has updated, although volatile by itself does not provide the synchronization that threads normally need. What it does ensure is that every read or write of the volatile variable is a real memory access (not cached in a register).

Generally speaking, the volatile keyword is intended to prevent the compiler from applying any optimizations on the code that assume values of variables cannot change "on their own."
(from Wikipedia)
Now, what does this mean?
If you have a variable whose contents could change at any time, usually because another thread acts on it while you are referencing it in the main thread, then you may wish to mark it as volatile. Normally a variable's contents can be relied on within the scope and context in which it is used; but if code outside your scope or thread also affects the variable, your program needs to expect this and re-read the variable's true contents whenever necessary, rather than relying on a previously fetched value.
This is a simplification of what is going on, of course, but I doubt you will need to use volatile in most programming tasks.

In the following example, global_data is not explicitly modified anywhere in the program. Without volatile, the optimizer would assume it is never going to be modified, fold it to 0, and use 0 wherever global_data is used.
But global_data may actually be updated through some other process or mechanism (say, through ptrace). By declaring it volatile you force the compiler to always read it from memory, so you get the updated value.
#include <stdio.h>

volatile int global_data = 0;

int main(void)
{
    /* %p expects a void *; the explicit cast also drops the volatile
       qualifier, which is fine for printing the address. */
    printf("\n Address of global_data: %p \n", (void *)&global_data);

    while (1)
    {
        if (global_data == 0)
        {
            continue;
        }
        else if (global_data == 2)
        {
            break;
        }
    }
    return 0;
}

The volatile keyword can be used:
when the object is a memory-mapped I/O port.
An 8-bit memory-mapped I/O port at physical address 0x15 can be declared as
char *const ptr = (char *) 0x15;
Suppose that we want to change the value at that port at periodic intervals.
*ptr = 0;
while(*ptr){
    *ptr = 4;   // Setting a value
    *ptr = 0;   // Clearing after setting
}
It may get optimized as
*ptr = 0 ;
while(0){
}
volatile suppresses this optimization: the compiler must assume that the value can be changed at any time, even if no explicit code modifies it.
volatile char *const ptr = (volatile char *) 0x15;
when the object is modified by an ISR.
Sometimes an ISR may change values used in the mainline code:
static int num;

void interrupt(void){
    ++num;
}

int main(){
    int val;
    val = num;
    while(val != num)
        val = num;
    return val;
}
Here the compiler does some optimization on the while statement, i.e. it produces code in such a way that the value of num is always read from a CPU register instead of from memory, so the while condition will always be false. But in the actual scenario the value of num may be changed in the ISR, and that change is reflected in memory. So if the variable is declared as volatile, the compiler knows that the value must always be read from memory.
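To make that fix concrete, here is the same example with num declared volatile; this is only a sketch, since how the interrupt handler actually gets installed is platform-specific:
/* volatile: the ISR may change num at any time, so every read in main()
 * must come from memory rather than from a cached register copy. */
static volatile int num;

void interrupt(void){           /* hooked up by platform-specific code */
    ++num;
}

int main(){
    int val = num;
    /* The compiler must re-read num on every comparison, so the loop
     * actually notices updates made by the ISR. */
    while(val != num)
        val = num;
    return val;
}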

volatile means that the variable's value could be changed at any time by an external source. In GCC, if we don't use volatile, the compiler optimizes the code, which sometimes gives unwanted behaviour.
For example, if we try to get the current time from an external real-time clock and don't use volatile, the compiler may just keep reusing the value already held in a CPU register, so the code will not work the way we want. If we use the volatile keyword, it will read from the real-time clock every time, so it serves our purpose.
But as you said you are not dealing with any low-level hardware programming, so I don't think you need to use volatile anywhere.
thanks

Related

Clear variable on the stack

Code Snippet:
int secret_foo(void)
{
    int key = get_secret();
    /* use the key to do highly privileged stuff */
    ....
    /* Need to clear the value of key on the stack before exit */
    key = 0;
    /* Any half decent compiler would probably optimize out the statement above */
    /* How can I convince it not to do that? */
    return result;
}
I need to clear the value of a variable key from the stack before returning (as shown in the code).
In case you are curious, this was an actual customer requirement (embedded domain).
You can use volatile (emphasis mine):
Every access (both read and write) made through an lvalue expression of volatile-qualified type is considered an observable side effect for the purpose of optimization and is evaluated strictly according to the rules of the abstract machine (that is, all writes are completed at some time before the next sequence point). This means that within a single thread of execution, a volatile access cannot be optimized out or reordered relative to another visible side effect that is separated by a sequence point from the volatile access.
volatile int key = get_secret();
volatile might be overkill sometimes as it would also affect all the other uses of a variable.
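A minimal sketch of what that looks like in the function from the question (get_secret is stubbed out here and the privileged work is elided):
/* Stub standing in for the question's real secret source. */
static int get_secret(void) { return 1234; }

int secret_foo(void)
{
    volatile int key = get_secret();
    int result = 0;

    /* ... use key for the privileged work and compute result ... */

    /* A write to a volatile object is an observable side effect, so the
     * compiler must keep this store even though key is never read again. */
    key = 0;
    return result;
}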
Use memset_s (since C11): http://en.cppreference.com/w/c/string/byte/memset
memset may be optimized away (under the as-if rules) if the object modified by this function is not accessed again for the rest of its lifetime. For that reason, this function cannot be used to scrub memory (e.g. to fill an array that stored a password with zeroes). This optimization is prohibited for memset_s: it is guaranteed to perform the memory write.
int secret_foo(void)
{
    int key = get_secret();
    /* use the key to do highly privileged stuff */
    ....
    memset_s(&key, sizeof(int), 0, sizeof(int));
    return result;
}
You can find other solutions for various platforms/C standards here: https://www.securecoding.cert.org/confluence/display/c/MSC06-C.+Beware+of+compiler+optimizations
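Another approach you will often see (not quoted from those links, just a common pattern) is to call memset through a volatile function pointer, so the compiler cannot prove the call is removable; a sketch, with the secret passed in as a parameter for brevity:
#include <string.h>

/* The compiler cannot know what a volatile function pointer refers to at
 * the moment of the call, so it cannot treat the wipe below as dead code. */
static void *(*volatile secure_memset)(void *, int, size_t) = memset;

int use_secret(int secret)
{
    int key = secret;
    int result = 0;

    /* ... use key for the privileged work and compute result ... */

    secure_memset(&key, 0, sizeof key);
    return result;
}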
Addendum: have a look at this article Zeroing buffer is insufficient which points out other problems (besides zeroing the actual buffer):
With a bit of care and a cooperative compiler, we can zero a buffer — but that's not what we need. What we need to do is zero every location where sensitive data might be stored. Remember, the whole reason we had sensitive information in memory in the first place was so that we could use it; and that usage almost certainly resulted in sensitive data being copied onto the stack and into registers.
Your key value might have been copied into another location (like a register or temporary stack/memory location) by the compiler and you don't have any control to clear that location.
If you go with dynamic allocation you can control wiping that memory and not be bound by what the system does with the stack.
int secret_foo(void)
{
    int *key = malloc(sizeof(int));
    *key = get_secret();
    /* ... use the key to do highly privileged stuff ... */
    memset(key, 0, sizeof(int));   /* wipe the heap copy once it is no longer needed */
    free(key);
    return result;
}
One solution is to disable compiler optimizations for the section of the code that you don't want optimized:
int secret_foo(void) {
    int key = get_secret();
#pragma GCC push_options
#pragma GCC optimize ("O0")
    key = 0;
#pragma GCC pop_options
    return result;
}

Comparing a volatile array to a non-volatile array

Recently I needed to compare two uint arrays (one volatile and the other non-volatile) and the results were confusing; there has to be something I misunderstood about volatile arrays.
I need to read an array from an input device and write it to a local variable before comparing it to a global volatile array. If there is any difference I need to copy the new one onto the global one and publish the new array to other platforms. The code is something like below:
#define ARRAYLENGTH 30
volatile uint8 myArray[ARRAYLENGTH];

void myFunc(void){
    uint8 shadow_array[ARRAYLENGTH], change = 0;
    readInput(shadow_array);
    for(int i = 0; i < ARRAYLENGTH; i++){
        if(myArray[i] != shadow_array[i]){
            change = 1;
            myArray[i] = shadow_array[i];
        }
    }
    if(change){
        char arrayStr[ARRAYLENGTH*4];
        array2String(arrayStr, myArray);
        publish(arrayStr);
    }
}
However, this didn't work: every time myFunc runs, a new message is published, mostly identical to the earlier one.
So I inserted a log line into the code:
for(int i = 0; i < ARRAYLENGTH; i++){
    if(myArray[i] != shadow_array[i]){
        change = 1;
        log("old:%d,new:%d\r\n", myArray[i], shadow_array[i]);
        myArray[i] = shadow_array[i];
    }
}
The logs I got were as below:
old:0,new:0
old:8,new:8
old:87,new:87
...
Since fixing the bug was time-critical, I worked around the issue as below:
char arrayStr[ARRAYLENGTH*4];
char arrayStr1[ARRAYLENGTH*4];
array2String(arrayStr, myArray);
array2String(arrayStr1, shadow_array);
if(strCompare(arrayStr, arrayStr1)){
    publish(arrayStr1);
}
But this approach is far from efficient. If anyone has a reasonable explanation, I would like to hear it.
Thank you.
[updated from comments:]
For the volatile part, global array has to be volatile, since other threads are accessing it.
If the global array is volatile, your tracing code could be inaccurate:
for(int i = 0; i < ARRAYLENGTH; i++){
    if(myArray[i] != shadow_array[i]){
        change = 1;
        log("old:%d,new:%d\r\n", myArray[i], shadow_array[i]);
        myArray[i] = shadow_array[i];
    }
}
The trouble is that the comparison line reads myArray[i] once, but the logging message reads it again, and since it is volatile, there's no guarantee that the two reads will give the same value. An accurate logging technique would be:
for (int i = 0; i < ARRAYLENGTH; i++)
{
    uint8 value;
    if ((value = myArray[i]) != shadow_array[i])
    {
        change = 1;
        log("old:%d,new:%d\r\n", value, shadow_array[i]);
        myArray[i] = shadow_array[i];
    }
}
This copies the value used in the comparison and reports that. My gut feel is it is not going to show a difference, but in theory it could.
global array has to be volatile, since other threads are accessing it
As you "nicely" observe, declaring an array volatile is not the way to protect it against concurrent read/write access by different threads.
Use a mutex for this. For example by wrapping access to the "global array" into a function which locks and unlocks this mutex. Then only use this function to access the "global array".
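A minimal sketch of that wrapper idea, assuming POSIX threads and standard fixed-width types (the helper names are made up for the example):
#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define ARRAYLENGTH 30

static uint8_t sharedArray[ARRAYLENGTH];      /* no volatile needed now */
static pthread_mutex_t sharedArrayLock = PTHREAD_MUTEX_INITIALIZER;

/* All threads go through these helpers, so reads and writes of the shared
 * array are serialized by the mutex instead of relying on volatile. */
void writeSharedArray(const uint8_t *src)
{
    pthread_mutex_lock(&sharedArrayLock);
    memcpy(sharedArray, src, sizeof sharedArray);
    pthread_mutex_unlock(&sharedArrayLock);
}

void readSharedArray(uint8_t *dst)
{
    pthread_mutex_lock(&sharedArrayLock);
    memcpy(dst, sharedArray, sizeof sharedArray);
    pthread_mutex_unlock(&sharedArrayLock);
}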
References:
Why is volatile not considered useful in multithreaded C or C++ programming?
https://www.kernel.org/doc/Documentation/volatile-considered-harmful.txt
Also for printf()ing unsigned integers use the conversion specifier u not d.
A variable (or array) should be declared volatile when it may change outside the current program execution flow. This may happen through concurrent threads or an ISR.
If, however, there is only one writer and all others are just readers, then the code that actually writes may treat it as non-volatile (even though there is no way to tell the compiler so).
So if the comparison function is the only point in the project where the global array is actually changed (updated), then there is no problem with multiple reads. The code can be designed with the (external) knowledge that there will be no change by an external source, despite the volatile declaration.
The "readers", however, do know that the variable (or the array contents) may change and won't buffer their reads (e.g. by storing a read value in a register for further use); but the array contents may still change while they are reading them, and the whole information might be inconsistent.
So the suggested use of a mutex is a good idea.
It does not, however, help with the original problem that the comparison loop fails even though nobody is messing with the array from outside.
Also, I wonder why myArray is declared volatile if it is only used locally and the publishing is done by sending out a pointer to arrayStr (which is a pointer to a non-volatile char array).
There is no reason why myArray should be volatile. Actually, there is no reason for its existence at all:
Just read in the data, create a temporary string, and if it differs from the original one, replace the old string and publish it. It is maybe less efficient to always build the string, but it makes the code much shorter and apparently works.
static char arrayStr[ARRAYLENGTH*4] = {0};
char tempStr[ARRAYLENGTH*4];

array2String(tempStr, shadow_array);
if(strCompare(arrayStr, tempStr)){
    strCopy(arrayStr, tempStr);
    publish(arrayStr);
}

How to safely convert/copy volatile variable?

volatile char* sevensegment_char_value;

void ss_load_char(volatile char *digits) {
    ...
    int l = strlen(digits);
    ...
}

ss_load_char(sevensegment_char_value);
In the above example I got this warning from the avr-gcc compiler:
Warning 6 passing argument 1 of 'strlen' discards 'volatile' qualifier from pointer target type [enabled by default]
So do I have to somehow copy the value from the volatile variable to a non-volatile one? What is a safe workaround?
There is no such thing as a "built-in" workaround in C. volatile tells the compiler that the contents of a variable (or, in your case, the memory the variable is pointing at) can change without the compiler noticing it, and forces the compiler to read the data from memory each time rather than using a possibly existing copy in a register.
The volatile keyword is therefore used to avoid odd behaviour induced by compiler optimizations. (I can explain this further if you like.)
In your case, you have a character buffer declared as volatile. If your program changes the contents of this buffer in a different context, such as an ISR, you have to implement some sort of synchronisation mechanism (like disabling the particular interrupt) to avoid data inconsistency. After acquiring the "lock" (disabling the interrupt) you can copy the data byte by byte to a local (non-volatile) buffer and work on that buffer for the rest of the routine.
If the buffer will not change "outside" the context of your read accesses, I suggest omitting the volatile keyword, as there is no use for it.
To judge the correct solution, a little more information about your exact use case would be needed.
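For the avr-gcc case in the question, the lock-then-copy idea might look roughly like this. It is only a sketch: the buffer size, modelling sevensegment_char_value as a fixed-size array, and using cli()/sei() to block all interrupts (rather than just the relevant one) are all assumptions:
#include <avr/interrupt.h>   /* cli(), sei() */
#include <string.h>          /* strlen(), size_t */

#define BUF_SIZE 16

/* Shared with an ISR, hence volatile. */
volatile char sevensegment_char_value[BUF_SIZE];

void ss_load_char_safe(void)
{
    char local_copy[BUF_SIZE];
    size_t i;

    cli();                                    /* block interrupts while copying */
    for (i = 0; i < BUF_SIZE - 1; ++i)        /* copy byte by byte; memcpy would
                                                 discard the volatile qualifier */
        local_copy[i] = sevensegment_char_value[i];
    sei();                                    /* re-enable interrupts */
    local_copy[i] = '\0';                     /* ensure termination for strlen */

    size_t len = strlen(local_copy);          /* safe: plain, stable char buffer */
    (void)len;                                /* ... rest of the routine ... */
}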
Standard library routines aren't designed to work on volatile objects. The simplest solution is to read the volatile memory into normal memory before operating on it:
void ss_load_char(volatile char *digits) {
    char buf[BUFSIZE];
    int i = 0;
    for (i = 0; i < BUFSIZE; ++i) {
        buf[i] = digits[i];
    }
    int l = strlen(buf);
    ...
}
Here BUFSIZE is the size of the area of volatile memory.
Depending on how the volatile memory is configured, there may be routines you are supposed to call to copy out the contents, rather than just using a loop. Note that memcpy won't work as it is not designed to work with volatile memory.
The compiler warning only means that strlen() will not treat the data your pointer refers to as volatile, i.e. it may cache characters in a register while computing the length of your string. I guess that's OK with you.
In general, volatile means that the compiler will not cache the variable. Look at this example:
extern int flag;
while (flag) { /* loop*/ }
This would loop forever if flag != 0, since the compiler assumes that flag is not changed "from the outside", such as by a different thread. If you want to wait for input from some other thread, you must write this:
extern volatile int flag;
while (flag) { /* loop*/ }
Now the compiler will really read flag each time through the loop. This is much more like what we intended in this example.
In answer to your question: if you know what you're doing, just cast the volatile away with int l=strlen((char*)digits).

Why volatile works for setjmp/longjmp

After invoking longjmp(), non-volatile-qualified local objects should not be accessed if their values could have changed since the invocation of setjmp(). Their value in this case is considered indeterminate, and accessing them is undefined behavior.
Now my question is: why does volatile work in this situation? Wouldn't a change to that volatile variable still break things after the longjmp? For example, how will longjmp work correctly in the example given below? When the code gets back to setjmp after the longjmp, wouldn't the value of local_var be 2 instead of 1?
void some_function()
{
    volatile int local_var = 1;

    setjmp( buf );
    local_var = 2;
    longjmp( buf, 1 );
}
setjmp and longjmp clobber registers. If a variable is stored in a register, its value gets lost after a longjmp.
Conversely, if it's declared as volatile, then every time it gets written to, it gets stored back to memory, and every time it gets read from, it gets read back from memory every time. This hurts performance, because the compiler has to do more memory accesses instead of using a register, but it makes the variable's usage safe in the face of longjmping.
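A small self-contained example of why that matters (the retry counter is invented for illustration): because attempts is volatile, it reliably holds its latest value after each longjmp, whereas a plain local modified between setjmp and longjmp would be indeterminate:
#include <setjmp.h>
#include <stdio.h>

static jmp_buf buf;

int main(void)
{
    volatile int attempts = 0;   /* stored in memory, not just a register */

    if (setjmp(buf) != 0) {
        /* we land here after every longjmp */
    }
    attempts++;
    if (attempts < 3)
        longjmp(buf, 1);

    printf("tried %d times\n", attempts);   /* reliably prints 3 */
    return 0;
}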
The crux is in the optimization in this scenario: The optimizer would naturally expect that a call to a function like setjmp() does not change any local variables, and optimize away read accesses to the variable. Example:
int foo;
foo = 5;
if ( setjmp(buf) != 2 ) {
    if ( foo != 5 ) { optimize_me(); longjmp(buf, 2); }
    foo = 6;
    longjmp( buf, 1 );
    return 1;
}
return 0;
An optimizer can optimize away the optimize_me line because foo has been written in line 2, does not need to be read in line 4, and can be assumed to be 5. Additionally, the assignment in line 5 can be removed because foo would never be read again if longjmp were a normal C function. However, setjmp() and longjmp() disturb the code flow in a way the optimizer cannot account for, breaking this scheme. The correct result of this code would be termination; with the line optimized away, we get an endless loop.
The most common reason for problems in the absence of a 'volatile' qualifier is that compilers will often place local variables into registers. These registers will almost certainly be used for other things between the setjmp and longjmp. The most practical way to ensure that the use of these registers for other purposes won't cause the variables to hold the wrong values after the longjmp is to cache the values of those registers in the jmp_buf. This works, but has the side effect that there is no way for the compiler to update the contents of the jmp_buf to reflect changes made to the variables after the registers are cached.
If that were the only problem, the result of accessing local variables not declared volatile would be indeterminate, but not undefined behavior. There's a problem even with memory variables, though, which thiton alludes to: even if a local variable happens to be allocated on the stack, a compiler would be free to overwrite that variable with something else any time it determines that its value is no longer needed. For example, a compiler could identify that some variables are never 'live' when a routine calls other routines, place those variables shallowest in its stack frame, and pop them before calling other routines. In such a scenario, even though the variables existed in memory when setjmp() was called, that memory might have been reused for something else, like holding a return address. As such, after the longjmp() is performed, the memory would be considered uninitialized.
Adding a 'volatile' qualifier to a variable's definition causes storage to be reserved exclusively for the use of that variable, for as long as it is within scope. No matter what happens between the setjmp and longjmp, provided control has not left the scope where the variable was declared, nothing is allowed to use that location for any other purpose.

Is function call a memory barrier?

Consider this C code:
extern volatile int hardware_reg;

void f(const void *src, size_t len)
{
    void *dst = <something>;

    hardware_reg = 1;
    memcpy(dst, src, len);
    hardware_reg = 0;
}
The memcpy() call must occur between the two assignments. In general, since the compiler probably doesn't know what will the called function do, it can't reorder the call to the function to be before or after the assignments. However, in this case the compiler knows what the function will do (and could even insert an inline built-in substitute), and it can deduce that memcpy() could never access hardware_reg. Here it appears to me that the compiler would see no trouble in moving the memcpy() call, if it wanted to do so.
So, the question: is a function call alone enough to issue a memory barrier that would prevent reordering, or is, otherwise, an explicit memory barrier needed in this case before and after the call to memcpy()?
Please correct me if I am misunderstanding things.
The compiler cannot reorder the memcpy() operation before the hardware_reg = 1 or after the hardware_reg = 0 - that's what volatile will ensure - at least as far as the instruction stream the compiler emits. A function call is not necessarily a 'memory barrier', but it is a sequence point.
The C99 standard says this about volatile (5.1.2.3/5 "Program execution"):
At sequence points, volatile objects are stable in the sense that previous accesses are complete and subsequent accesses have not yet occurred.
So at the sequence point represented by the memcpy(), the volatile access of writing 1 has to have occurred, and the volatile access of writing 0 cannot have occurred yet.
However, there are 2 things I'd like to point out:
Depending on what <something> is, if nothing else is done with the destination buffer, the compiler might be able to completely remove the memcpy() operation. This is the reason Microsoft came up with the SecureZeroMemory() function: SecureZeroMemory() operates on volatile-qualified pointers to prevent the writes from being optimized away.
volatile doesn't necessarily imply a memory barrier (which is a hardware thing, not just a code ordering thing), so if you're running on a multi-proc machine or certain types of hardware you may need to explicitly invoke a memory barrier (perhaps wmb() on Linux).
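If an explicit barrier is needed, a GCC-flavoured sketch might look like this; device_buffer stands in for <something> from the question, and whether the hardware barrier is actually required depends on the platform:
#include <stddef.h>
#include <string.h>

extern volatile int hardware_reg;
extern unsigned char device_buffer[];     /* stand-in for <something> */

void f(const void *src, size_t len)
{
    hardware_reg = 1;
    __asm__ volatile ("" ::: "memory");   /* compiler barrier: no memory access
                                             may be moved across this point */
    memcpy(device_buffer, src, len);
    __sync_synchronize();                 /* GCC builtin: full hardware memory
                                             barrier, if the device needs one */
    hardware_reg = 0;
}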
Starting with MSVC 8 (VS 2005), Microsoft documents that the volatile keyword implies the appropriate memory barrier, so a separate specific memory barrier call may not be necessary:
http://msdn.microsoft.com/en-us/library/12a04hfd.aspx
Also, when optimizing, the compiler must maintain ordering among references to volatile objects as well as references to other global objects. In particular,
A write to a volatile object (volatile write) has Release semantics; a reference to a global or static object that occurs before a write to a volatile object in the instruction sequence will occur before that volatile write in the compiled binary.
A read of a volatile object (volatile read) has Acquire semantics; a reference to a global or static object that occurs after a read of volatile memory in the instruction sequence will occur after that volatile read in the compiled binary.
As far as I can see your reasoning leading to
the compiler would see no trouble in moving the memcpy call
is correct. Your question is not answered by the language definition, and can only be addressed with reference to specific compilers.
Sorry to not have any more-useful information.
My assumption would be that the compiler never re-orders volatile assignments since it has to assume they must be executed at exactly the position where they occur in the code.
It's probably going to get optimized anyway, either because the compiler inlines the memcpy call and eliminates the first assignment, or because it gets compiled to RISC code or machine code and gets optimized there.
Here is a slightly modified example, compiled with gcc 7.2.1 on x86-64:
#include <string.h>

static int temp;
extern volatile int hardware_reg;

int foo (int x)
{
    hardware_reg = 0;
    memcpy(&temp, &x, sizeof(int));
    hardware_reg = 1;
    return temp;
}
gcc knows that the memcpy() is the same as an assignment, and knows that temp is not accessed anywhere else, so temp and the memcpy() disappear completely from the generated code:
foo:
movl $0, hardware_reg(%rip)
movl %edi, %eax
movl $1, hardware_reg(%rip)
ret
