Embedded C - Volatile qualifier does not seem to matter in my interrupt routine

I am new to embedded C, and I recently watched some videos about the volatile qualifier. They all mention the same scenarios for using it:
when reading or writing a variable in an ISR (interrupt service routine)
in an RTOS application or with multiple threads (not my case)
for memory-mapped I/O (also not my case)
My question is why my code does not get stuck in the whiletest() function below: when my UART receives data and triggers the void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart) interrupt callback, the loop exits even though test is not declared volatile.
int test;
int main(void)
{
test = 0;
MX_GPIO_Init();
MX_USART1_UART_Init();
HAL_UART_Receive_IT(&huart1, (uint8_t *)&ch, 1);
while (1)
{
Delay(500);
printf("the main is runing\r\n");
whiletest();
}
}
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
if(huart->Instance == USART1)
{
test = 1;
HAL_UART_Receive_IT(&huart1, (uint8_t *)&ch, 1);
}
}
void whiletest(void)
{
int count =0;
while(!test){
count++;
printf("%d\r\n",count);
Delay(2000);
}
}
I use the Keil IDE and STM32CubeIDE. I learned that the compiler can optimize some load instructions away at the -O2 or -O3 optimization level, so I built with -O2, but it seems to have no effect on my code: the compiler does not optimize the load away in the while loop and cache the value 0 for test in main, as the videos on YouTube teach. This is confusing. In what situations am I supposed to use the volatile qualifier while keeping my code optimized (-O2 or -O3)?
Note: I am using an STM32H743ZI (Cortex-M7).

volatile informs the compiler that the object is prone to side effects: it can be changed by something outside the normal program execution path.
Because you never call the interrupt routine directly, the compiler is free to assume that test never becomes 1. You need to tell it (that is what volatile does) that the variable may change anyway.
Example:
volatile int test;
void interruptHandler(void)
{
test = 1;
}
void foo(void)
{
while(!test);
LED_On();
}
The compiler knows that test can change behind its back, so it reads it on every iteration of the while loop:
foo:
push {r4, lr}
ldr r2, .L10
.L6:
ldr r3, [r2] //compiler reads the value of the test from the memory as it knows that it can change.
cmp r3, #0
beq .L6
bl LED_On
pop {r4, lr}
bx lr
.L10:
.word .LANCHOR0
test:
Without volatile, the compiler assumes that test will always be zero:
foo:
ldr r3, .L10
ldr r3, [r3]
cmp r3, #0
bne .L6
.L7:
b .L7 //dead loop here
.L6:
push {r4, lr}
bl LED_On
pop {r4, lr}
bx lr
.L10:
.word .LANCHOR0
test:
In your code you have to use volatile whenever an object can be changed by something outside the normal program execution path.
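Applied to the question's code, a minimal sketch of the fix is just the declaration:
volatile int test;   /* every access in whiletest() now performs a real load, even at -O2/-O3 */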

The compiler may only optimize (change) code if the optimized code behaves as if the optimizer did nothing.
In your case you call two functions (Delay and printf) inside your while loop. The compiler has no visibility into what these functions do, since they live in a separate translation unit, so it must assume they may change the value of the global variable test and therefore cannot optimize away the re-read of test. Remove the function calls and the compiler may well optimize the check away.
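For example, stripping the calls out of the question's loop gives the compiler the freedom the videos describe (a sketch, assuming test stays a plain non-volatile int):
int test;                  /* still not volatile */

void whiletest(void)
{
    int count = 0;
    while (!test) {        /* nothing opaque in the body, so the load of test may be hoisted out of the loop at -O2, making it infinite */
        count++;
    }
}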

Related

if statement, function evaluation & compiler optimization

Just a quick question, to save me from testing things (although I really should test things to be absolutely certain):
Given the following C code:
r1 = fun1();
r2 = fun2();
if (r1 && r2)
{
// do something
}
Variables r1 and r2 are not being used anywhere else in the code, except in the if (...) statement. Will both functions be evaluated? I'm worried that the compiler may optimize the code by eliminating r1 and r2, thus making it look like this:
if (fun1() && fun2())
{
// do something
}
In this case, fun1() will be evaluated first, and if it returns FALSE, then fun2() will not be evaluated at all. This is not what I want, and that's the reason why I'm coding it as in the first code segment.
How can I guarantee that a function will always be evaluated? I thought that this could be done by assigning it to a variable, but I'm concerned about compiler optimization if it sees that this variable is never actually being used later on in the code...
I know this can be achieved by declaring r1 and r2 as volatile, but I'd like to know if there's a more elegant solution.
Any comments on this issue are greatly appreciated, thanks!
Edit: Thanks to all who replied. I've just used my first code snippet on my project (it's an ARM Cortex-M7-based embedded system). It appears that the compiler does not optimize the code in the way I showed above, and both fun1() and fun2() are evaluated (as they should). Furthermore, compiling the code with r1 and r2 declared as volatile produces exactly the same binary output as when r1 and r2 are just normal variables (i.e., the volatile keyword doesn't change the compiler output at all). This reassures me that the first code snippet is in fact a guaranteed way of evaluating both functions prior to processing the if (...) statement that follows.
Assuming the code does not exhibit any undefined behavior, compilers can only perform optimizations that have the same externally viewable behavior as the unoptimized code.
In your example, the two pieces of code do two different things. Specifically, one always calls fun2 while the other calls it conditionally. So you don't need to worry about the first piece of code doing the wrong thing.
The calls will not be optimized out unless their results can be computed at compile time:
void foo()
{
int r1 = fun1();
int r2 = fun2();
if (r1 && r2)
{
func3();
}
}
int fun3() {return 1;}
int fun4() {func();return 0;}
void bar()
{
int r1 = fun3();
int r2 = fun4();
if (r1 && r2)
{
func3();
}
}
foo:
push {r4, lr}
bl fun1
mov r4, r0
bl fun2
cmp r4, #0
cmpne r0, #0
popeq {r4, pc}
pop {r4, lr}
b func3
fun3:
mov r0, #1
bx lr
fun4:
push {r4, lr}
bl func
mov r0, #0
pop {r4, pc}
bar:
b func

Can an x86_64 and/or armv7-m mov instruction be interrupted mid-operation?

I am wondering whether I need to use an atomic type or volatile (or nothing special) for an interrupt counter:
uint32_t uptime = 0;
// interrupt each 1 ms
ISR()
{
// this is the only location which writes to uptime
++uptime;
}
void some_func()
{
uint32_t now = uptime;
}
I would think that volatile should be enough to guarantee error-free operation and consistency (the value increments until it overflows).
But it has come to my mind that maybe a mov instruction could be interrupted mid-operation while moving/setting individual bits. Is that possible on x86_64 and/or ARMv7-M?
For example, the mov instruction could begin to execute, set 16 bits, then be pre-empted; the ISR would run, increasing uptime by one (and maybe changing all the bits), and then the mov instruction would continue. I cannot find any material that reassures me about the working order.
Would this also be the same on ARMv7-M?
Would using sig_atomic_t be the correct solution to always get an error-free and consistent result, or would it be overkill?
For example, the ARMv7-M architecture specifies:
In ARMv7-M, the single-copy atomic processor accesses are:
• All byte accesses.
• All halfword accesses to halfword-aligned locations.
• All word accesses to word-aligned locations.
Would an assert with &uptime % 8 == 0 be sufficient to guarantee this?
Use volatile. Your compiler does not know about interrupts. It may assume that ISR() is never called (do you call ISR anywhere in your code?). That would mean uptime never increments, so uptime would always be zero, so uint32_t now = uptime; could safely be optimized to uint32_t now = 0;. Use volatile uint32_t uptime; that way the optimizer will not optimize uptime away.
Word size. A uint32_t variable is 4 bytes. On a 32-bit processor it takes one instruction to fetch its value, but on an 8-bit processor it takes at least four instructions (in general). So on a 32-bit processor you do not need to disable interrupts before loading uptime, because the interrupt routine starts executing either before or after the current instruction completes; the processor cannot branch to an interrupt routine mid-instruction. On an 8-bit processor we do need to disable interrupts before reading uptime, like:
DisableInterrupts();
uint32_t now = uptime;
EnableInterrupts();
C11 atomic types. I have never seen real embedded code that uses them; I am still waiting, and I see volatile everywhere. Their behavior depends on your compiler, because the compiler implements the atomic types and the atomic_* functions. Are you 100% sure that when reading from an _Atomic variable your compiler will disable the ISR() interrupt? Inspect the assembly generated for the atomic_* calls and you will know for sure. This was a good read. I would expect the C11 atomic types to work for concurrency between multiple threads, which can switch execution context at any time. Using them between interrupt and normal context may block your CPU, because once you are in an IRQ you only return to normal execution after servicing that IRQ: if some_func takes a mutex to read uptime and then the IRQ fires and spins waiting for that mutex, the result is an endless loop.
See for example the HAL_GetTick() implementation from the STM32 HAL (with the __weak macro removed and the __IO macro substituted by volatile; those macros are defined in the CMSIS headers):
static volatile uint32_t uwTick;
void HAL_IncTick(void)
{
uwTick++;
}
uint32_t HAL_GetTick(void)
{
return uwTick;
}
Typically HAL_IncTick() is called from systick interrupt each 1ms.
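As a hedged usage sketch (not from the original answer), a busy-wait built on HAL_GetTick() shows why uwTick must be volatile: each call performs a real load, so the loop observes the increments made by the SysTick interrupt.
void wait_ms(uint32_t ms)
{
    uint32_t start = HAL_GetTick();
    while ((HAL_GetTick() - start) < ms) {
        /* spin; unsigned subtraction stays correct even across uwTick wraparound */
    }
}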
You have to read the documentation for each separate core and/or chip. x86 is a completely separate thing from ARM, and within both families each instance may vary from any other; each one can be, and should be expected to be, a completely new design. It might not be, but from time to time it is.
Things to watch out for as noted in the comments.
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
++uptime;
}
void some_func ( void )
{
uint32_t now = uptime;
}
On my machine with the tool I am using today:
Disassembly of section .text:
00000000 <ISR>:
0: e59f200c ldr r2, [pc, #12] ; 14 <ISR+0x14>
4: e5923000 ldr r3, [r2]
8: e2833001 add r3, r3, #1
c: e5823000 str r3, [r2]
10: e12fff1e bx lr
14: 00000000 andeq r0, r0, r0
00000018 <some_func>:
18: e12fff1e bx lr
Disassembly of section .bss:
00000000 <uptime>:
0: 00000000 andeq r0, r0, r0
This could vary, but if you find a tool on one machine one day that builds a problem, you can assume it is a problem. So far we are actually okay: because some_func is dead code, the read is optimized out.
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
++uptime;
}
uint32_t some_func ( void )
{
uint32_t now = uptime;
return(now);
}
fixed
00000000 <ISR>:
0: e59f200c ldr r2, [pc, #12] ; 14 <ISR+0x14>
4: e5923000 ldr r3, [r2]
8: e2833001 add r3, r3, #1
c: e5823000 str r3, [r2]
10: e12fff1e bx lr
14: 00000000 andeq r0, r0, r0
00000018 <some_func>:
18: e59f3004 ldr r3, [pc, #4] ; 24 <some_func+0xc>
1c: e5930000 ldr r0, [r3]
20: e12fff1e bx lr
24: 00000000 andeq r0, r0, r0
Because cores like MIPS and ARM tend to raise data aborts for unaligned accesses by default, we might assume the tool will not generate an unaligned address for such a clean definition. Packed structs are another story, though: there you have told the compiler to generate an unaligned access, and it will. If you want to feel safe, remember that a "word" in ARM is 32 bits, so you can assert that the address of the variable ANDed with 3 is zero.
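A minimal sketch of that packed-struct hazard (illustrative, not from the original answer):
#include <stdint.h>

struct __attribute__((packed)) sample {
    uint8_t  tag;
    uint32_t value;   /* sits at offset 1: not word-aligned, so accesses may be split or may fault */
};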
On x86 one would also assume that a clean definition like that results in an aligned variable, but x86 does not fault on unaligned data by default, so compilers there are a bit more free. I am focusing on ARM as I think that is your question.
Now if I do this:
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
if(uptime)
{
uptime=uptime+1;
}
else
{
uptime=uptime+5;
}
}
uint32_t some_func ( void )
{
uint32_t now = uptime;
return(now);
}
00000000 <ISR>:
0: e59f2014 ldr r2, [pc, #20] ; 1c <ISR+0x1c>
4: e5923000 ldr r3, [r2]
8: e3530000 cmp r3, #0
c: 03a03005 moveq r3, #5
10: 12833001 addne r3, r3, #1
14: e5823000 str r3, [r2]
18: e12fff1e bx lr
1c: 00000000 andeq r0, r0, r0
and adding volatile
00000000 <ISR>:
0: e59f3018 ldr r3, [pc, #24] ; 20 <ISR+0x20>
4: e5932000 ldr r2, [r3]
8: e3520000 cmp r2, #0
c: e5932000 ldr r2, [r3]
10: 12822001 addne r2, r2, #1
14: 02822005 addeq r2, r2, #5
18: e5832000 str r2, [r3]
1c: e12fff1e bx lr
20: 00000000 andeq r0, r0, r0
The two reads in the source result in two reads in the output. Now, there would be a problem here if the read-modify-write could get interrupted, but we assume that since this is the ISR it cannot. If you read a 7, add 1 and write back an 8, and something that also modifies uptime interrupted you after the read, that other modification has a limited life: it happens, say a 5 is written, and then this ISR writes an 8 on top of it.
If a read-modify-write were in the interruptible code, the ISR could get into the middle of it and it probably would not work the way you wanted. That is two readers and two writers; you want one party responsible for writing a shared resource and the others read-only. Otherwise you need a lot more machinery that is not built into the language.
Note on an arm machine:
typedef int __sig_atomic_t;
...
typedef __sig_atomic_t sig_atomic_t;
so
typedef unsigned int uint32_t;
typedef int sig_atomic_t;
volatile sig_atomic_t uptime = 0;
void ISR ( void )
{
if(uptime)
{
uptime=uptime+1;
}
else
{
uptime=uptime+5;
}
}
uint32_t some_func ( void )
{
uint32_t now = uptime;
return(now);
}
That is not going to change the result, at least not on that system with that typedef. You need to examine other C libraries and/or sandbox headers to see what they define, or whether (it happens often) the wrong headers are used, e.g. the x86_64 headers used to build ARM programs with the cross compiler; I have seen gcc and llvm make host-vs-target mistakes.
Going back to a concern which, based on your comments, you appear to already understand:
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
if(uptime)
{
uptime=uptime+1;
}
else
{
uptime=uptime+5;
}
}
void some_func ( void )
{
while(uptime&1) continue;
}
This was pointed out in the comments. Even though you have one writer and one reader, the non-volatile version compiles to:
00000020 <some_func>:
20: e59f3018 ldr r3, [pc, #24] ; 40 <some_func+0x20>
24: e5933000 ldr r3, [r3]
28: e2033001 and r3, r3, #1
2c: e3530000 cmp r3, #0
30: 012fff1e bxeq lr
34: e3530000 cmp r3, #0
38: 012fff1e bxeq lr
3c: eafffffa b 2c <some_func+0xc>
40: 00000000 andeq r0, r0, r0
It never goes back to read the variable from memory, and unless someone corrupts the register in an event handler, this can be an infinite loop.
make uptime volatile:
00000024 <some_func>:
24: e59f200c ldr r2, [pc, #12] ; 38 <some_func+0x14>
28: e5923000 ldr r3, [r2]
2c: e3130001 tst r3, #1
30: 012fff1e bxeq lr
34: eafffffb b 28 <some_func+0x4>
38: 00000000 andeq r0, r0, r0
Now the reader does a real read every time.
The same issue shows up here, with no loop and no volatile:
00000020 <some_func>:
20: e59f302c ldr r3, [pc, #44] ; 54 <some_func+0x34>
24: e5930000 ldr r0, [r3]
28: e3500005 cmp r0, #5
2c: 0a000004 beq 44 <some_func+0x24>
30: e3500004 cmp r0, #4
34: 0a000004 beq 4c <some_func+0x2c>
38: e3500001 cmp r0, #1
3c: 03a00006 moveq r0, #6
40: e12fff1e bx lr
44: e3a00003 mov r0, #3
48: e12fff1e bx lr
4c: e3a00007 mov r0, #7
50: e12fff1e bx lr
54: 00000000 andeq r0, r0, r0
uptime can have changed between the tests; volatile fixes this.
So volatile is not a universal solution. Using each variable for one-way communication is ideal: if you need to communicate the other way, use a separate variable, with one writer and one or more readers per variable (a minimal sketch follows).
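A minimal sketch of that one-writer-per-variable pattern (names are illustrative):
volatile uint32_t from_isr;   /* written only by the ISR, read by foreground code */
volatile uint32_t to_isr;     /* written only by foreground code, read by the ISR */

void ISR(void)                /* sole writer of from_isr */
{
    from_isr++;
}

void foreground(void)         /* sole writer of to_isr */
{
    to_isr = from_isr & 1;
}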
You have done the right thing and consulted the documentation for your chip/core.
So if the access is aligned (in this case a 32-bit word) AND the compiler chooses the right instruction, an interrupt will not split the transaction. If it is an LDM/STM, though, you should read the documentation (push and pop are also LDM/STM pseudo-instructions): on some cores/architectures those can be interrupted and restarted, and the ARM documentation warns about those situations.
Short answer: add volatile, arrange for only one writer per variable, and keep the variable aligned (and read the docs each time you change chips/cores, and periodically disassemble to check that the compiler is doing what you asked). It does not matter if it is the same core type (another Cortex-M3) from the same vendor or a different vendor, or a completely different core/chip (AVR, MSP430, PIC, x86, MIPS, etc.): start from zero, get the docs, read them, and check the compiler output.
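And a minimal sketch of the alignment check mentioned above (a word on ARM is 32 bits, so the low two address bits must be zero):
#include <assert.h>
#include <stdint.h>

volatile uint32_t uptime;

void check_uptime_alignment(void)
{
    assert(((uintptr_t)&uptime & 3u) == 0u);   /* word-aligned, so loads/stores are single-copy atomic */
}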
TL:DR: Use volatile if an aligned uint32_t is naturally atomic (it is on x86 and ARM). Why is integer assignment on a naturally aligned variable atomic on x86?. Your code will technically have C11 undefined behaviour, but real implementations will do what you want with volatile.
Or use C11 stdatomic.h with memory_order_relaxed if you want to tell the compiler exactly what you mean. It will compile to the same asm as volatile on x86 and ARM if you use it correctly.
(But if you actually need it to run efficiently on single-core CPUs where load/store of an aligned uint32_t isn't atomic "for free", e.g. with only 8-bit registers, you might rather disable interrupts instead of having stdatomic fall back to using a lock to serialize reads and writes of your counter.)
Whole instructions are always atomic with respect to interrupts on the same core, on all CPU architectures. Partially-completed instructions are either completed or discarded (without committing their stores) before servicing an interrupt.
For a single core, CPUs always preserve the illusion of running instructions one at a time, in program order. This includes interrupts only happening on the boundaries between instructions. See #supercat's single-core answer on Can num++ be atomic for 'int num'?. If the machine has 32-bit registers, you can safely assume that a volatile uint32_t will be loaded or stored with a single instruction. As #old_timer points out, beware of unaligned packed-struct members on ARM, but unless you manually do that with __attribute__((packed)) or something, the normal ABIs on x86 and ARM ensure natural alignment.
Multiple bus transactions from a single instruction for unaligned operands or narrow busses only matters for concurrent read+write, either from another core or a non-CPU hardware device. (e.g. if you're storing to device memory).
Some long-running x86 instructions like rep movs or vpgatherdd have well-defined ways to partially complete on exceptions or interrupts: they update registers so that re-running the instruction does the right thing. But other than that, an instruction has either run or it hasn't, even a "complex" instruction like a memory-destination add that does a read/modify/write. I don't know if anyone has ever proposed a CPU that could suspend/resume multi-step instructions across interrupts instead of cancelling them, but x86 and ARM are definitely not like that. There are lots of weird ideas in computer-architecture research papers, but it seems unlikely to be worth keeping all the microarchitectural state needed to resume in the middle of a partially-executed instruction instead of just re-decoding it after returning from the interrupt.
This is why AVX2 / AVX512 gathers always need a gather mask even when you want to gather all the elements, and why they destroy the mask (so you have to reset it to all-ones again before the next gather).
In your case, you only need the store (and load outside the ISR) to be atomic. You don't need the whole ++uptime to be atomic. You can express this with C11 stdatomic like this:
#include <stdint.h>
#include <stdatomic.h>
_Atomic uint32_t uptime = 0;
// interrupt each 1 ms
void ISR()
{
// this is the only location which writes to uptime
uint32_t tmp = atomic_load_explicit(&uptime, memory_order_relaxed);
// the load doesn't even need to be atomic, but relaxed atomic is as cheap as volatile on machines with wide-enough loads
atomic_store_explicit(&uptime, tmp+1, memory_order_relaxed);
// some x86 compilers may fail to optimize to add dword [uptime],1
// but uptime+=1 would compile to LOCK ADD (an atomic increment), which you don't want.
}
// MODIFIED: return the load result
uint32_t some_func()
{
// this does need to be an atomic load
// you typically get that by default with volatile, too
uint32_t now = atomic_load_explicit(&uptime, memory_order_relaxed);
return now;
}
volatile uint32_t compiles to the exact same asm on x86 and ARM. I put the code on the Godbolt compiler explorer. This is what clang6.0 -O3 does for x86-64. (With -mtune=bdver2, it uses inc instead of add, but it knows that memory-destination inc is one of the few cases where inc is still worse than add on Intel :)
ISR: # #ISR
add dword ptr [rip + uptime], 1
ret
some_func: # #some_func
mov eax, dword ptr [rip + uptime]
ret
inc_volatile: // void func(){ volatile_var++; }
add dword ptr [rip + volatile_var], 1
ret
gcc uses separate load/store instructions for both volatile and _Atomic, unfortunately.
# gcc8.1 -O3
mov eax, DWORD PTR uptime[rip]
add eax, 1
mov DWORD PTR uptime[rip], eax
At least that means there's no downside to using _Atomic or volatile _Atomic on either gcc or clang.
Plain uint32_t without either qualifier is not a real option, at least not for the read side. You probably don't want the compiler to hoist get_time() out of a loop and use the same time for every iteration. In cases where you do want that, you could copy it to a local. That could result in extra work for no benefit if the compiler doesn't keep it in a register, though (e.g. across function calls it's easiest for the compiler to just reload from static storage). On ARM, though, copying to a local may actually help because then it can reference it relative to the stack pointer instead of needing to keep a static address in another register, or regenerate the address. (x86 can load from static addresses with a single large instruction, thanks to its variable-length instruction set.)
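As a hedged sketch of that "copy to a local" case (handle_item is a hypothetical helper, not part of the original answer; uptime is the _Atomic counter from the example above):
void handle_item(int i, uint32_t timestamp);   /* hypothetical consumer */

void process_batch(void)
{
    /* one snapshot of the counter for the whole batch */
    uint32_t t = atomic_load_explicit(&uptime, memory_order_relaxed);
    for (int i = 0; i < 8; i++) {
        handle_item(i, t);                     /* every item sees the same timestamp */
    }
}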
If you want any stronger memory-ordering, you can use atomic_signal_fence(memory_order_release); or whatever (signal_fence not thread_fence) to tell the compiler you only care about ordering wrt. code running asynchronously on the same CPU ("in the same thread" like a signal handler), so it will only have to block compile-time reordering, not emit any memory-barrier instructions like ARM dmb.
e.g. in the ISR:
uint32_t tmp = atomic_load_explicit(&idx, memory_order_relaxed);
tmp++;
shared_buf[tmp] = 2; // non-atomic
// Then do a release-store of the index
atomic_signal_fence(memory_order_release);
atomic_store_explicit(&idx, tmp, memory_order_relaxed);
Then it's safe for a reader to load idx, run atomic_signal_fence(memory_order_acquire);, and read from shared_buf[tmp] even if shared_buf is not _Atomic. (Assuming you took care of wraparound issues and so on.)
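A hedged sketch of that reader side (assuming idx is the _Atomic index and shared_buf the plain array used in the ISR example above; wraparound handling omitted):
uint32_t read_latest(void)
{
    uint32_t i = atomic_load_explicit(&idx, memory_order_relaxed);
    atomic_signal_fence(memory_order_acquire);   /* compile-time ordering only, no dmb emitted */
    return shared_buf[i];                        /* plain (non-atomic) load is fine after the fence */
}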
volatile is a directive to the compiler about how the value must be accessed: without it, the value may be cached in a CPU register; with volatile, every read and write goes to memory. That is the main rule.
Then let's look at the architecture. Native CPU instructions on native-width types are atomic. Many operations are still split into steps, for example when a value is copied from memory to memory, and a CPU interrupt can arrive in between; don't worry, that is normal. If the value has not yet been stored into the destination variable, you can treat the operation as not yet committed.
The problem arises when you use a type wider than the CPU implements, for example a 32-bit value on a 16- or 8-bit processor. Then reading and writing the value is split into several steps, so part of the value may have been stored while another part has not, and you can read a corrupted value.
In that scenario disabling interrupts is not always a good approach, because it can take a long time; you can use locking instead, but that has the same cost.
Instead you can build a structure whose first field is the data and whose second field is a counter sized to suit the architecture. To read the value, first read the counter, then the value, then the counter a second time; if the two counter reads differ, repeat the process.
Of course this does not guarantee that everything is correct, but it typically saves a lot of CPU cycles. For example, with a 16-bit verification counter there are 65536 possible values, so to fool the check the reading process would have to be frozen across roughly 65536 missed interrupts between the two counter reads.
And of course, if you use a 32-bit value on a 32-bit architecture this is not a problem and you do not need to protect the operation specially, unless the architecture does not make such accesses atomic.
Example code:
#include <stdint.h>

struct
{
    volatile uint32_t value;    // the important value
    volatile uint32_t watchdog; // guards the value; volatile so the two reads in Read_uptime are not merged
} SecuredCounter;

uint32_t Read_uptime(void);     // forward declaration; defined below

void ISR(void)
{
    // this is the only location which writes to the counter
    ++SecuredCounter.value;
    ++SecuredCounter.watchdog;
}

void some_func(void)
{
    uint32_t now = Read_uptime();
}

uint32_t Read_uptime(void)
{
    while (1) {
        uint32_t secure1 = SecuredCounter.watchdog; // read the watchdog first
        uint32_t value   = SecuredCounter.value;    // read the value
        uint32_t secure2 = SecuredCounter.watchdog; // read the watchdog again; it should match the first read
        if (secure1 == secure2) return value;       // no interrupt hit between the reads, so the copy is consistent
    }
}
A different approach is to keep two identical counters and increment both in a single function. In the read function you copy both values to local variables and compare them. If they are identical, the value is good and you return one of them; if they differ, you repeat the read. Don't worry: if the values differ, your read function was interrupted, and the chance of that happening again on the repeated read is very small, so there is no realistic chance of a stalled loop.
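A hedged sketch of that two-counter variant (names are illustrative, not from the original post):
volatile uint32_t uptime_a, uptime_b;

void ISR(void)
{
    ++uptime_a;            /* the ISR is the only writer of both copies */
    ++uptime_b;
}

uint32_t read_uptime(void)
{
    uint32_t a, b;
    do {
        a = uptime_a;
        b = uptime_b;      /* if the ISR ran between the two reads, they differ */
    } while (a != b);
    return a;
}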

Kernel breaks on adding new code (that never runs)

I'm trying to add some logic at boundaries between userspace and kernelspace particularly on the ARM architecture.
One such boundary appears to be the vector_swi routine implemented in arch/arm/kernel/entry-common.S. Right now, I have most of my code written in a C function which I would like to call somewhere at the start of vector_swi.
Thus, I did the following:
ENTRY(vector_swi)
sub sp, sp, #S_FRAME_SIZE
stmia sp, {r0 - r12} # Calling r0 - r12
ARM( add r8, sp, #S_PC )
ARM( stmdb r8, {sp, lr}^ ) # Calling sp, lr
THUMB( mov r8, sp )
THUMB( store_user_sp_lr r8, r10, S_SP ) # calling sp, lr
mrs r8, spsr # called from non-FIQ mode, so ok.
str lr, [sp, #S_PC] # Save calling PC
str r8, [sp, #S_PSR] # Save CPSR
str r0, [sp, #S_OLD_R0] # Save OLD_R0
zero_fp
#ifdef CONFIG_BTM_BOUNDARIES
bl btm_entering_kernelspace # <--- My function
#endif
When the contents of my function are as follows everything works fine:
static int btm_enabled = 0;
asmlinkage inline void btm_entering_kernelspace(void)
{
int cpu;
int freq;
struct acpu_level *level;
if(!btm_enabled) {
return;
}
cpu = smp_processor_id();
freq = acpuclk_krait_get_rate(cpu);
(void) cpu;
(void) freq;
(void) level;
}
However, when I add some additional code, the kernel enters into a crash-reboot loop.
static int btm_enabled = 0;
asmlinkage inline void btm_entering_kernelspace(void)
{
int cpu;
int freq;
struct acpu_level *level;
if(!btm_enabled) {
return;
}
cpu = smp_processor_id();
freq = acpuclk_krait_get_rate(cpu);
(void) cpu;
(void) freq;
(void) level;
// --------- Added code ----------
for (level = drv.acpu_freq_tbl; level->speed.khz != 0; level++) {
if(level->speed.khz == freq) {
break;
}
}
}
Although the first instinct is to blame the logic of the added code, please note that none of it should ever execute since btm_enabled is 0.
I have double-checked and triple-checked to make sure btm_enabled is 0 by adding a sysfs entry to print out the value of the variable (with the added code removed).
Could someone explain what is going on here or what I'm doing wrong?
The first version will probably compile to just a return instruction as it has no side effect. The second needs to load btm_enabled and in the process overwrites one or two system call arguments.
When calling a C function from assembly language you need to ensure that registers that may be modified do not contain needed information.
To solve your specific problem, you could update your code to read:
#ifdef CONFIG_BTM_BOUNDARIES
stmdb sp!, {r0-r3, r12, lr} # <--- New instruction
bl btm_entering_kernelspace # <--- My function
ldmia sp!, {r0-r3, r12, lr} # <--- New instruction
#endif
The new instructions store registers r0-r3, r12 and lr on the stack and restore them after your function call. These are the only registers a C function is allowed to modify. Saving r12 is not strictly necessary here since its value is not used, but doing so keeps the stack 8-byte aligned as required by the ABI.

ARM Cortex-M3 crashed with --use_frame_pointer caused by FreeRTOS's TimerTask

Our current project includes FreeRTOS, and I added --use_frame_pointer to Keil uVision's ARMCC compiler options. But after loading the firmware onto the STM32F104 chip and running it, it crashed. Without --use_frame_pointer, everything is OK.
The hard fault handler shows that faultStackAddress is 0x40FFFFDC, which points to a reserved area. Does anyone have any idea about this error? Thanks a lot.
#if defined(__CC_ARM)
__asm void HardFault_Handler(void)
{
TST lr, #4
ITE EQ
MRSEQ r0, MSP
MRSNE r0, PSP
B __cpp(Hard_Fault_Handler)
}
#else
void HardFault_Handler(void)
{
__asm("TST lr, #4");
__asm("ITE EQ");
__asm("MRSEQ r0, MSP");
__asm("MRSNE r0, PSP");
__asm("B Hard_Fault_Handler");
}
#endif
void Hard_Fault_Handler(uint32_t *faultStackAddress)
{
}
I stepped through each line of code, and the crash happened in the function below (from FreeRTOS's port.c) after I called vTaskDelete(NULL):
void vPortYieldFromISR( void )
{
/* Set a PendSV to request a context switch. */
portNVIC_INT_CTRL_REG = portNVIC_PENDSVSET_BIT;
}
But it seems this is not the root cause, because when I removed vTaskDelete(NULL), the crash still happened.
[update on Jan 8] sample code
#include "FreeRTOSConfig.h"
#include "FreeRTOS.h"
#include "task.h"
#include <stm32f10x.h>
void crashTask(void *param)
{
unsigned int i = 0;
/* halt the hardware. */
while(1)
{
i += 1;
}
vTaskDelete(NULL);
}
void testCrashTask()
{
xTaskCreate(crashTask, (const signed char *)"crashTask", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
}
void Hard_Fault_Handler(unsigned int *faultStackAddress);
/* The fault handler implementation calls a function called Hard_Fault_Handler(). */
#if defined(__CC_ARM)
__asm void HardFault_Handler(void)
{
TST lr, #4
ITE EQ
MRSEQ r0, MSP
MRSNE r0, PSP
B __cpp(Hard_Fault_Handler)
}
#else
void HardFault_Handler(void)
{
__asm("TST lr, #4");
__asm("ITE EQ");
__asm("MRSEQ r0, MSP");
__asm("MRSNE r0, PSP");
__asm("B Hard_Fault_Handler");
}
#endif
void Hard_Fault_Handler(unsigned int *faultStackAddress)
{
int i = 0;
while(1)
{
i += 1;
}
}
void nvicInit(void)
{
NVIC_PriorityGroupConfig(NVIC_PriorityGroup_4);
#ifdef VECT_TAB_RAM
NVIC_SetVectorTable(NVIC_VectTab_RAM, 0x0);
#else
NVIC_SetVectorTable(NVIC_VectTab_FLASH, 0x0);
#endif
}
int main()
{
nvicInit();
testCrashTask();
vTaskStartScheduler();
}
/* For now, the stack depth of IDLE has 88 left. if want add func to here,
you should increase it. */
void vApplicationIdleHook(void)
{ /* ATTENTION: all funcs called within here, must not be blocked */
//workerProbe();
}
void debugSendTraceInfo(unsigned int taskNbr)
{
}
When the crash happened, in HardFault_Handler, the Keil MDK IDE reported fault information flagging STKERR, which mainly means that the stack pointer is corrupted. But I really have no idea why it is corrupted. Without --use_frame_pointer, everything works OK.
[update on Jan 13]
I did further investigation. It seems the crash is caused by FreeRTOS's default timer task: if I comment out the xTimerCreateTimerTask() call in the vTaskStartScheduler() function (tasks.c), the crash does not happen.
Another odd thing is that if I debug it, step into the timer task's portYIELD_WITHIN_API() call and then resume the application, it does not crash. So my guess is that this is due to some timing sequence, but I could not find the root cause.
Any thoughts? Thanks.
I ran into a similar problem in my project. It looks like armcc with --use_frame_pointer tends to generate broken function epilogues. An example of generated code:
; function prologue
stmdb sp!, {r3, r4, r5, r6, r7, r8, r9, r10, r11, lr}
add.w r11, sp, #36
; ... actual function code ...
; function epilogue
mov sp, r11
; <--- imagine an interrupt happening here
sub sp, #36
ldmia.w sp!, {r3, r4, r5, r6, r7, r8, r9, r10, r11, pc}
This code actually seems to break the constraint from AAPCS section 5.2.1.1:
A process may only access (for reading or writing) the closed interval of the entire stack delimited by [SP, stack-base – 1] (where SP is the value of register r13).
Now, on Cortex-M3, when an exception/interrupt arrives, partial register set is automatically pushed onto the current process' stack before jumping into the exception handler. If an exception is raised between the mov and sub, that partial register set will overwrite the registers stored by the function prologue's stmdb instruction, thus corrupting the state of the caller function.
Unfortunately, there doesn't seem to be any easy solution. None of the optimization settings seem to fix this code, even though it looks like it could easily be fixed (coerced into sub sp, r11, #36). It seems that --use_frame_pointer is too broken to work on Cortex-M3 with multi-threaded code, at least on ARMCC 5.05u1; I didn't have the chance to check other versions.
If using a different compiler is an option for you, arm-none-eabi-gcc -fno-omit-frame-pointer seems to emit saner function epilogues, though.

Is there any gcc compiler primitive for "svc"?

I'm writing a program that runs on a Cortex-M3.
At first I wrote an assembly file which executes 'svc'.
svc:
svc 0
bx lr
I decided to use gcc's inline asm, so I wrote it as follows, but the svc function was not inlined.
__attribute__((naked))
int svc(int no, ...)
{
(void)no;
asm("svc 0\n\tbx lr");
}
int f() {
return svc(0,1,2);
}
------------------ generated assembly ------------------
svc:
svc 0
bx lr
f:
mov r0, #0
mov r1, #1
mov r2, #2
b svc
I guessed it was not inlined because it is naked, so I dropped the naked attribute and wrote it like this:
int svc(int __no, ...)
{
register int no asm("r0") = __no;
register int ret asm("r0");
asm("svc 0" : "=r"(ret) : "r"(no));
return ret;
}
------------------ generated assembly ------------------
svc:
stmfd sp!, {r0, r1, r2, r3}
ldr r0, [sp]
add sp, sp, #16
svc 0
bx lr
f:
mov r0, #0 // missing instructions setting r1 and r2
svc 0
bx lr
Although I don't know why gcc adds some unnecessary stack operations, the svc itself is good. The problem is that svc is not inlined properly: the variadic parameters are dropped.
Is there any svc primitive in gcc? If gcc does not have one, how do I write the right one?
Have a look at the syntax that is used in core_cmFunc.h which is supplied as part of the ARM CMSIS for the Cortex-M family. Here's an example that writes a value to the Priority Mask Register:
__attribute__ ((always_inline)) static inline void __set_PRIMASK(uint32_t priMask)
{
__ASM volatile ("MSR primask, %0"::"r" (priMask));
}
However, creating a variadic function like this sounds difficult.
You can use a macro like this.
#define __svc(sNum) __asm volatile("SVC %0" ::"M" (sNum))
And use it just like any compiler-primitive function, __svc(2);.
Since it is just a macro, it will only generate the provided instruction.
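If the SVC call also needs to pass arguments, one hedged sketch (not from the original answers) is an always_inline wrapper that pins each argument to the register the AAPCS assigns it, which sidesteps the variadic problem entirely:
__attribute__((always_inline)) static inline int svc_call1(int arg)
{
    register int r0 __asm__("r0") = arg;                 /* first argument goes in r0 */
    __asm__ volatile("svc #0" : "+r"(r0) : : "memory");  /* r0 also carries the return value */
    return r0;
}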
