How __asm__ __volatile__ ("" : : : "memory") works - C

What does __asm__ __volatile__ () basically do, and what is the significance of "memory" for the ARM architecture?

asm volatile("" ::: "memory");
creates a compiler-level memory barrier, forcing the optimizer not to reorder memory accesses across the barrier.
For example, if you need to access some addresses in a specific order (probably because that memory area is actually backed by a device rather than plain memory), you need to be able to tell this to the compiler; otherwise it may reorder your steps for the sake of efficiency.
Assume in this scenario you must increment a value at an address, read something, and increment another value at an adjacent address.
int c(int *d, int *e) {
int r;
d[0] += 1;
r = e[0];
d[1] += 1;
return r;
}
The problem is that the compiler (gcc in this case) can rearrange your memory accesses for better performance if you ask it to (-O), probably leading to a sequence of instructions like the one below:
00000000 <c>:
0: 4603 mov r3, r0
2: c805 ldmia r0, {r0, r2}
4: 3001 adds r0, #1
6: 3201 adds r2, #1
8: 6018 str r0, [r3, #0]
a: 6808 ldr r0, [r1, #0]
c: 605a str r2, [r3, #4]
e: 4770 bx lr
Above, the values for d[0] and d[1] are loaded at the same time. Let's assume this is something you want to avoid; then you need to tell the compiler not to reorder memory accesses, and that is where asm volatile("" ::: "memory") comes in.
int c(int *d, int *e) {
int r;
d[0] += 1;
r = e[0];
asm volatile("" ::: "memory");
d[1] += 1;
return r;
}
So you'll get your instruction sequence as you want it to be:
00000000 <c>:
0: 6802 ldr r2, [r0, #0]
2: 4603 mov r3, r0
4: 3201 adds r2, #1
6: 6002 str r2, [r0, #0]
8: 6808 ldr r0, [r1, #0]
a: 685a ldr r2, [r3, #4]
c: 3201 adds r2, #1
e: 605a str r2, [r3, #4]
10: 4770 bx lr
12: bf00 nop
It should be noted that this is only a compile-time memory barrier that keeps the compiler from reordering memory accesses; it emits no extra hardware-level instructions to flush caches or wait for loads or stores to complete. CPUs can still reorder memory accesses if they have the architectural capability and the memory addresses are of Normal type instead of Strongly-ordered or Device (ref).

This sequence is a compiler memory access scheduling barrier, as noted in the article referenced by Udo. This one is GCC specific - other compilers have other ways of describing them, some of them with more explicit (and less esoteric) statements.
__asm__ is a gcc extension permitting assembly language statements to be nested within your C code - used here for its ability to specify side effects that prevent the compiler from performing certain types of optimisations (which in this case might end up generating incorrect code).
__volatile__ is required to ensure that the asm statement itself is not reordered with any other volatile accesses (a guarantee in the C language).
memory is an instruction to GCC that (sort of) says that the inline asm sequence has side effects on global memory, and hence not just the effects on local variables need to be taken into account.

The meaning is explained here:
http://en.wikipedia.org/wiki/Memory_ordering
Basically, it implies that the assembly code will be executed where you expect it. It tells the compiler not to reorder instructions around it: what is coded before this piece of code will execute before it, and what is coded after will execute after it.

static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags;
asm volatile(
" mrs %0, cpsr # arch_local_irq_save\n"
" cpsid i" //disabled irq
: "=r" (flags) : : "memory", "cc");
return flags;
}

Related

ARM Thumb GCC Disassembled C. Caller-saved registers not saved and loading and storing same register immediately

Context: STM32F469 Cortex-M4 (ARMv7-M Thumb-2), Win 10, GCC, STM32CubeIDE; Learning/Trying out inline assembly & reading disassembly, stack managements etc., writing to core registers, observing contents of registers, examining RAM around stack pointer to understand how things work.
I've noticed that at some point, when I call a function that received an argument, the instructions generated at the beginning of the called function do "store R3 at RAM address X" followed immediately by "read RAM address X back into R3". So it's writing and then reading back the same value; R3 is not changed. If it only wanted to save the value of R3 onto the stack, why load it back?
C code, caller function (main), my code:
asm volatile(" LDR R0,=#0x00000000\n"
" LDR R1,=#0x11111111\n"
" LDR R2,=#0x22222222\n"
" LDR R3,=#0x33333333\n"
" LDR R4,=#0x44444444\n"
" LDR R5,=#0x55555555\n"
" LDR R6,=#0x66666666\n"
" MOV R7,R7\n" //Stack pointer value is here, used for stack data access
" LDR R8,=#0x88888888\n"
" LDR R9,=#0x99999999\n"
" LDR R10,=#0xAAAAAAAA\n"
" LDR R11,=#0xBBBBBBBB\n"
" LDR R12,=#0xCCCCCCCC\n"
);
testInt = addFifteen(testInt); //testInt=0x03; returns uint8_t, argument uint8_t
The function call generates instructions to load the function argument into R3, then move it to R0, then branch with link to addFifteen. So by the time I enter addFifteen, R0 and R3 both hold the value 0x03 (testInt). So far so good. Here is what the function call looks like:
testInt = addFifteen(testInt);
08000272: ldrb r3, [r7, #11]
08000274: mov r0, r3
08000276: bl 0x80001f0 <addFifteen>
So I go into addFifteen, my C code for addFifteen:
uint8_t addFifteen(uint8_t input){
return (input + 15U);
}
and its disassembly:
addFifteen:
080001f0: push {r7}
080001f2: sub sp, #12
080001f4: add r7, sp, #0
080001f6: mov r3, r0
080001f8: strb r3, [r7, #7]
080001fa: ldrb r3, [r7, #7]
080001fc: adds r3, #15
080001fe: uxtb r3, r3
08000200: mov r0, r3
08000202: adds r7, #12
08000204: mov sp, r7
08000206: ldr.w r7, [sp], #4
0800020a: bx lr
My primary interest is in lines 1f8 and 1fa. They store R3 on the stack and then load the freshly written value back into the register, which still holds that value anyway.
Questions are:
What is the purpose of this "store register A into RAM X, then read the value from RAM X back into register A"? The read instruction doesn't seem to serve any purpose. Is it to make sure the RAM write is complete?
The push {r7} instruction leaves the stack 4-byte aligned instead of 8-byte aligned, but immediately after that instruction SP is decremented by 12 bytes, so it becomes 8-byte aligned again; therefore this behavior is OK. Is this statement correct? What if an interrupt happens between these two instructions? Will alignment be fixed during exception stacking for the duration of the ISR?
From what I've read about caller/callee-saved registers (it is very hard to find well-organized information on this; if you have good material, please share a link), at least R0-R3 must be placed on the stack when I call a function. However, it's easy to see that in this case NONE of the registers were pushed onto the stack, and I verified it by checking the memory around the stack pointer; it would have been easy to spot 0x11111111 and 0x22222222, but they aren't there, and nothing pushes them there. The values in R0 and R3 that I had before calling the function are simply gone forever. I would expect R3 to hold 0x33333333 when addFifteen returns, because that's what it held before the call, but that value is casually overwritten even before the branch to addFifteen. Why didn't GCC generate instructions to push R0-R3 onto the stack before branching with link to addFifteen?
If you need some compiler settings, please, let me know where to find them in Eclipse (STM32CubeIDE) and what exactly you need there, I will happily provide them and add them to the question here.
uint8_t addFifteen(uint8_t input){
return (input + 15U);
}
What you are looking at here is unoptimized code, and at least with gnu, the input and local variables get a memory location on the stack.
00000000 <addFifteen>:
0: b480 push {r7}
2: b083 sub sp, #12
4: af00 add r7, sp, #0
6: 4603 mov r3, r0
8: 71fb strb r3, [r7, #7]
a: 79fb ldrb r3, [r7, #7]
c: 330f adds r3, #15
e: b2db uxtb r3, r3
10: 4618 mov r0, r3
12: 370c adds r7, #12
14: 46bd mov sp, r7
16: bc80 pop {r7}
18: 4770 bx lr
What you see with r3 is that the input variable, input, comes in r0. Since the code is not optimized, it goes into r3 and is then saved to its memory location on the stack.
Setup the stack
00000000 <addFifteen>:
0: b480 push {r7}
2: b083 sub sp, #12
4: af00 add r7, sp, #0
save input to the stack
6: 4603 mov r3, r0
8: 71fb strb r3, [r7, #7]
Now we can start implementing the body of the function, which wants to do math on the input, so do that math:
a: 79fb ldrb r3, [r7, #7]
c: 330f adds r3, #15
Convert the result to an unsigned char.
e: b2db uxtb r3, r3
Now prepare the return value
10: 4618 mov r0, r3
and clean up and return
12: 370c adds r7, #12
14: 46bd mov sp, r7
16: bc80 pop {r7}
18: 4770 bx lr
Now, if I tell it not to use a frame pointer (just a waste of a register):
00000000 <addFifteen>:
0: b082 sub sp, #8
2: 4603 mov r3, r0
4: f88d 3007 strb.w r3, [sp, #7]
8: f89d 3007 ldrb.w r3, [sp, #7]
c: 330f adds r3, #15
e: b2db uxtb r3, r3
10: 4618 mov r0, r3
12: b002 add sp, #8
14: 4770 bx lr
And you can still see each of the fundamental steps in implementing the function. Unoptimized.
Now if you optimize
00000000 <addFifteen>:
0: 300f adds r0, #15
2: b2c0 uxtb r0, r0
4: 4770 bx lr
It removes all the excess.
Number two:
Yes, I agree this looks wrong, but gnu certainly does not keep the stack aligned at all times. I have not read the details of the arm calling convention, nor gcc's interpretation of it. Granted, they may claim to follow a spec, but at the end of the day the compiler authors choose the calling convention for their compiler; they are under no obligation to arm or intel or anyone else to conform to any spec. It is their choice, and like the C language itself, there are lots of places that are implementation defined, where gnu implements things one way and others another. Perhaps this is the same. The same goes for this saving of the incoming variable to the stack; we will see below what llvm/clang does.
Number three:
r0-r3 and another register or two may be called "caller saved", but the better way to think of them is as volatile. The callee is free to modify them without saving them. It is not so much a case of saving the r0 register; rather, r0 represents a variable, and you are managing that variable while functionally implementing the high-level code.
For example
unsigned int fun1 ( void );
unsigned int fun0 ( unsigned int x )
{
return(fun1()+x);
}
00000000 <fun0>:
0: b510 push {r4, lr}
2: 4604 mov r4, r0
4: f7ff fffe bl 0 <fun1>
8: 4420 add r0, r4
a: bd10 pop {r4, pc}
x comes in r0, and we need to preserve its value until after fun1() is called. r0 can be destroyed/modified by fun1(), so in this case the compiler saves r4, not r0, and keeps x in r4.
clang does this as well
00000000 <fun0>:
0: b5d0 push {r4, r6, r7, lr}
2: af02 add r7, sp, #8
4: 4604 mov r4, r0
6: f7ff fffe bl 0 <fun1>
a: 1900 adds r0, r0, r4
c: bdd0 pop {r4, r6, r7, pc}
Back to your function.
clang, unoptimized also keeps the input variable in memory (stack).
00000000 <addFifteen>:
0: b081 sub sp, #4
2: f88d 0003 strb.w r0, [sp, #3]
6: f89d 0003 ldrb.w r0, [sp, #3]
a: 300f adds r0, #15
c: b2c0 uxtb r0, r0
e: b001 add sp, #4
10: 4770 bx lr
And you can see the same steps: prep the stack, store the input variable, fetch the input variable and do the math, prepare the return value, clean up, and return.
Clang/llvm optimized:
00000000 <addFifteen>:
0: 300f adds r0, #15
2: b2c0 uxtb r0, r0
4: 4770 bx lr
Happens to be the same as gnu. There is no expectation that two different compilers generate the same code, nor that two versions of the same compiler do.
Unoptimized, the input and local variables (none in this case) get a home on the stack. So what you are seeing is the input variable being put into its home on the stack as part of the setup of the function. Then the function itself wants to operate on that variable, so, unoptimized, it needs to fetch the value from memory to create an intermediate variable (which in this case did not get a home on the stack), and so on. You see this with volatile variables as well: they get written to memory, read back, modified, written to memory, read back, etc.
Yes, I agree, but I have not read the specs. At the end of the day it is gcc's calling convention, or its interpretation of whatever spec they choose to use. They have been doing this (not being aligned 100% of the time) for a long time and it does not fail. For all called functions, the stack is aligned when the function is called; interrupts in arm code generated by gcc are not aligned all the time. It has been this way since they adopted that spec.
By definition r0-r3, etc. are volatile. The callee can modify them at will and only needs to save/preserve them if it needs them. In both the unoptimized and optimized cases, only r0 matters for your function: it is the input variable and it is used for the return value. You saw in the function I created that the input variable was preserved for later, even when optimized. But, by definition, the caller assumes these registers are destroyed by called functions, and called functions can destroy their contents with no need to save them.
As far as inline assembly goes, it is a different assembly language than "real" assembly language. I think you have a ways to go before being ready for it, but maybe not. After decades of constant bare-metal work I have found zero real use cases for inline assembly; the cases I see are laziness, avoiding letting real assembly into the make system, or other ways of avoiding writing real assembly language. I see it as a gee-whiz feature that folks use like unions and bitfields.
Within gnu, for arm, you have at least four incompatible assembly languages: non-unified-syntax real assembly, unified-syntax real assembly, the assembly language you see when you use gcc to assemble instead of as, and then inline assembly for gcc. Despite claims of compatibility, clang arm assembly language is not 100% compatible with gnu assembly language, and llvm/clang does not have a separate assembler; you feed assembly to the compiler. Arm's own toolchains over the years have had assembly languages completely incompatible with gnu's for arm. This is all expected and normal: assembly language is specific to the tool, not the target.
Before you get into inline assembly, learn some of the real assembly language. And to be fair, perhaps you already know it, perhaps quite well, and this question is about discovering how compilers generate code, and how strange it looks once you find out that it is not a one-to-one thing (that all tools in all cases generate the same output from the same input).
For inline asm, while you can specify registers, depending on what you are doing you generally want to let the compiler choose them. Most of the work in inline assembly is not the assembly itself but the interface language the specific compiler uses, which is compiler specific; move to another compiler and expect a whole new language to learn. Moving between assemblers is also a whole new language, but at least the syntax of the instructions themselves tends to be the same, and the differences are in everything else: labels, directives, and such. If you are lucky and it is a toolchain, not just an assembler, you can look at the output of the compiler to start to understand the language and compare it to whatever documentation you can find. Gnu's documentation is pretty bad in this case, so a lot of reverse engineering is needed. At the same time, you are more likely to be successful with the gnu tools than any other, not because they are better (in many cases they are not), but because of the sheer user base and the common features across targets and decades of history.
I would get really good at interfacing asm with C by creating mock C functions to see which registers are used, etc. Or, even better, implement it in C, compile it, then hand modify/improve the output of the compiler (you do not need to be a guru to beat the compiler; to be as consistent, perhaps, but fairly often you can easily see improvements to be made in the output of gcc, and gcc has been getting worse over the last several versions, not better, as you can see from time to time on this site). Get strong in the asm for this toolchain and target and in how the compiler works, and then perhaps learn the gnu inline assembly language.
I'm not sure there is a specific purpose to it. It is just one solution the compiler has found.
For example the code:
unsigned int f(unsigned int a)
{
return sqrt(a + 1);
}
compiles with ARM GCC 9 (arm-none-eabi) at optimisation level -O0 to:
push {r7, lr}
sub sp, sp, #8
add r7, sp, #0
str r0, [r7, #4]
ldr r3, [r7, #4]
adds r3, r3, #1
mov r0, r3
bl __aeabi_ui2d
mov r2, r0
mov r3, r1
mov r0, r2
mov r1, r3
bl sqrt
...
and in level -O1 to:
push {r3, lr}
adds r0, r0, #1
bl __aeabi_ui2d
bl sqrt
...
As you can see, the asm is much easier to understand at -O1: the parameter is already in R0, add 1, call the functions.
The hardware supports a non-aligned stack during exception entry. See here.
The "caller saved" registers do not necessarily need to be stored on the stack; it's up to the caller to know whether it needs to save them or not.
Here you are mixing (if I understood correctly) C and assembly, so you have to do the compiler's job before switching back to C: either you keep values in callee-saved registers (and then you know by convention that the compiler will preserve them across function calls) or you store them on the stack yourself.

Cortex M3, STM32, thumb2: My inc and dec operations are not atomic, but should be. What's wrong here?

I need a thread-safe idx++ and idx-- operation.
Disabling interrupts, i.e. using critical sections, is one option, but I want
to understand why my operations are not atomic, as I expected.
Here is the C-code with inline assembler code shown, using segger ozone:
(Also, please notice that the addresses of the variables show the 32-bit variable is 32-bit aligned in memory, and the 8- and 16-bit variables are both 16-bit aligned.)
volatile static U8 dbgIdx8 = 1000U;
volatile static U16 dbgIdx16 = 1000U;
volatile static U32 dbgIdx32 = 1000U;
dbgIdx8 ++;
080058BE LDR R3, [PC, #48]
080058C0 LDRB R3, [R3]
080058C2 UXTB R3, R3
080058C4 ADDS R3, #1
080058C6 UXTB R2, R3
080058C8 LDR R3, [PC, #36]
080058CA STRB R2, [R3]
dbgIdx16 ++;
080058CC LDR R3, [PC, #36]
080058CE LDRH R3, [R3]
080058D0 UXTH R3, R3
080058D2 ADDS R3, #1
080058D4 UXTH R2, R3
080058D6 LDR R3, [PC, #28]
080058D8 STRH R2, [R3]
dbgIdx32 ++;
080058DA LDR R3, [PC, #28]
080058DC LDR R3, [R3]
080058DE ADDS R3, #1
080058E0 LDR R2, [PC, #20]
080058E2 STR R3, [R2]
There is no guarantee that ++ and -- are atomic. If you need guaranteed atomicity, you will have to find some other way.
As @StaceyGirl points out in a comment, you might be able to use the facilities of <stdatomic.h>. For example, there is an atomic_fetch_add function, which acts like the postfix ++ you're striving for. There's an atomic_fetch_sub, too.
Alternatively, you might have some compiler intrinsics available to you for performing an atomic increment in some processor-specific way.
ARM Cortex cores cannot modify memory in place. All memory modifications are performed as RMW (read-modify-write) sequences, which are not atomic by default.
But Cortex-M3 has special instructions to lock access to a memory location: LDREX & STREX. https://developer.arm.com/documentation/100235/0004/the-cortex-m33-instruction-set/memory-access-instructions/ldaex-and-stlex
You can use them directly from C code, without touching assembly, via intrinsics.
Do not use data types narrower than 32 bits in any performance-sensitive code (you want to hold the exclusive access for as short a time as possible); operations on shorter data types add extra instructions.

Does using the `__irq` specified for a function pointer declaration do anything?

I have to do a bit of embedded programming for a project and am learning by looking at some other projects. I found the following code that declares the vector table:
typedef void (*const vect_t)(void) __irq;
vect_t vector_table[]
__attribute__ ((section("vectors"))) = {
(vect_t) (RAM_BASE + RAM_SIZE),
(vect_t) Reset_Handler,
// ...
};
The reset handler is declared as follows:
void Reset_Handler(void) {
// ... nothing interesting
}
I read up on __irq and the ARM compiler docs state the following:
The compiler generates function entry and exit sequences suitable for
use in an interrupt handler when this attribute is present.
I'm guessing that vect_t is supposed to be a pointer to void functions that take no arguments, that are suitable to be used as interrupt handlers. This seems strange to me, as __irq should just be a compiler hint for the implementation, but not something that contributes to the type of a function (like arguments or return type do).
My assumption is that __irq should have been used on Reset_Handler (and on all other interrupt handlers) and not in the type definition. Is this correct?
Please note that I am not asking what __irq does. I understand that this is not part of the C standard and that it is an ARM compiler extension. I also understand that the code that is produced when using it depends on the CPU architecture.
Generally speaking, interrupt service routines (ISRs) use different instructions for returning. A normal function just uses a "return from subroutine" instruction, which pops the stack according to the calling convention. ISRs, however, are not called by the program but by hardware, so they often have a different calling convention. In order to generate these special instructions correctly, you need some non-standard interrupt syntax.
The code is an interrupt vector table, so the type definition is correct. However, if the ISR is declared as a plain function without any special keywords, void Reset_Handler(void), then this won't work. The incorrect cast (vect_t) Reset_Handler will ensure that this function is called upon interrupt, but it will not return from the interrupt correctly - likely crashing.
My assumption is that __irq should have been used on Reset_Handler (and on all other interrupt handlers) and not in the type definition. Is this correct?
It should be in the vector table and in the ISR function definition both.
Using gcc, for example (attributes/directives/pragmas etc. are specific to a tool, not to the C language):
struct interrupt_frame;
__attribute__ ((interrupt))
void x (struct interrupt_frame *frame)
{
}
void y ( void )
{
}
Using a generic aarch32 type arm target:
Disassembly of section .text:
00000000 <x>:
0: e25ef004 subs pc, lr, #4
00000004 <y>:
4: e12fff1e bx lr
Now let's complicate this further
struct interrupt_frame;
unsigned int k;
__attribute__ ((interrupt))
void x (struct interrupt_frame *frame)
{
k=5;
}
void y ( void )
{
k=5;
}
00000000 <x>:
0: e92d000c push {r2, r3}
4: e3a02005 mov r2, #5
8: e59f3008 ldr r3, [pc, #8] ; 18 <x+0x18>
c: e5832000 str r2, [r3]
10: e8bd000c pop {r2, r3}
14: e25ef004 subs pc, lr, #4
0000001c <y>:
1c: e3a02005 mov r2, #5
20: e59f3004 ldr r3, [pc, #4] ; 2c <y+0x10>
24: e5832000 str r2, [r3]
28: e12fff1e bx lr
For an interrupt you need to preserve all the registers; for a regular function the calling convention dictates which registers are volatile within the function. So with this example you can see the primary reason for the directive: preserve the state, and use the specific return-from-interrupt instruction.
The cortex-m architectures (armv6-m, 7-m and 8-m) were designed so that you could put C functions directly in the vector table without wrapping any asm around them (the hardware takes care of both preserving state and the special return), so the compiler generates code the same way; basically the attribute has no effect on that target:
00000000 <x>:
0: 2205 movs r2, #5
2: 4b01 ldr r3, [pc, #4] ; (8 <x+0x8>)
4: 601a str r2, [r3, #0]
6: 4770 bx lr
0000000c <y>:
c: 2205 movs r2, #5
e: 4b01 ldr r3, [pc, #4] ; (14 <y+0x8>)
10: 601a str r2, [r3, #0]
12: 4770 bx lr
And the last note is that you do not return from the reset vector, so there is no reason on cortex-m to even bother with an attribute/directive like this for the reset vector. On no architecture should you return from the reset vector if it is truly a bare-metal vector table (versus using the same scheme for a general application entry point sitting on an os, which is not bare metal, or a bootloader calling this code, in which case you can certainly return).
Other architectures do not tend to lump reset in with the "interrupts" or "exceptions"; reset is reset. ARM docs and code tend to treat it like any other exception, and as a result you still have to think of it differently.

Conversion from uint64_t to double

For an STM32F7, which includes instructions for double-precision floating point, I want to convert a uint64_t to a double.
In order to test that, I used the following code:
volatile static uint64_t m_testU64 = 45uLL * 0xFFFFFFFFuLL;
volatile static double m_testD;
#ifndef DO_NOT_USE_UL2D
m_testD = (double)m_testU64;
#else
double t = (double)(uint32_t)(m_testU64 >> 32u);
t *= 4294967296.0;
t += (double)(uint32_t)(m_testU64 & 0xFFFFFFFFu);
m_testD = t;
#endif
By default (if DO_NOT_USE_UL2D is not defined) the compiler (gcc or clang) calls the function __aeabi_ul2d(), which is fairly complex in terms of executed instructions. See the assembly code here: https://github.com/gcc-mirror/gcc/blob/master/libgcc/config/arm/ieee754-df.S#L537
For my particular example, it takes 20 instructions without entering most of the branches.
If DO_NOT_USE_UL2D is defined, the compiler generates the following assembly code:
movw r0, #1728 ; 0x6c0
vldr d2, [pc, #112] ; 0x303fa0
movt r0, #8192 ; 0x2000
vldr s0, [r0, #4]
ldr r1, [r0, #0]
vcvt.f64.u32 d0, s0
vldr s2, [r0]
vcvt.f64.u32 d1, s2
ldr r1, [r0, #4]
vfma.f64 d1, d0, d2
vstr d1, [r0, #8]
The code is simpler, and it is only 10 instructions.
So here are the questions (if DO_NOT_USE_UL2D is defined):
Is my code (in C) correct?
Is my code slower than the __aeabi_ul2d() function? (Not really important, but I'm a bit curious.)
I have to do this, since I am not allowed to use functions from libgcc (there are very good reasons for that...).
Be aware that the main purpose of this question is not performance; I am really curious about the implementation in libgcc, and I really want to know if there is something wrong in my code.

Can an x86_64 and/or armv7-m mov instruction be interrupted mid-operation?

I am wondering whether I need to use an atomic type or volatile (or nothing special) for an interrupt counter:
uint32_t uptime = 0;
// interrupt each 1 ms
ISR()
{
// this is the only location which writes to uptime
++uptime;
}
void some_func()
{
uint32_t now = uptime;
}
I would think that volatile should be enough to guarantee error-free operation and consistency (an incrementing value until overflow).
But it has come to my mind that maybe a mov instruction could be interrupted mid-operation while moving/setting individual bits. Is that possible on x86_64 and/or armv7-m?
For example, the mov instruction would begin to execute, set 16 bits, then be pre-empted; the ISR would run, increasing uptime by one (and maybe changing all the bits); and then the mov instruction would continue. I cannot find any material that assures me of the working order.
Would this also be the same on armv7-m?
Would using sig_atomic_t be the correct solution to always get an error-free and consistent result, or would it be "overkill"?
For example, the ARMv7-M architecture specifies:
In ARMv7-M, the single-copy atomic processor accesses are:
• All byte accesses.
• All halfword accesses to halfword-aligned locations.
• All word accesses to word-aligned locations.
would an assert with &uptime % 8 == 0 be sufficient to guarantee this?
Use volatile. Your compiler does not know about interrupts. It may assume that the ISR() function is never called (is there a call to ISR() anywhere in your code?). That would mean uptime never increments, which would mean uptime is always zero, which would mean uint32_t now = uptime; may safely be optimized to uint32_t now = 0;. Use volatile uint32_t uptime; that way the optimizer will not optimize uptime away.
Word size. A uint32_t variable occupies 4 bytes, so on a 32-bit processor it takes one instruction to fetch its value, but on an 8-bit processor it takes at least 4 instructions (in general). So on a 32-bit processor you don't need to disable interrupts before loading the value of uptime, because an interrupt routine starts executing either before or after the current instruction; the processor can't branch to an interrupt routine mid-instruction. On an 8-bit processor we do need to disable interrupts around the read of uptime, like:
DisableInterrupts();
uint32_t now = uptime;
EnableInterrupts();
C11 atomic types. I have never seen real embedded code that uses them; I am still waiting, as I see volatile everywhere. This is dependent on your compiler, because the compiler implements the atomic types and the atomic_* functions. Are you 100% sure that when reading from an atomic variable your compiler will disable the ISR() interrupt? Inspect the assembly output generated from the atomic_* calls and you will know for sure. This was a good read. I would expect the C11 atomic types to work for concurrency between multiple threads, which can switch execution context at any time. Using them between interrupt and normal context may lock up your cpu: once you are in an IRQ, you get back to normal execution only after servicing that IRQ, so if some_func takes a mutex to read uptime and the IRQ then fires and loops waiting for that mutex to be released, the result is an endless loop.
See for example the HAL_GetTick() implementation, from here; I removed the __weak macro and substituted volatile for the __IO macro (those macros are defined in a cmsis file):
static volatile uint32_t uwTick;
void HAL_IncTick(void)
{
uwTick++;
}
uint32_t HAL_GetTick(void)
{
return uwTick;
}
Typically HAL_IncTick() is called from systick interrupt each 1ms.
You have to read the documentation for each separate core and/or chip. x86 is a completely separate thing from ARM, and within both families each instance may vary from any other; they can be, and should be expected to be, completely new designs each time. They might not be, but from time to time they are.
Things to watch out for as noted in the comments.
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
++uptime;
}
void some_func ( void )
{
uint32_t now = uptime;
}
On my machine with the tool I am using today:
Disassembly of section .text:
00000000 <ISR>:
0: e59f200c ldr r2, [pc, #12] ; 14 <ISR+0x14>
4: e5923000 ldr r3, [r2]
8: e2833001 add r3, r3, #1
c: e5823000 str r3, [r2]
10: e12fff1e bx lr
14: 00000000 andeq r0, r0, r0
00000018 <some_func>:
18: e12fff1e bx lr
Disassembly of section .bss:
00000000 <uptime>:
0: 00000000 andeq r0, r0, r0
This could vary, but if you find a tool on one machine one day that builds a problem, then you can assume it is a problem. So far we are actually okay: because some_func is dead code, the read is optimized out.
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
++uptime;
}
uint32_t some_func ( void )
{
uint32_t now = uptime;
return(now);
}
Fixed:
00000000 <ISR>:
0: e59f200c ldr r2, [pc, #12] ; 14 <ISR+0x14>
4: e5923000 ldr r3, [r2]
8: e2833001 add r3, r3, #1
c: e5823000 str r3, [r2]
10: e12fff1e bx lr
14: 00000000 andeq r0, r0, r0
00000018 <some_func>:
18: e59f3004 ldr r3, [pc, #4] ; 24 <some_func+0xc>
1c: e5930000 ldr r0, [r3]
20: e12fff1e bx lr
24: 00000000 andeq r0, r0, r0
Because cores like mips and arm tend to data-abort on unaligned accesses by default, we might assume the tool will not generate an unaligned address for such a clean definition. But if we were talking about packed structs, that is another story: there you have told the compiler to generate an unaligned access, and it will. If you want to feel safe, remember that a "word" on ARM is 32 bits, so you can assert that the address of the variable ANDed with 3 is zero.
On x86 one would also assume that a clean definition like this results in an aligned variable, but x86 doesn't have the data fault issue by default, and as a result compilers are a bit more free; I am focusing on arm, as I think that is your question.
Now if I do this:
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
if(uptime)
{
uptime=uptime+1;
}
else
{
uptime=uptime+5;
}
}
uint32_t some_func ( void )
{
uint32_t now = uptime;
return(now);
}
00000000 <ISR>:
0: e59f2014 ldr r2, [pc, #20] ; 1c <ISR+0x1c>
4: e5923000 ldr r3, [r2]
8: e3530000 cmp r3, #0
c: 03a03005 moveq r3, #5
10: 12833001 addne r3, r3, #1
14: e5823000 str r3, [r2]
18: e12fff1e bx lr
1c: 00000000 andeq r0, r0, r0
and adding volatile
00000000 <ISR>:
0: e59f3018 ldr r3, [pc, #24] ; 20 <ISR+0x20>
4: e5932000 ldr r2, [r3]
8: e3520000 cmp r2, #0
c: e5932000 ldr r2, [r3]
10: 12822001 addne r2, r2, #1
14: 02822005 addeq r2, r2, #5
18: e5832000 str r2, [r3]
1c: e12fff1e bx lr
20: 00000000 andeq r0, r0, r0
The two reads in the source result in two reads in the output. Now, there is a problem here if the read-modify-write can be interrupted, but we assume that since this is an ISR it can't. If you read a 7, add 1, then write an 8, and you were interrupted after the read by something that also modifies uptime, that modification has a limited life: its write happens, say a 5 is stored, then this ISR writes an 8 on top of it.
If a read-modify-write were in the interruptible code, then the ISR could get in there and it probably wouldn't work the way you wanted. That is two readers and two writers; you want one party responsible for writing a shared resource and the others read-only. Otherwise you need a lot more work that is not built into the language.
Note on an arm machine:
typedef int __sig_atomic_t;
...
typedef __sig_atomic_t sig_atomic_t;
so
typedef unsigned int uint32_t;
typedef int sig_atomic_t;
volatile sig_atomic_t uptime = 0;
void ISR ( void )
{
if(uptime)
{
uptime=uptime+1;
}
else
{
uptime=uptime+5;
}
}
uint32_t some_func ( void )
{
uint32_t now = uptime;
return(now);
}
Isn't going to change the result, at least not on that system with that typedef. You need to examine other C libraries and/or sandbox headers to see what they define, or whether (as happens often if you are not careful) the wrong headers are used: the x86_64 headers get used to build ARM programs with the cross compiler. I have seen both gcc and llvm make host-vs-target mistakes.
Going back to a concern, though, which based on your comments you appear to already understand:
typedef unsigned int uint32_t;
uint32_t uptime = 0;
void ISR ( void )
{
if(uptime)
{
uptime=uptime+1;
}
else
{
uptime=uptime+5;
}
}
void some_func ( void )
{
while(uptime&1) continue;
}
This was pointed out in the comments even though you have one writer and one reader
00000020 <some_func>:
20: e59f3018 ldr r3, [pc, #24] ; 40 <some_func+0x20>
24: e5933000 ldr r3, [r3]
28: e2033001 and r3, r3, #1
2c: e3530000 cmp r3, #0
30: 012fff1e bxeq lr
34: e3530000 cmp r3, #0
38: 012fff1e bxeq lr
3c: eafffffa b 2c <some_func+0xc>
40: 00000000 andeq r0, r0, r0
It never goes back to read the variable from memory, and unless someone corrupts the register in an event handler, this can be an infinite loop.
make uptime volatile:
00000024 <some_func>:
24: e59f200c ldr r2, [pc, #12] ; 38 <some_func+0x14>
28: e5923000 ldr r3, [r2]
2c: e3130001 tst r3, #1
30: 012fff1e bxeq lr
34: eafffffb b 28 <some_func+0x4>
38: 00000000 andeq r0, r0, r0
now the reader does a read every time.
Same issue here, even though it is not in a loop: no volatile.
00000020 <some_func>:
20: e59f302c ldr r3, [pc, #44] ; 54 <some_func+0x34>
24: e5930000 ldr r0, [r3]
28: e3500005 cmp r0, #5
2c: 0a000004 beq 44 <some_func+0x24>
30: e3500004 cmp r0, #4
34: 0a000004 beq 4c <some_func+0x2c>
38: e3500001 cmp r0, #1
3c: 03a00006 moveq r0, #6
40: e12fff1e bx lr
44: e3a00003 mov r0, #3
48: e12fff1e bx lr
4c: e3a00007 mov r0, #7
50: e12fff1e bx lr
54: 00000000 andeq r0, r0, r0
uptime can have changed between the tests; volatile fixes this.
So volatile is not a universal solution. Having each variable used for one-way communication is ideal; if you need to communicate the other way, use a separate variable: one writer and one or more readers per variable.
You have done the right thing and consulted the documentation for your chip/core.
So if the variable is aligned (in this case a 32-bit word) AND the compiler chooses the right instruction, then an interrupt won't interrupt the transaction. If it is an LDM/STM, though, you should read the documentation (PUSH and POP are also LDM/STM pseudo-instructions): on some cores/architectures those can be interrupted and restarted, and we are warned about those situations in the ARM documentation.
Short answer: add volatile, make it so there is only one writer per variable, and keep the variable aligned. (And read the docs each time you change chips/cores, and periodically disassemble to check the compiler is doing what you asked it to do.) It doesn't matter if it is the same core type (another Cortex-M3) from the same vendor or a different vendor, or some completely different core/chip (AVR, MSP430, PIC, x86, MIPS, etc.): start from zero, get the docs, read them, and check the compiler output.
TL;DR: Use volatile if an aligned uint32_t is naturally atomic (it is on x86 and ARM); see Why is integer assignment on a naturally aligned variable atomic on x86?. Your code will technically have C11 undefined behaviour, but real implementations will do what you want with volatile.
Or use C11 stdatomic.h with memory_order_relaxed if you want to tell the compiler exactly what you mean. It will compile to the same asm as volatile on x86 and ARM if you use it correctly.
(But if you actually need it to run efficiently on single-core CPUs where load/store of an aligned uint32_t isn't atomic "for free", e.g. with only 8-bit registers, you might rather disable interrupts instead of having stdatomic fall back to using a lock to serialize reads and writes of your counter.)
Whole instructions are always atomic with respect to interrupts on the same core, on all CPU architectures. Partially-completed instructions are either completed or discarded (without committing their stores) before servicing an interrupt.
For a single core, CPUs always preserve the illusion of running instructions one at a time, in program order. This includes interrupts only happening on the boundaries between instructions. See @supercat's single-core answer on Can num++ be atomic for 'int num'?. If the machine has 32-bit registers, you can safely assume that a volatile uint32_t will be loaded or stored with a single instruction. As @old_timer points out, beware of unaligned packed-struct members on ARM, but unless you manually do that with __attribute__((packed)) or something, the normal ABIs on x86 and ARM ensure natural alignment.
Multiple bus transactions from a single instruction for unaligned operands or narrow busses only matters for concurrent read+write, either from another core or a non-CPU hardware device. (e.g. if you're storing to device memory).
Some long-running x86 instructions like rep movs or vpgatherdd have well-defined ways to partially complete on exceptions or interrupts: they update registers so that re-running the instruction does the right thing. But other than that, an instruction has either run or it hasn't, even a "complex" instruction like a memory-destination add that does a read/modify/write. IDK if anyone has ever proposed a CPU that could suspend/resume multi-step instructions across interrupts instead of cancelling them, but x86 and ARM are definitely not like that. There are lots of weird ideas in computer-architecture research papers, but it seems unlikely to be worth keeping all the microarchitectural state needed to resume in the middle of a partially-executed instruction instead of just re-decoding it after returning from the interrupt.
This is why AVX2 / AVX512 gathers always need a gather mask even when you want to gather all the elements, and why they destroy the mask (so you have to reset it to all-ones again before the next gather).
In your case, you only need the store (and load outside the ISR) to be atomic. You don't need the whole ++uptime to be atomic. You can express this with C11 stdatomic like this:
#include <stdint.h>
#include <stdatomic.h>
_Atomic uint32_t uptime = 0;
// interrupt each 1 ms
void ISR()
{
// this is the only location which writes to uptime
uint32_t tmp = atomic_load_explicit(&uptime, memory_order_relaxed);
// the load doesn't even need to be atomic, but relaxed atomic is as cheap as volatile on machines with wide-enough loads
atomic_store_explicit(&uptime, tmp+1, memory_order_relaxed);
// some x86 compilers may fail to optimize to add dword [uptime],1
// but uptime+=1 would compile to LOCK ADD (an atomic increment), which you don't want.
}
// MODIFIED: return the load result
uint32_t some_func()
{
// this does need to be an atomic load
// you typically get that by default with volatile, too
uint32_t now = atomic_load_explicit(&uptime, memory_order_relaxed);
return now;
}
volatile uint32_t compiles to the exact same asm on x86 and ARM. I put the code on the Godbolt compiler explorer. This is what clang6.0 -O3 does for x86-64. (With -mtune=bdver2, it uses inc instead of add, but it knows that memory-destination inc is one of the few cases where inc is still worse than add on Intel :)
ISR: # #ISR
add dword ptr [rip + uptime], 1
ret
some_func: # #some_func
mov eax, dword ptr [rip + uptime]
ret
inc_volatile: // void func(){ volatile_var++; }
add dword ptr [rip + volatile_var], 1
ret
gcc uses separate load/store instructions for both volatile and _Atomic, unfortunately.
# gcc8.1 -O3
mov eax, DWORD PTR uptime[rip]
add eax, 1
mov DWORD PTR uptime[rip], eax
At least that means there's no downside to using _Atomic or volatile _Atomic on either gcc or clang.
Plain uint32_t without either qualifier is not a real option, at least not for the read side. You probably don't want the compiler to hoist get_time() out of a loop and use the same time for every iteration. In cases where you do want that, you could copy it to a local. That could result in extra work for no benefit if the compiler doesn't keep it in a register, though (e.g. across function calls it's easiest for the compiler to just reload from static storage). On ARM, though, copying to a local may actually help because then it can reference it relative to the stack pointer instead of needing to keep a static address in another register, or regenerate the address. (x86 can load from static addresses with a single large instruction, thanks to its variable-length instruction set.)
If you want any stronger memory-ordering, you can use atomic_signal_fence(memory_order_release); or whatever (signal_fence not thread_fence) to tell the compiler you only care about ordering wrt. code running asynchronously on the same CPU ("in the same thread" like a signal handler), so it will only have to block compile-time reordering, not emit any memory-barrier instructions like ARM dmb.
e.g. in the ISR:
uint32_t tmp = atomic_load_explicit(&idx, memory_order_relaxed);
tmp++;
shared_buf[tmp] = 2; // non-atomic
// Then do a release-store of the index
atomic_signal_fence(memory_order_release);
atomic_store_explicit(&idx, tmp, memory_order_relaxed);
Then it's safe for a reader to load idx, run atomic_signal_fence(memory_order_acquire);, and read from shared_buf[tmp] even if shared_buf is not _Atomic. (Assuming you took care of wraparound issues and so on.)
volatile is a promise the compiler must keep: every read and write of the value must actually be performed, so the compiler cannot just keep the value cached in a CPU register and skip or reorder the memory accesses. That is the main rule.
Then let's look at the architecture. Native CPU instructions on native-width types are atomic. Many operations are split into multiple steps, though, for example when a value must be copied from memory to memory, and an interrupt can occur between the steps. But don't worry, that is normal: as long as the value has not yet been stored into the destination variable, you can treat it as a not-yet-committed operation.
The problem is when you use words longer than the CPU implements, for example a 32-bit value on a 16- or 8-bit processor. Reading and writing the value is then split into many steps, so it is possible that part of the value has been stored while another part has not, and you read a wrong, torn value.
In this scenario disabling interrupts is not always a good approach, because it can take a long time; of course you can use locking instead, but that can cost just as much.
But you can make a structure whose first field is the data and whose second field is a counter of a width that suits the architecture. When reading, you first read the counter, then the value, then the counter a second time. If the two counter readings differ, you repeat the process.
Of course this doesn't guarantee everything is proper, but it typically saves a lot of CPU cycles. For example, with a 16-bit verification counter there are 65536 possible values, so to fool the check the reading process would have to be frozen across 65536 missed interrupts between its two counter reads before the main counter (or any other stored value) could come out wrong.
Of course, if you are using a 32-bit value on a 32-bit architecture it is not a problem and you don't need to specially secure the operation, provided the architecture performs its aligned native-width accesses atomically :)
example code:
struct
{
uint32_t value;    // the important value
uint32_t watchdog; // secures value; width is platform dependent, usually at least 16 bits
} volatile SecuredCounter;
void ISR ( void )
{
// this is the only location which writes to SecuredCounter
++SecuredCounter.value;
++SecuredCounter.watchdog;
}
uint32_t Read_uptime ( void )
{
while (1) {
uint32_t secure1 = SecuredCounter.watchdog; // read the counter first
uint32_t value = SecuredCounter.value;      // read the value
uint32_t secure2 = SecuredCounter.watchdog; // read the counter again; it should match
if (secure1 == secure2) return value;       // no write happened in between; the copy is proper
}
}
void some_func ( void )
{
uint32_t now = Read_uptime();
}
A different approach is to keep two identical counters and increment both in a single function. In the read function you copy both values to local variables and compare them; if they are identical, the value is proper and you return one of them. If they differ, your read function was interrupted, so repeat the read. There is very little chance of it happening again on the repeated read, and even if it does, the loop cannot stall forever.

Resources