Related
Context: STM32F469 Cortex-M4 (ARMv7-M Thumb-2), Windows 10, GCC, STM32CubeIDE. Learning/trying out inline assembly and reading disassembly, stack management, etc.; writing to core registers, observing register contents, and examining RAM around the stack pointer to understand how things work.
I've noticed that when I call a function that takes an argument, the instructions at the start of the called function do "store R3 at RAM address X" followed immediately by "read RAM address X back into R3". So it writes a value and reads the same value straight back; R3 is not changed in between. If it only wanted to save the value of R3 onto the stack, why load it back?
C code, caller function (main), my code:
asm volatile(" LDR R0,=#0x00000000\n"
" LDR R1,=#0x11111111\n"
" LDR R2,=#0x22222222\n"
" LDR R3,=#0x33333333\n"
" LDR R4,=#0x44444444\n"
" LDR R5,=#0x55555555\n"
" LDR R6,=#0x66666666\n"
" MOV R7,R7\n" //Stack pointer value is here, used for stack data access
" LDR R8,=#0x88888888\n"
" LDR R9,=#0x99999999\n"
" LDR R10,=#0xAAAAAAAA\n"
" LDR R11,=#0xBBBBBBBB\n"
" LDR R12,=#0xCCCCCCCC\n"
);
testInt = addFifteen(testInt); //testInt=0x03; returns uint8_t, argument uint8_t
The function call generates instructions to load the function argument into R3, move it to R0, then branch with link to addFifteen. So by the time I enter addFifteen, R0 and R3 both hold the value 0x03 (testInt). So far so good. Here is what the call looks like:
testInt = addFifteen(testInt);
08000272: ldrb r3, [r7, #11]
08000274: mov r0, r3
08000276: bl 0x80001f0 <addFifteen>
So I go into addFifteen, my C code for addFifteen:
uint8_t addFifteen(uint8_t input){
return (input + 15U);
}
and its disassembly:
addFifteen:
080001f0: push {r7}
080001f2: sub sp, #12
080001f4: add r7, sp, #0
080001f6: mov r3, r0
080001f8: strb r3, [r7, #7]
080001fa: ldrb r3, [r7, #7]
080001fc: adds r3, #15
080001fe: uxtb r3, r3
08000200: mov r0, r3
08000202: adds r7, #12
08000204: mov sp, r7
08000206: ldr.w r7, [sp], #4
0800020a: bx lr
My primary interest is in the two instructions at 1f8 and 1fa. The function stores R3 on the stack and then immediately loads the freshly written value back into the register that still holds it anyway.
Questions are:
What is the purpose of this "store register A into RAM X, then read RAM X back into register A" sequence? The read doesn't seem to serve any purpose. Is it to make sure the RAM write has completed?
The PUSH {R7} instruction leaves the stack 4-byte aligned instead of 8-byte aligned. But immediately after that instruction SP is decremented by 12 bytes, so it becomes 8-byte aligned again; therefore this behavior is OK. Is that statement correct? And what if an interrupt happens between these two instructions -- will the alignment be fixed during exception stacking for the duration of the ISR?
From what I've read about caller/callee-saved registers (it's very hard to find well-organized information on this; if you have good material, please share a link), at least R0-R3 should be placed on the stack when I call a function. However, in this case NONE of the registers were pushed onto the stack. I verified this by checking memory around the stack pointer: 0x11111111 and 0x22222222 would have been easy to spot, but they aren't there, and nothing pushes them there. The values I had in R0 and R3 before the call are simply gone forever. Why weren't any registers pushed onto the stack before the function call? I would expect R3 to be 0x33333333 when addFifteen returns, because that's what it was before the call, but that value is casually overwritten even before the branch to addFifteen. Why didn't GCC generate instructions to push R0-R3 onto the stack and only then branch with link to addFifteen?
If you need some compiler settings, please, let me know where to find them in Eclipse (STM32CubeIDE) and what exactly you need there, I will happily provide them and add them to the question here.
uint8_t addFifteen(uint8_t input){
return (input + 15U);
}
What you are looking at here is unoptimized code; at least with gnu, the input and local variables get a memory location on the stack.
00000000 <addFifteen>:
0: b480 push {r7}
2: b083 sub sp, #12
4: af00 add r7, sp, #0
6: 4603 mov r3, r0
8: 71fb strb r3, [r7, #7]
a: 79fb ldrb r3, [r7, #7]
c: 330f adds r3, #15
e: b2db uxtb r3, r3
10: 4618 mov r0, r3
12: 370c adds r7, #12
14: 46bd mov sp, r7
16: bc80 pop {r7}
18: 4770 bx lr
What you see with r3 is that the input variable, input, arrives in r0. Because the code is not optimized, it is first copied into r3 and then saved to its memory location on the stack.
Set up the stack
00000000 <addFifteen>:
0: b480 push {r7}
2: b083 sub sp, #12
4: af00 add r7, sp, #0
save input to the stack
6: 4603 mov r3, r0
8: 71fb strb r3, [r7, #7]
so now we can start implementing the code in the function, which wants to do math on the input variable, so do that math
a: 79fb ldrb r3, [r7, #7]
c: 330f adds r3, #15
Convert the result to an unsigned char.
e: b2db uxtb r3, r3
Now prepare the return value
10: 4618 mov r0, r3
and clean up and return
12: 370c adds r7, #12
14: 46bd mov sp, r7
16: bc80 pop {r7}
18: 4770 bx lr
Now if I tell it not to use a frame pointer (just a waste of a register).
00000000 <addFifteen>:
0: b082 sub sp, #8
2: 4603 mov r3, r0
4: f88d 3007 strb.w r3, [sp, #7]
8: f89d 3007 ldrb.w r3, [sp, #7]
c: 330f adds r3, #15
e: b2db uxtb r3, r3
10: 4618 mov r0, r3
12: b002 add sp, #8
14: 4770 bx lr
And you can still see each of the fundamental steps in implementing the function. Unoptimized.
Now if you optimize
00000000 <addFifteen>:
0: 300f adds r0, #15
2: b2c0 uxtb r0, r0
4: 4770 bx lr
It removes all the excess.
number two.
Yes, I agree this looks wrong: gnu certainly does not keep the stack 8-byte aligned at all times. But I have not read the details of the ARM calling convention, nor gcc's interpretation of it. They may claim to follow a spec, but at the end of the day the compiler authors choose the calling convention for their compiler; they are under no obligation to ARM or Intel or anyone else to conform to any spec. It is their choice, and just as the C language itself leaves many things implementation defined, gnu implements things one way and other compilers another. Perhaps this is the same. The same goes for how the incoming variable gets saved to the stack; we will see that llvm/clang handles it differently.
number three.
r0-r3 (and a register or two more) may be called caller-saved, but the better way to think of them is as volatile: the callee is free to modify them without saving them. It is not so much a case of saving the r0 register itself; r0 represents a variable, and you are managing that variable as part of functionally implementing the high-level code.
For example
unsigned int fun1 ( void );
unsigned int fun0 ( unsigned int x )
{
return(fun1()+x);
}
00000000 <fun0>:
0: b510 push {r4, lr}
2: 4604 mov r4, r0
4: f7ff fffe bl 0 <fun1>
8: 4420 add r0, r4
a: bd10 pop {r4, pc}
x arrives in r0, and we need to preserve that value until after fun1() is called; r0 can be destroyed/modified by fun1(). So in this case the compiler saves r4, not r0, and keeps x in r4.
clang does this as well
00000000 <fun0>:
0: b5d0 push {r4, r6, r7, lr}
2: af02 add r7, sp, #8
4: 4604 mov r4, r0
6: f7ff fffe bl 0 <fun1>
a: 1900 adds r0, r0, r4
c: bdd0 pop {r4, r6, r7, pc}
Back to your function.
clang, unoptimized also keeps the input variable in memory (stack).
00000000 <addFifteen>:
0: b081 sub sp, #4
2: f88d 0003 strb.w r0, [sp, #3]
6: f89d 0003 ldrb.w r0, [sp, #3]
a: 300f adds r0, #15
c: b2c0 uxtb r0, r0
e: b001 add sp, #4
10: 4770 bx lr
and you can see the same steps: prep the stack, store the input variable, take the input variable and do the math, prepare the return value, clean up, return.
Clang/llvm optimized:
00000000 <addFifteen>:
0: 300f adds r0, #15
2: b2c0 uxtb r0, r0
4: 4770 bx lr
Happens to be the same as gnu. There is no expectation that any two different compilers generate the same code, nor that any two versions of the same compiler generate the same code.
Unoptimized, the input and local variables (none in this case) get a home on the stack. So what you are seeing is the input variable being put in its home on the stack as part of the setup of the function. Then the function body wants to operate on that variable, so, unoptimized, it fetches the value back from memory into an intermediate (which in this case did not get a home on the stack), and so on. You see the same thing with volatile variables: they get written to memory, read back, modified, written to memory again, read back, etc.
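As an aside, here is a minimal sketch of that volatile case (my own illustration, not code from the question): because every source-level access to a volatile object must become a real memory access, the store/load pairs survive even with optimization enabled.

#include <stdint.h>

volatile uint8_t vInput;            /* illustrative volatile object */

uint8_t addFifteenVolatile(void)
{
    vInput = vInput + 15U;          /* load from memory, add, store back to memory */
    return vInput;                  /* and load from memory again for the return value */
}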
Yes, I agree, but I have not read the specs. At the end of the day it is gcc's calling convention, or its interpretation of whatever spec it chooses to follow. It has been doing this (not keeping the stack 8-byte aligned 100% of the time) for a long time and it does not fail: for all called functions the stack is aligned at the point of the call, but ARM code generated by gcc is not aligned at every instruction where an interrupt could occur. It has been this way since they adopted that spec.
By definition r0-r3, etc. are volatile: the callee can modify them at will. A function only needs to preserve such a value itself if it still needs it after making a call of its own (as fun0 did with x, by moving it into r4). In both the unoptimized and optimized cases only r0 matters for your function: it is the input variable and it is used for the return value. You saw in the function I created that the input value was preserved for later, even when optimized. But, by definition, the caller assumes these registers are destroyed by called functions, and called functions can destroy their contents with no need to save them.
As far as inline assembly goes: it is a different assembly language than "real" assembly language. I think you have a ways to go before being ready for it, but maybe not. After decades of constant bare-metal work I have found essentially zero real use cases for inline assembly; the cases I do see are mostly laziness -- avoiding adding real assembly files to the build system, or avoiding writing real assembly at all. I see it as a gee-whiz feature that folks use like unions and bitfields.
Within gnu, for ARM, you have at least four incompatible assembly languages: the non-unified-syntax real assembly, the unified-syntax real assembly, the assembly you see when you use gcc to assemble instead of as, and then gcc's inline assembly. Despite claims of compatibility, clang's ARM assembly language is not 100% compatible with gnu's, and llvm/clang does not have a separate assembler -- you feed assembly to the compiler. ARM's own toolchains over the years have had assembly languages completely incompatible with gnu's for ARM. This is all expected and normal: assembly language is specific to the tool, not the target.
Before you can get into inline assembly, learn some of the real assembly language. To be fair, perhaps you already know it, perhaps quite well, and this question is really about discovering how compilers generate code and how strange it looks once you find out that it is not a one-to-one thing (not all tools in all cases generate the same output from the same input).
For inline asm, while you can specify registers, you generally want to let the compiler choose them. Most of the work in inline assembly is not the assembly itself but the interface language that the specific compiler uses to connect it to the surrounding C -- which is compiler specific; move to another compiler and expect a whole new language to learn. Moving between assemblers is also a new language, but at least the instruction syntax tends to stay the same and the differences are in everything else: labels, directives and such. If you are lucky and it is a toolchain rather than just an assembler, you can look at the compiler's output to start to understand the language and compare it with whatever documentation you can find. Gnu's documentation is pretty bad in this case, so a lot of reverse engineering is needed. At the same time you are more likely to be successful with the gnu tools than any other -- not because they are better (in many cases they are not), but because of the sheer user base, the common features across targets, and decades of history.
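As an illustration only (my sketch, not the OP's code): with gcc's extended asm you describe outputs, inputs and clobbers, and the compiler picks the registers for you.

#include <stdint.h>

static inline uint32_t add_fifteen_asm(uint32_t x)
{
    uint32_t result;
    /* "=r"/"r" let the compiler choose the registers; "cc" says the flags are clobbered */
    __asm__ ("adds %0, %1, #15" : "=r" (result) : "r" (x) : "cc");
    return result;
}

Because the registers are chosen by the compiler, the snippet keeps working even when the register allocation around it changes.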
I would get really good at interfacing asm with C by creating mock C functions to see which registers are used, etc. Or, even better, implement the routine in C, compile it, then hand modify/improve the compiler's output (you do not need to be a guru to beat the compiler -- to be as consistent, perhaps -- but fairly often you can easily see improvements to be made to gcc's output, and gcc has not been getting better over the last several versions, as you can see from time to time on this site). Get strong in the asm for this toolchain and target and in how the compiler works, and then perhaps learn gnu's inline assembly language.
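For example, a throwaway mock like the following (names invented) can be compiled and disassembled just to confirm where the arguments land and where the return value goes for this target/ABI, before any real assembly is written:

/* mock only: disassemble to see that 'a' arrives in r0, 'b' in r1,
   and the result is returned in r0 */
unsigned int mock_interface(unsigned int a, unsigned int b)
{
    return (a + b);
}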
I'm not sure there is a specific purpose to it; it is just one solution the compiler has found.
For example the code:
unsigned int f(unsigned int a)
{
return sqrt(a + 1);
}
compiles with ARM GCC 9 (none) at optimisation level -O0 to:
push {r7, lr}
sub sp, sp, #8
add r7, sp, #0
str r0, [r7, #4]
ldr r3, [r7, #4]
adds r3, r3, #1
mov r0, r3
bl __aeabi_ui2d
mov r2, r0
mov r3, r1
mov r0, r2
mov r1, r3
bl sqrt
...
and in level -O1 to:
push {r3, lr}
adds r0, r0, #1
bl __aeabi_ui2d
bl sqrt
...
As you can see, the asm is much easier to understand at -O1: the parameter arrives in R0, add 1, call the functions.
The hardware supports a non-8-byte-aligned stack during exception entry: on Cortex-M, when CCR.STKALIGN is set, the exception entry sequence realigns SP to 8 bytes and records the adjustment in bit 9 of the stacked xPSR, undoing it on return. See the ARMv7-M Architecture Reference Manual.
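For reference, a small sketch (assuming a CMSIS device header such as stm32f4xx.h) of where that behavior is controlled: the CCR.STKALIGN bit, which resets to 1 on the Cortex-M4, makes exception entry realign SP to 8 bytes.

#include "stm32f4xx.h"   /* assumed CMSIS device header */

int exception_stack_realign_enabled(void)
{
    /* STKALIGN = 1: the hardware 8-byte aligns SP when an exception is taken */
    return (SCB->CCR & SCB_CCR_STKALIGN_Msk) != 0;
}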
The "caller saved" registers do not necessarily need to be stored on the stack, it's up to the caller to know whether it needs to store them or not.
Here you are mixing (if I understood correctly) C and assembly, so you have to do the compiler's job before switching back to C: either you put your values in callee-saved registers (which, by convention, any called function must preserve) or you save them on the stack yourself.
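For example, one possible approach (a sketch; the wrapper function and register choice are mine): load the test patterns into callee-saved registers and list them as clobbers, so the compiler knows about them, and any compiled C function you then call has to preserve them.

#include <stdint.h>

extern uint8_t addFifteen(uint8_t input);

void experiment(void)                       /* illustrative wrapper */
{
    uint8_t testInt = 0x03;

    asm volatile(
        " LDR R4,=0x44444444 \n"
        " LDR R5,=0x55555555 \n"
        " LDR R6,=0x66666666 \n"
        :                                   /* no outputs */
        :                                   /* no inputs  */
        : "r4", "r5", "r6"                  /* clobbers: the compiler now knows these change */
    );

    testInt = addFifteen(testInt);          /* r4-r6 are callee-saved, so addFifteen must
                                               preserve whatever it finds in them */
    (void)testInt;
}

R0-R3, on the other hand, can never be relied upon across a call to compiled code, no matter what you do on the asm side.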
Here is disassembly of code that was compiled from C:
00799d60 <sub_799d60>:
799d60: b573 push {r0, r1, r4, r5, r6, lr}
799d62: 0004 movs r4, r0
799d64: f000 e854 blx 799e10 <jmp_sub_100C54>
799d68: 4b15 ldr r3, [pc, #84] ; (799dc0 <sub_799d60+0x60>)
799d6a: 0005 movs r5, r0
799d6c: 4668 mov r0, sp
799d6e: 4798 blx r3
The target of the subroutine call (799d6e: 4798 blx r3) takes a 64 bit integer pointer argument and returns a 64 bit integer. That routine is a library function, so I am not able to make any modifications to it.
Could this operation overwrite the stack locations that store the values of lr and r6?
You say that the branch target "takes a 64 bit integer pointer argument and returns a 64 bit integer", but this is not the case. It takes a pointer to a 64-bit integer as its only argument (and this pointer is 32 bits long unless you're on aarch64, which I doubt given the rest of the code); and it returns nothing, it simply overwrites the 64-bit value pointed to by the argument you passed in. I'm sure this is what you meant, but be careful with your use of terminology, because the difference between these things is important! In particular, no 64-bit argument is passed either into or out of the function you're calling.
On to the question itself. The key to understanding what the compiler is doing here is to look at the very first line:
push {r0, r1, r4, r5, r6, lr}
The ARM calling convention doesn't require r0 and r1 to be call-preserved, so what are they doing in the list? The answer is that the compiler has added these 'dummy' pushes to create some space on the stack. The push operation above is essentially equivalent to
push {r4, r5, r6, lr}
sub sp, sp, #0x08
except that it saves an instruction. The result is not quite the same, of course, because whatever was in r0 and r1 ends up being written to these locations; but given that there's no way to know what was there beforehand, and the stacked values are about to get overwritten anyway, it's of no consequence. So we have, as a stack frame,
lr
r6
r5
r4
(r1)
sp -> (r0)
with the stack pointer pointing at the space created by the dummy push of r0 and r1. Now we just have
mov r0, sp
which copies the stack pointer to r0 to use as the pointer argument to the function you're calling, which will then overwrite the two words at this location to result in a stack frame of
lr
r6
r5
r4
(64-bit value, high word)
sp -> (64-bit value, low word)
You haven't shown any code beyond the blx r3, so it's not possible to say exactly what happens to the stack at the end of the function. But if this function returns nothing, I would expect to see a matching
pop {r0, r1, r4, r5, r6, pc}
which will, of course, result in your 64-bit result being left in r0 and r1. But these registers are call-clobbered according to the calling convention so there's no problem.
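To make the shape concrete, a plausible (invented) C equivalent of what the compiler produced here would look something like this:

#include <stdint.h>

extern void lib_read64(uint64_t *out);   /* hypothetical stand-in for the blx r3 target */

void sub_799d60_equivalent(void)
{
    uint64_t value;       /* occupies the two stack words created by the dummy push of r0/r1 */
    lib_read64(&value);   /* mov r0, sp ; blx r3 -- the callee fills in both words */
    /* ... value is then consumed by the rest of the function ... */
    (void)value;
}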
Here is my question.
Is there a good way to use global context structures in an embedded C program?
I mean, is it better to pass them as function parameters or to use the global reference directly inside the function? Or is there no difference?
Example:
Context_t myContext; // is a structure with a lot of members
void function1(Context_t *ctx)
{
ctx->x = 1;
}
or
void function2(void)
{
myContext.x = 1;
}
Thanks.
Where to allocate variables is a program design decision, not a performance decision.
On modern systems there is not going to be much of a performance difference between your two versions.
When passing a lot of different parameters, rather than just one single pointer as in this case, there could be a performance difference. Older systems, most notably 8 bit MCUs with crappy compilers, could benefit quite a lot from using file scope variables when it comes to performance. Mostly since old legacy architectures like PIC, AVR, HC08, 8051 etc had very limited stack and register resources. If you have to maintain such old stuff, then file scope variables will improve performance.
That being said, you should allocate variables where it makes sense. If the purpose of your code unit is to process a Context_t allocated elsewhere, it should get passed as a pointer. If Context_t is private data that the caller does not need to know about, you could allocate it at file scope.
Please note that there is never a reason to declare "global" variables at file scope. All your file scope variables should have internal linkage. That is, they should be declared as static. This is perfectly fine practice in most embedded systems, particularly single core, bare metal MCU applications.
However, note that file scope variables are not thread-safe and cause complications on multi-process systems. If you are, for example, using an RTOS, you should minimize the number of such variables.
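As a small sketch of that last point (identifiers invented, Context_t reduced to a stub): keep the context at file scope with internal linkage and touch it only through functions in the same translation unit.

typedef struct { int x; /* ...many more members... */ } Context_t;

static Context_t myContext;      /* internal linkage: not visible to other files */

void context_set_x(int value)
{
    myContext.x = value;         /* direct access, no pointer parameter needed */
}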
Strictly to your question: if you are going to have the global, then use it as a global directly. Having one function use it as a global and then pass it down requires setup by the caller and the consumption of a resource (register or stack slot) for the parameter, with only a slight saving in the function itself:
typedef struct
{
unsigned int a;
unsigned int b;
unsigned int c;
unsigned int d;
unsigned int e;
unsigned int f;
unsigned int g;
unsigned int h;
unsigned int i;
unsigned int j;
} SO_STRUCT;
SO_STRUCT so;
unsigned int fun1 ( SO_STRUCT s )
{
return(s.a+s.g);
}
unsigned int fun2 ( SO_STRUCT *s )
{
return(s->a+s->g);
}
unsigned int fun3 ( void )
{
return(so.a+so.g);
}
Disassembly of section .text:
00000000 <fun1>:
0: e24dd010 sub sp, sp, #16
4: e24dc004 sub r12, sp, #4
8: e98c000f stmib r12, {r0, r1, r2, r3}
c: e59d3018 ldr r3, [sp, #24]
10: e59d0000 ldr r0, [sp]
14: e28dd010 add sp, sp, #16
18: e0800003 add r0, r0, r3
1c: e12fff1e bx lr
00000020 <fun2>:
20: e5902000 ldr r2, [r0]
24: e5900018 ldr r0, [r0, #24]
28: e0820000 add r0, r2, r0
2c: e12fff1e bx lr
00000030 <fun3>:
30: e59f300c ldr r3, [pc, #12] ; 44 <fun3+0x14>
34: e5930000 ldr r0, [r3]
38: e5933018 ldr r3, [r3, #24]
3c: e0800003 add r0, r0, r3
40: e12fff1e bx lr
44: 00000000 andeq r0, r0, r0
The caller of fun2 would have to load the address of the struct to pass it in, so in this case the extra consumption is that we lose a register to the parameter. Since there were so few parameters it was a wash, for a single call from a single higher function. If you continue to nest calls, the best you can do is keep handing the register down:
unsigned int funx ( SO_STRUCT *s );
unsigned int fun2 ( SO_STRUCT *s )
{
return(funx(s)+3);
}
Disassembly of section .text:
00000000 <fun2>:
0: e92d4010 push {r4, lr}
4: ebfffffe bl 0 <funx>
8: e8bd4010 pop {r4, lr}
c: e2800003 add r0, r0, #3
10: e12fff1e bx lr
So no matter whether the struct is originally global or local to some function: if I call the next function and pass by reference, the first caller has to set up the parameter -- with ARM that is a register, r0, so either stack-pointer math or a load of an address into r0. r0 goes into fun2() and can be used directly to reach the members by reference, assuming the function is simple enough that it doesn't have to evict the pointer out to the stack. Then, calling funx() with the same pointer, fun2() does NOT have to reload r0 (in this simplified case it doesn't get much better than this), and funx() can reference the members from r0 directly. Had fun2() and funx() used the global directly, they would both resemble fun3 above, where each function needs a load to get the address plus a word in .text to hold that address.
One would hope multiple functions in a file would share that address load, but don't make that assumption:
unsigned int fun3 ( void )
{
return(so.a+so.g);
}
unsigned int funz ( void )
{
return(so.a+so.h);
}
00000000 <fun3>:
0: e59f300c ldr r3, [pc, #12] ; 14 <fun3+0x14>
4: e5930000 ldr r0, [r3]
8: e5933018 ldr r3, [r3, #24]
c: e0800003 add r0, r0, r3
10: e12fff1e bx lr
14: 00000000 andeq r0, r0, r0
00000018 <funz>:
18: e59f300c ldr r3, [pc, #12] ; 2c <funz+0x14>
1c: e5930000 ldr r0, [r3]
20: e593301c ldr r3, [r3, #28]
24: e0800003 add r0, r0, r3
28: e12fff1e bx lr
2c: 00000000 andeq r0, r0, r0
As your function gets more complicated, though, this optimization (simply passing r0 down as the first parameter) goes away. You end up storing and then retrieving the address of the struct, so it costs a stack location, a store and some loads, whereas going directly to the global costs a flash/.text location and a load -- slightly cheaper.
If you are on a system where parameters are passed on the stack, then continuing to pass the pointer doesn't even have a chance at that optimization: you have to keep copying the pointer onto the stack for each nested call.
So, as far as your direct question goes, there is no correct answer other than: it depends. And you would need to be on a really tight performance or resource budget to worry about a premature optimization like that.
As far as consumption goes, globals have the benefit, on a very tightly constrained system, that their consumption is fixed and known at compile time. Making a habit of large local variables, structures in particular, creates a lot of stack use, which is dynamic and much harder to measure (it can change with every line of code you add or remove: spend a week determining the usage, then add a line and it could grow anywhere from nothing to a few percent to tens of percent). At the same time, a variable or structure used only once or a few times MIGHT be better served as a local; it depends on how deep in the nested functions it lives. If it is at the end of the call chain it doesn't cost much; if it is declared locally in the top function it costs the same as being global but now lives on the stack and is not measured at compile time. One struct: no big deal. As a habit: that is when it matters.
So, to your specific question: it cannot be determined ahead of time, and you cannot make a general rule that it is "faster" to pass by reference or to use the global directly, since it is easy to create use cases that demonstrate each being true. The small improvement would come from knowing your memory consumption at compile time (global) vs. runtime (local) -- but your question was not about local vs. global, it was about how to access the global.
It is much better to pass a reference to the structure than to modify the structure as a global. Passing a reference makes it visible that the function is (potentially) changing the structure.
From a performance standpoint there won't be a measurable difference.
If the number of structure accesses is significant, passing the reference can also result in significantly smaller code.
Global variables are generally best avoided, and there are plenty of reasons for that. While some find it easy to share a single resource between many functions through globals, there are flipsides: code becomes harder to understand, and variables and functions become tightly coupled. We often end up using libraries, and with dynamically linked modules it is troublesome if different libraries carry their own instances of global variables.
So, with direct reference to your question, I would prefer
void function1(Context_t *ctx)
against anything that involves changing a global variable.
But again, if the necessary precautions are taken regarding the tight coupling of global variables and functions, it is okay to go with an existing implementation that uses global variables, rather than scrapping the whole tested thing and starting over.
I am trying to print all the elements of an array using the write() system call. I haven't used write a whole lot before but from what I understand, the parameters I am passing to write seem to be correct. I know that write takes:
The file descriptor of the file (1 means standard output).
The buffer from where data is to be written into the file.
The number of bytes to be read from the buffer.
Here is my code:
mov r3, #0 /*Offset (also acts as LVC)*/
print: mov r0, #1 /*Indicates standard output*/
ldr r4, =array /*Set r4 to the address of array*/
ldr r5, [r3,r4] /*Add offset to array address*/
ldr r1, [r6] /*Element of array to write*/
mov r2, #1 /*Write 1 byte*/
bl write
add r3, r3, #1 /*Increase offset each iteration*/
cmp r3, #41
blt print
Does this look correct? Is it likely that my problem is elsewhere in my program?
No. You want to pass, in r1, the address of the data to write, not the value itself.
Therefore r1 should be set to just <address-of-array> + <index>, i.e.:
ldr r4, =array /*Set r4 to the address of array*/
add r1, r3, r4 /*Add offset to point to array item */
It crashed for you because you tried to read from memory at an invalid address -- the value of the array item. You were reading a word (ldr r5, [r3,r4]), not a byte, from the array at index r3, and then trying to read another word (again not a byte) from that value used as an address.
It is not relevant in this case, but just for reference, you would use ldrb to read a single byte.
Also the "invalid address" above might be both that it is undefined and falls outside of any mapped region, but also that it is improperly aligned. The ARM architecture disallows reading a word, e.g. a 32 bit value, from address not aligned at those 32-bits (4 bytes). For r3 == 1 in the second iteration, this wouldn't apply (assuming array would start on a 32-bit boundary).
In GCC you can use a computed goto by taking the address of a label (as in void *addr = &&label) and then jumping to it (goto *addr). The GCC manual says you can jump to this address from anywhere in the function; only jumping to it from another function is undefined.
When you jump to the code it cannot assume anything about the values of registers, so presumably it reloads them from memory. However the value of the stack pointer is also not necessarily defined, for example you could be jumping from a nested scope which declares extra variables.
The question is: how does GCC manage to set the stack pointer to the correct value (it may be too high or too low)? And how does this interact with -fomit-frame-pointer (if it does)?
Finally, for extra points, what are the real constraints about where you can jump to a label from? For example, you could probably do it from an interrupt handler.
In general, when you have a function with labels whose addresses are taken, gcc needs to ensure that you can jump to those labels from any indirect goto in the function -- so it needs to lay out the stack such that either the exact stack pointer doesn't matter (everything is indexed off the frame pointer), or the stack pointer is consistent across all of them. Generally, this means it allocates a fixed amount of stack space when the function starts and never touches the stack pointer afterwards. So if you have inner scopes with variables, the space is allocated at function start and freed at function end, not in the inner scope. Only the constructor and destructor (if any) need to be tied to the inner scope.
The only constraint on jumping to labels is the one you noted -- you can only do it from within the function that contains the labels. Not from any other stack frame of any other function or interrupt handler or anything.
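A minimal sketch of the mechanism being discussed (my own example, not from the question): the label addresses are taken with &&, and the indirect goto stays inside the one function that contains the labels.

/* tiny dispatcher using GCC's labels-as-values extension;
   assumes each opcode in ops[] is 0, 1 or 2 */
int run(const unsigned char *ops, int n)
{
    static void *table[] = { &&op_nop, &&op_inc, &&op_halt };
    int acc = 0, i = 0;

next:
    if (i >= n) goto op_halt;
    goto *table[ops[i++]];          /* computed goto: targets must be labels in this function */

op_nop:  goto next;
op_inc:  acc++; goto next;
op_halt: return acc;
}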
edit
If you want to be able to jump from one stack frame to another, you need to use setjmp/longjmp or something similar to unwind the stack. You could combine that with an indirect goto -- something like:
if (target = (void *)setjmp(jmpbuf)) goto *target;
that way you could call longjmp(jmpbuf, label_address); from any called function to unwind the stack and then jump to the label. As long as setjmp/longjmp works from an interrupt handler, this will also work from an interrupt handler. It also depends on sizeof(int) == sizeof(void *), which is not always the case.
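Here is a slightly fuller sketch of that combination (illustrative names; it leans on the same sizeof(int) == sizeof(void *) caveat mentioned above):

#include <setjmp.h>
#include <stdint.h>

static jmp_buf jmpbuf;

/* hypothetical helper: unwinds back to dispatch() and resumes at 'label' */
static void jump_to(void *label)
{
    longjmp(jmpbuf, (int)(intptr_t)label);
}

int dispatch(void)
{
    void *target;
    if ((target = (void *)(intptr_t)setjmp(jmpbuf)) != 0)
        goto *target;               /* resume at the label whose address was passed */

    /* ... normal code; somewhere down the call chain: jump_to(&&done); ... */
    jump_to(&&done);

done:
    return 1;
}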
I don't think the fact that the gotos are computed adds anything to the effect this has on local variables. The lifetime of a local variable starts when the block containing its declaration is entered and ends only when that scope can no longer be reached in any way. This holds for all sorts of control flow, in particular goto and longjmp. So all such variables are safe until the return from the function in which they are declared.
Labels in C are visible to the whole enclosing function, so it makes little difference that the goto is computed; you could always replace a computed goto with a more or less involved switch statement.
One notable exception to this rule on local variables is variable length arrays (VLAs). Since they necessarily change the stack pointer, they have different rules: their lifetime ends as soon as you leave their block of declaration, and goto and longjmp are not allowed to jump into a scope after the declaration of a variably modified type.
In the function prologue, the current position of the stack is saved in a callee-saved register, even with -fomit-frame-pointer.
In the example below, sp+4 is stored in r7 in the prologue, and in the epilogue (LBB0_3) the stack pointer is restored from it (r7-4 -> r4; r4 -> sp). Because of this you can jump anywhere within the function and grow the stack at any point without corrupting it. If you jump out of the function (via goto *addr) you will skip this epilogue and royally screw up the stack.
Short example which also uses alloca which dynamically allocates memory on the stack:
clang -arch armv7 -fomit-frame-pointer -c -S -O0 -o - stack.c
#include <alloca.h>
int foo(int sz, int jmp) {
char *buf = alloca(sz);
int rval = 0;
if( jmp ) {
rval = 1;
goto done;
}
volatile int s = 2;
rval = s * 5;
done:
return rval;
}
and disassembly:
_foo:
# BB#0:
push {r4, r7, lr}
add r7, sp, #4
sub sp, #20
movs r2, #0
movt r2, #0
str r0, [r7, #-8]
str r1, [r7, #-12]
ldr r0, [r7, #-8]
adds r0, #3
bic r0, r0, #3
mov r1, sp
subs r0, r1, r0
mov sp, r0
str r0, [r7, #-16]
str r2, [r7, #-20]
ldr r0, [r7, #-12]
cmp r0, #0
beq LBB0_2
# BB#1:
movs r0, #1
movt r0, #0
str r0, [r7, #-20]
b LBB0_3
LBB0_2:
movs r0, #2
movt r0, #0
str r0, [r7, #-24]
ldr r0, [r7, #-24]
movs r1, #5
movt r1, #0
muls r0, r1, r0
str r0, [r7, #-20]
LBB0_3:
ldr r0, [r7, #-20]
subs r4, r7, #4
mov sp, r4
pop {r4, r7, pc}