Is the compiler using relative addresses for stack variables in a scope? - c

I've programmed a task scheduler for an STM32 (ARM) MCU which enables multitasking. I can create concurrent tasks, each with its own allocated stack, as this is the most straightforward way to do it. These stacks are allocated on the heap. Since the stacks are fixed in size, which wastes memory, I'm planning to add dynamic stack reallocation.
My question is: if I reallocate a task's stack at another memory address, copy the stack contents, and update the task (i.e. its stack pointer), can the task continue running without problems, provided the task's code never uses an absolute address into the stack? Does the C compiler use only relative addressing within the stack, even when I take the address of a variable in the scope?
Example:
void A() {
int i = 0;
int* iPtr = &i;
}
In the case above, will the value of iPtr be a fixed absolute address, or something relative like currAddress - 4? (Assuming I'm not passing it to another function, just using it inside the scope.)
So does the compiler ever use a relative address with an offset in this scope, or does it just use the direct address of the variable?
If addressing is relative, then I can freely reallocate the stack to another memory region; if not, then I can't, which would be a problem.
I haven't found any good documentation about this, so pointers to documentation would also be appreciated!
If this is not the right approach, how is stack reallocation for tasks usually implemented?

Sorry to say it, but pointers are always absolute. You can confirm this by looking at the assembly of a small program that takes a pointer to a local variable and passes it to another function:
void set(int *p) {
*p = 42;
}
void bar(void) {
int l;
set(&l);
}
compiles to
set:
movs r3, #42
str r3, [r0] // store to absolute address in r0
bx lr
bar:
push {lr}
sub sp, sp, #12
add r0, sp, #4 // form absolute pointer to stack location [sp+4]
bl set
add sp, sp, #12
ldr pc, [sp], #4
https://godbolt.org/z/7EWoK3eda
This really couldn't work any other way: the function set has to compile to code that will do the right thing whether it is passed a pointer to the stack, or static memory, or heap. Also, an sp-relative pointer wouldn't even make sense when passed to a function with a different stack frame and hence a different value for sp.
So if you try to relocate the stack out from under a C program, it is almost certain to break.
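A hosted-C sketch of why (hypothetical struct and variable names, not from the scheduler in question): if the stack holds a pointer into itself, a byte-for-byte copy leaves that pointer aimed at the old location.
#include <stdio.h>
#include <string.h>

/* Minimal sketch: a fake "stack frame" that contains a pointer into itself.
   A byte-for-byte copy does not fix up that pointer, so it keeps referring
   to the old frame. */
struct fake_frame {
    int  i;       /* like "int i" in A()    */
    int *iPtr;    /* like "int *iPtr = &i"  */
};

int main(void)
{
    struct fake_frame old_frame, new_frame;

    old_frame.i = 0;
    old_frame.iPtr = &old_frame.i;             /* absolute address stored */

    memcpy(&new_frame, &old_frame, sizeof old_frame);

    /* new_frame.iPtr still points at old_frame.i, not new_frame.i. */
    printf("%s\n", new_frame.iPtr == &old_frame.i
                       ? "still points at the old copy"
                       : "follows the new copy");
    return 0;
}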
Normally, the way you avoid wasting a ton of memory on stack is with virtual memory: you allocate a large block of virtual address space for the task's stack. Then you map physical memory into only a small part of it. As the stack grows and touches previously unused pages, the MMU signals a page fault, which your OS handles by allocating and mapping another page of physical memory. Because the mapping of virtual to physical pages is arbitrary, the stack remains at a fixed virtual address whereas the physical memory underneath need not be contiguous. It could even be copied and remapped while the task is suspended.
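A minimal user-space sketch of that reserve-then-commit idea, assuming a POSIX system with mmap/mprotect (not available on an MMU-less Cortex-M; on a real OS the commit step happens in the page-fault handler rather than through explicit mprotect calls):
/* Sketch only: reserve a large stack region, commit pages lazily.
   Assumes POSIX mmap/mprotect and a 4 KiB page size. */
#include <stdio.h>
#include <sys/mman.h>

#define STACK_RESERVE (1024 * 1024)   /* 1 MiB of address space, no RAM yet */
#define PAGE          4096

int main(void)
{
    /* Reserve virtual address space without committing physical memory. */
    char *base = mmap(NULL, STACK_RESERVE, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    /* Commit only the topmost page to start with (stacks grow downward). */
    char *top = base + STACK_RESERVE;
    mprotect(top - PAGE, PAGE, PROT_READ | PROT_WRITE);

    /* When the stack grows past the committed region, commit another page.
       A real OS does this in its page-fault handler instead. */
    mprotect(top - 2 * PAGE, PAGE, PROT_READ | PROT_WRITE);

    printf("reserved %d bytes at %p, committed two pages below %p\n",
           STACK_RESERVE, (void *)base, (void *)top);
    return 0;
}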

Related

Where do the values of uninitialized variables come from, in practice on real CPUs?

I want to know the way variables are initialized:
#include <stdio.h>
int main( void )
{
int ghosts[3];
for (int i = 0; i < 3; i++)
printf("%d\n",ghosts[i]);
return 0;
}
This gets me random values like -12, 2631, 131 .. where did they come from?
For example with GCC on x86-64 Linux: https://godbolt.org/z/MooEE3ncc
I have a guess at the answer to my question; it could be wrong anyway:
The registers of the memory after they are 'emptied' get random voltages between 0 and 1, these values get 'rounded' to 0 or 1, and these random values depend on something?! Maybe the way registers are made? Maybe the capacity of the memory comes into play somehow? And maybe even the temperature?!!
Your computer doesn't reboot or power cycle every time you run a new program. Every bit of storage in memory or registers your program can use has a value left there by some previous instruction, either in this program or in the OS before it started this program.
If that were the case, e.g. for a microcontroller that has just powered on, then yes, each bit of storage might settle into a 0 or 1 state during the voltage fluctuations of powering on, except in storage engineered to power up in a certain state. (DRAM is more likely to be 0 on power-up, because its capacitors will have discharged.) But you'd also expect some internal CPU logic to zero things or set them to a guaranteed state before fetching and executing the first instruction from the reset vector (a memory address); system designers normally arrange for there to be ROM at that physical address, not RAM, so they can put non-random bytes of machine code there. Code that executes at that address should probably assume random values in all registers.
But you're writing a simple user-space program that runs under an OS, not the firmware for a microcontroller, embedded system, or mainstream motherboard, so power-up randomness is long in the past by the time anything loads your program.
Modern OSes zero registers on process startup, and zero memory pages allocated to user-space (including your stack space), to avoid information leaks of kernel data and data from other processes. So the values must come from something that happened earlier inside your process, probably from dynamic linker code that ran before main and used some stack space.
Reading the value of a local variable that's never been initialized or assigned is not actually undefined behaviour in this case, because the array couldn't have been declared register int ghosts[3]: that's an error (Godbolt), since ghosts[i] effectively takes its address. See (Why) is using an uninitialized variable undefined behavior? Here, all the C standard has to say is that the value is indeterminate. So it does come down to implementation details, as you expected.
When you compile without optimization, compilers don't even notice the UB because they don't track usage across C statements. (This means everything is treated somewhat like volatile, only loading values into registers as needed for a statement, then storing again.)
In the example Godbolt link I added to your question, notice that -Wall doesn't produce any warnings at -O0, and just reads from the stack memory it chose for the array without ever writing it. So your code is observing whatever stale value was in memory when the function started. (But as I said, that must have been written earlier inside this program, by C startup code or dynamic linking.)
With gcc -O2 -Wall, we get the warning we'd expect: warning: 'ghosts' is used uninitialized [-Wuninitialized], but it does still read from stack space without writing it.
Sometimes GCC will invent a 0 instead of reading uninitialized stack space, but it happens not to in this case. There's zero guarantee about how it compiles: the compiler sees the use-uninitialized "bug" and can invent any value it wants, e.g. reading some register it never wrote instead of that memory. For example, since you're calling printf, GCC could have just left ESI uninitialized between printf calls, since that's where ghosts[i] is passed as the 2nd arg in the x86-64 System V calling convention.
Most modern CPUs, including x86, don't have any "trap representations" that would make an add instruction fault, and even if they did, the C standard doesn't guarantee that an indeterminate value isn't a trap representation. But IA-64 did have a Not a Thing register result from bad speculative loads, which would trap if you tried to read it. See comments on the trap-representation Q&A, and Raymond Chen's article: Uninitialized garbage on ia64 can be deadly.
The ISO C rule that it's UB to read uninitialized variables that were candidates for register might be aimed at this, but with optimization enabled you could plausibly still run into it anyway if the address is only taken later, unless the compiler takes steps to avoid that. ISO C defect report N1208 proposes saying that an indeterminate value can be "a value that behaves as if it were a trap representation" even for types that have no trap representations. So it seems that part of the standard doesn't fully cover ISAs like IA-64, given the way real compilers can work.
Another case that's not exactly a "trap representation": note that only some object-representations (bit patterns) are valid for _Bool in mainstream ABIs, and violating that can crash your program: Does the C++ standard allow for an uninitialized bool to crash a program?
That's a C++ question, but I verified that GCC will return garbage without booleanizing it to 0/1 if you write _Bool b[2] ; return b[0]; https://godbolt.org/z/jMr98547o. I think ISO C only requires that an uninitialized object has some object-representation (bit-pattern), not that it's a valid one for this object (otherwise that would be a compiler bug). For most integer types, every bit-pattern is valid and represents an integer value. Besides reading uninitialized memory, you can cause the same problem using (unsigned char*) or memcpy to write a bad byte into a _Bool.
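For reference, the function from that Godbolt link looks like this (reading b[0] is still reading an indeterminate value; the point is only that GCC loads the stale byte without masking it down to 0 or 1):
/* From the linked example: GCC may return the raw stale byte,
   which need not be a valid 0-or-1 object representation for _Bool. */
_Bool garbage_bool(void)
{
    _Bool b[2];   /* never written */
    return b[0];  /* indeterminate value, returned as-is */
}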
An uninitialized local doesn't have "a value"
As shown in the following Q&As, when compiling with optimization, multiple reads of the same uninitialized variable can produce different results:
Is uninitialized local variable the fastest random number generator?
What happens to a declared, uninitialized variable in C? Does it have a value?
The other parts of this answer are primarily about where a value comes from in un-optimized code, when the compiler doesn't really "notice" the UB.
The registers of the memory after they are 'emptied' get random voltages between 0 and 1,
Nothing so mysterious. You are just seeing what was written to those memory locations last time they were used.
When memory is released it is not cleared or emptied. The system just knows that it's free, and the next time somebody needs memory it just gets handed over; the old contents are still there. It's like buying an old car and looking in the glove compartment: the contents are not mysterious, it's just a surprise to find a cigarette lighter and one sock.
Sometimes in a debugging environment freed memory is filled with some identifiable value so that it's easy to recognize that you are dealing with uninitialized memory, for example 0xCCCCCCCC or maybe 0xDEADBEEFDEADBEEF.
Maybe a better analogy: you are eating in a self-serve restaurant that never cleans its plates; when a customer has finished, they put their plates back on the 'free' pile. When you go to serve yourself you pick up the top plate from the free pile. You should clean the plate, otherwise you get whatever was left there by the previous customer.
I am going to use a platform where it is easy to see what is going on. Compilers and platforms work the same way independent of architecture, operating system, etc. There are exceptions of course...
In main I am going to call this function:
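A hypothetical helper showing that kind of debug poisoning (the name debug_free and the fill value are made up for illustration):
#include <stdlib.h>
#include <string.h>

/* Hypothetical debug wrapper: fill a block with a recognizable pattern
   before freeing it, so stale reads stand out in a debugger. */
static void debug_free(void *p, size_t n)
{
    if (p != NULL)
        memset(p, 0xCC, n);   /* shows up as 0xCCCCCCCC... when viewed as words */
    free(p);
}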
test();
Which is:
extern void hexstring ( unsigned int );
void test ( void )
{
unsigned int x[3];
hexstring(x[0]);
hexstring(x[1]);
hexstring(x[2]);
}
hexstring is just a printf("%08X\n", x).
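Roughly what hexstring does in a hosted build (the real one in this bare-metal example presumably prints over a serial port instead):
#include <stdio.h>

/* Print one value as eight hex digits, as described above. */
void hexstring(unsigned int x)
{
    printf("%08X\n", x);
}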
Build it (not using x86, using something that is overall easier to read for this demonstration)
test.c: In function ‘test’:
test.c:7:2: warning: ‘x[0]’ is used uninitialized in this function [-Wuninitialized]
7 | hexstring(x[0]);
| ^~~~~~~~~~~~~~~
test.c:8:2: warning: ‘x[1]’ is used uninitialized in this function [-Wuninitialized]
8 | hexstring(x[1]);
| ^~~~~~~~~~~~~~~
test.c:9:2: warning: ‘x[2]’ is used uninitialized in this function [-Wuninitialized]
9 | hexstring(x[2]);
| ^~~~~~~~~~~~~~~
The disassembly of the compiler output shows
00010134 <test>:
10134: e52de004 push {lr} ; (str lr, [sp, #-4]!)
10138: e24dd014 sub sp, sp, #20
1013c: e59d0004 ldr r0, [sp, #4]
10140: ebffffdc bl 100b8 <hexstring>
10144: e59d0008 ldr r0, [sp, #8]
10148: ebffffda bl 100b8 <hexstring>
1014c: e59d000c ldr r0, [sp, #12]
10150: e28dd014 add sp, sp, #20
10154: e49de004 pop {lr} ; (ldr lr, [sp], #4)
10158: eaffffd6 b 100b8 <hexstring>
We can see that the stack area is allocated:
10138: e24dd014 sub sp, sp, #20
But then we go right into reading and printing:
1013c: e59d0004 ldr r0, [sp, #4]
10140: ebffffdc bl 100b8 <hexstring>
So it prints whatever was on the stack. The stack is just memory with a special hardware pointer.
And we can see the other two items in the array are also read (loaded) and printed.
So whatever was in that memory at this time is what gets printed. Now the environment I am in likely zeroed the memory (including stack) before we got there:
00000000
00000000
00000000
Now I am optimizing this code to make it easier to read, which adds a few challenges.
So what if we did this:
test2();
test();
In main and:
void test2 ( void )
{
unsigned int y[3];
y[0]=1;
y[1]=2;
y[2]=3;
}
test2.c: In function ‘test2’:
test2.c:5:15: warning: variable ‘y’ set but not used [-Wunused-but-set-variable]
5 | unsigned int y[3];
|
and we get:
00000000
00000000
00000000
but we can see why:
00010124 <test>:
10124: e52de004 push {lr} ; (str lr, [sp, #-4]!)
10128: e24dd014 sub sp, sp, #20
1012c: e59d0004 ldr r0, [sp, #4]
10130: ebffffe0 bl 100b8 <hexstring>
10134: e59d0008 ldr r0, [sp, #8]
10138: ebffffde bl 100b8 <hexstring>
1013c: e59d000c ldr r0, [sp, #12]
10140: e28dd014 add sp, sp, #20
10144: e49de004 pop {lr} ; (ldr lr, [sp], #4)
10148: eaffffda b 100b8 <hexstring>
0001014c <test2>:
1014c: e12fff1e bx lr
test didn't change, but test2 is dead code, as one would expect when optimized, so it did not actually touch the stack. But what if we do this:
test2.c
void test3 ( unsigned int * );
void test2 ( void )
{
unsigned int y[3];
y[0]=1;
y[1]=2;
y[2]=3;
test3(y);
}
test3.c
void test3 ( unsigned int *x )
{
}
Now
0001014c <test2>:
1014c: e3a01001 mov r1, #1
10150: e3a02002 mov r2, #2
10154: e3a03003 mov r3, #3
10158: e52de004 push {lr} ; (str lr, [sp, #-4]!)
1015c: e24dd014 sub sp, sp, #20
10160: e28d0004 add r0, sp, #4
10164: e98d000e stmib sp, {r1, r2, r3}
10168: eb000001 bl 10174 <test3>
1016c: e28dd014 add sp, sp, #20
10170: e49df004 pop {pc} ; (ldr pc, [sp], #4)
00010174 <test3>:
10174: e12fff1e bx lr
Now test2 is actually putting stuff on the stack. The calling conventions generally require that the stack pointer is back where it started when you return, which means function a might move the pointer and read/write some data in that space, call function b which moves the pointer and reads/writes some data in its own space, and so on. When each function returns, it usually does not make sense to clean up: you just move the pointer back and return; whatever data you wrote to that memory remains.
So test2 writes a few things to its stack space and returns, and then another function is called at the same level as test2. The stack pointer is at the same address when test() is called as it was when test2() was called, in this example. So what happens?
00000001
00000002
00000003
We have managed to control what test() is printing out. Not magic.
Now rewind back to the 1960s and then work forward to the present, particularly the 1980s and later.
Memory was not always cleaned up before your program ran. As some folks here are implying, if you were doing banking in a spreadsheet, then closed that program and opened this one... back in the day... you would almost expect to see some data from that spreadsheet program: maybe the binary, maybe the data, maybe something else. Due to the nature of the operating system's use of memory, it might be a fragment of the last program you ran, a fragment of the one before that, a fragment of a program still running that just did a free(), and so on.
Naturally, once we started to get connected to each other and hackers wanted to take over and send themselves your info or do other bad things, you can see how trivial it would be to write a program to look for passwords or bank accounts or whatever.
So not only do we have protections today to prevent one program from sniffing around in another program's space, we generally assume that, before our program gets some memory that was used by some other program, it is wiped.
But if you disassemble even a simple hello-world printf program you will see that there is a fair amount of bootstrap code that runs before main() is called. As far as the operating system is concerned, all of that code is part of our one program, so even if (let's assume) memory were zeroed or cleaned before the OS loads and launches our program, before main, within our program, we are already using the stack memory to do stuff, leaving behind values that a function like test() will see.
You may find that each time you run the same binary (one compile, many runs), the "random" data is the same. You may also find that if you add some other shared library call or something else to the overall program, that shared library causes extra pre-main code to run to set up the shared code, or the program takes different paths because of a side effect of the change to the overall binary, and now the random values are different but still consistent from run to run.
There are also reasons why the values could differ between runs of the same binary.
There is no ghost in the machine though. The stack is just memory. It is not uncommon for a computer to wipe that memory once at boot, if for no other reason than to set the ECC bits. After that, the memory gets reused and reused and reused. Depending on the overall architecture of the operating system, how the compiler builds your application and shared libraries, and other factors, what happens to be in memory where the stack pointer points when your program runs and you read before you write (as a rule, never read before you write, and it is good that compilers now throw warnings) is not necessarily random: the specific chain of events that got you there was controlled, not random, but the resulting values are not something you as the programmer could have predicted. This is true whether you do it at the main() level as you have, or seventeen levels of nested function calls deep; it is still just some memory that may or may not contain stuff from before you got there. Even if the bootloader zeros memory, that zero is still a value written and left behind by something that ran before you.
No doubt there are compilers with stack-related features that do extra work, like zeroing the frame at the end of the call or up front, for security or some other reason someone thought of.
I would assume today that when an operating system like Windows, Linux, or macOS runs your program, it is not giving you access to stale memory values from some other program that came before (a spreadsheet with my banking information, email, passwords, etc.). But you can trivially write a program to try (just malloc() and print, or do the same thing you did but bigger, to look at the stack). I also assume that program A does not have a way to get into the memory of program B that is running concurrently, at least not at the application level, without hacking (malloc() and print is not hacking in my use of the term).
The array ghosts is uninitialized, and because it was declared inside of a function and is not static (formally, it has automatic storage duration), its values are indeterminate.
This means that you could read any value, and there's no guarantee of any particular value.

Determining return address of function on ARM Cortex-M series

I want to determine the return address of a function in Keil. I opened the disassembly view in debug mode in Keil uVision, which shows some assembly code (screenshot not reproduced here).
My intention is to inject simple binary code into the microcontroller using a buffer overflow. See: Buffer overflow
I want to determine the return address of the "test" function. Is it necessary to know how to read assembly code, or is there any trick to find the return address?
I am a newbie to assembly.
R14, also known as LR, holds the return address. You can see it on the left in the picture: it is 0x08000287.
When a function is called, R14 will be overwritten with the address following the call ("BL" or "BLX") instruction. If that function doesn't call any other functions, R14 will often be left holding the return address for its duration. Further, if the function tail-calls another function, the tail call may be replaced with a branch ("B" or "BX"), with R14 still holding the return address of the original caller. If a function makes a non-tail call to another function, it will be necessary to save R14 somewhere (typically the stack, but possibly a callee-saved register) at some point before that, and retrieve that value at some later time; but if optimizations are enabled, the location where R14 is saved will generally be unpredictable.
Some compilers may have a mode that stacks things consistently enough to be usable, but such code will be very compiler-dependent. The technique most likely to succeed may be something like:
extern int getStackAddress(uint8_t **addr); // Always returns zero
void myFunction(...whatever...)
{
uint8_t *returnAddress;
if (getStackAddress(&returnAddress)) return; // Put this first.
}
where getStackAddress would be a machine-code function that stores R14 to the address in R0, loads R0 with zero, and then branches to R14. There are relatively few code sequences that would be likely to follow that call, and if code examines the instructions at the address stored in returnAddress and recognizes one of those sequences, it would know that the return address for myFunction is stored in the spot appropriate for that sequence. For example, if it sees:
tst r0,r0
beq ...
pop {r0,pc}
It would know that the caller's address is second on the stack. Likewise if it sees:
cmp r0,#0
bne somewhere ; compute "somewhere" from the branch's encoded offset
...
somewhere:
pop {r0,r1,r2,r4,r5,pc}
then it would know that the caller's address is sixth.
There are a few instructions compilers could use to test a register against zero, and some compilers might use beq while others use bne, but for the code above compilers would be likely to use one of the patterns shown, so counting how many register bits are set in the pop instruction would reveal the whereabouts of the return address on the stack. One wouldn't know until runtime whether this test would actually work, but in cases where it claims to identify the return address, it should actually be correct.
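A hedged sketch of the getStackAddress helper described above, assuming GCC or Clang targeting ARM; the register choreography matches the description (store R14 through R0, return zero), though a production version would more likely live in a standalone assembly file:
#include <stdint.h>

/* Store the caller's return address (LR/R14) through addr (passed in R0),
   then return 0.  "naked" keeps the compiler from emitting a prologue that
   would disturb LR before we read it. */
__attribute__((naked)) int getStackAddress(uint8_t **addr)
{
    __asm__ volatile(
        "mov  r1, lr   \n"   /* copy LR (valid even on Thumb-1/Cortex-M0) */
        "str  r1, [r0] \n"   /* *addr = return address                    */
        "movs r0, #0   \n"   /* always return zero                        */
        "bx   lr       \n"   /* return to caller                          */
    );
}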
You can find all the answers in the Cortex-M documentation
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0337h/Chdedegj.html

How assembly accesses/stores variables on the stack

In assembly you can store data in registers or on the stack. Only the top of the stack can be accessed at any given moment (right?). Consider the following C code:
main(){
int x=2;
func();
}
func( int x ){
int i;
char a;
}
Upon calling func() the following is pushed onto the stack (consider a 32bit system):
variable x (4 bytes, pushed by main)
<RETURN ADDRESS> (4 bytes pushed by main?)
<BASE POINTER> (4 bytes pushed by func())
variable i (4 bytes, pushed by func())
variable a (1 byte, pushed by func())
I have the following questions:
In C code you can access a local variable from anywhere inside the function, but in assembly you can only access the top of the stack. The C code is translated into machine code (assembly being the readable form of it). So how does assembly support reading variables that are not on top of the stack?
Did I leave out anything that would also be pushed to the stack in my example?
In assembly, if you push a char or an int onto the stack, how does it determine whether it needs to push 4 bytes or 1 byte? It uses the same operation (push), right?
Thanks in advance
Gr. Maricruzz
The stack pointer at the beginning of the function is put into a register, and then the variables/arguments are accessed via this base address plus the offset for the variable.
If you want to see the code, instead of creating object files, let the compiler stop at creating assembler files. Then you can see exactly how it works. (Of course, that requires you to have a valid C program, unlike the one you have in the question now.)
The compiler is generating the assembly. Each instruction set may differ, but at the end of the day the stack pointer is just a register holding an address in memory. The compiler knows the whole scope of the function it is creating and how far down the stack each local data item lives, so it creates the appropriate code for that instruction set to access those local items.
On some instruction sets you need to make a copy of the stack pointer, or do math with the stack pointer as an operand and some other register as the result, and then use that result (stack pointer + 8 words, for example) as the memory address you access. Other instruction sets have an addressing mode where the load or store itself applies an offset to the stack pointer; the math is done as part of the instruction's execution, and you don't need an intermediate register.
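A small illustration of that offset-based access (hypothetical function; the assembly in the comments is the general shape a compiler might emit, as in the listings above, not the output of any particular compiler):
/* Three locals live at different offsets from the stack pointer.
   Reaching b does not require popping a: the compiler just emits a load
   with an immediate offset, e.g. "ldr r0, [sp, #4]" on ARM. */
int sum3(void)
{
    int a = 1;   /* e.g. at [sp, #0] */
    int b = 2;   /* e.g. at [sp, #4] */
    int c = 3;   /* e.g. at [sp, #8] */
    return a + b + c;
}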
Only the top of the stack can be accessed at any given moment (right?)
No, generally the ISA has instructions to access other elements on the stack as well. That is, accessing elements on the stack is not limited to push and pop like operations; typically you can just mov things back and forth between a stack location and a register.
Assembly can access any memory by address (just like C).
Simple, unoptimized programs put all local variables on the stack at the start of the function, so a variable's address is the address of the execution frame plus some offset.
Then the program can simply use push and pop to store additional values (e.g. intermediate results of an expression) on the top of the stack.
Summary:
There is a register (ESP in x86) pointing to the top of the stack
push stores a value at the new top of the stack and adjusts this register (on x86 the stack grows downward, so push decreases ESP)
pop loads the value from the top of the stack and adjusts the register the other way (increasing ESP on x86)
mov moves data between memory and registers and does not touch the stack register (ESP)

Arm assembly - multiple push/pop order and SP

I've seen a notation for pushing/popping multiple registers in the same line, e.g.:
push {fp, lr}
I couldn't find out which is pushed first - fp or lr?
An additional question - does SP point to the last occupied address in the stack or the first free one?
From the ARM ARM:
The registers are stored in sequence, the lowest-numbered register to the lowest memory address (start_address), through to the highest-numbered register to the highest memory address (end_address)
On ARM, the stack pointer normally points to the last occupied address on the stack. For example, when setting up the initial stack pointer, you normally initialize it with the address one past the end of the stack.
PUSH is just a synonym for STMDB using sp as the base register. The DB indicates the 'decrement-before' addressing mode.
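A conceptual C model of that behaviour (made-up names; a full-descending stack as ARM uses): push {fp, lr} decrements sp by two words and then stores the lower-numbered register, fp (r11), at the lower address and lr (r14) above it.
/* Conceptual model of "push {fp, lr}" (STMDB sp!, {fp, lr}). */
unsigned int stack_mem[1024];
unsigned int *sp = &stack_mem[1024];   /* full-descending: starts one past the end */

void push_fp_lr(unsigned int fp, unsigned int lr)
{
    sp -= 2;       /* decrement before storing */
    sp[0] = fp;    /* lowest-numbered register at the lowest address  */
    sp[1] = lr;    /* highest-numbered register at the highest address */
}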

ARM: link register and frame pointer

I'm trying to understand how the link register and the frame pointer work in ARM. I've been to a couple of sites, and I wanted to confirm my understanding.
Suppose I had the following code:
int foo(void)
{
// ..
bar();
// (A)
// ..
}
int bar(void)
{
// (B)
int b1;
// ..
// (C)
baz();
// (D)
}
int baz(void)
{
// (E)
int a;
int b;
// (F)
}
and I call foo(). Would the link register contain the address of the code at point (A), and the frame pointer contain the address of the code at point (B)? And could the stack pointer be anywhere inside bar(), after all the locals have been declared?
Some register calling conventions are dependent on the ABI (Application Binary Interface). The FP is required in the APCS standard and not in the newer AAPCS (2003). For the AAPCS (GCC 5.0+) the FP does not have to be used but certainly can be; debug info is annotated with stack and frame pointer use for stack tracing and unwinding code with the AAPCS. If a function is static, a compiler really doesn't have to adhere to any conventions.
Generally all ARM registers are general purpose. The lr (link register, also R14) and pc (program counter, also R15) are special and enshrined in the instruction set. You are correct that the lr would point to A. The pc and lr are related: one is "where you are" and the other is "where you were". They are the code aspect of a function.
Typically, we have the sp (stack pointer, R13) and the fp (frame pointer, R11). These two are also related. This
Microsoft layout does a good job describing things. The stack is used to store temporary data or locals in your function. Any variables in foo() and bar() are stored here, on the stack or in available registers. The fp keeps track of the variables from function to function. It is a frame or picture window on the stack for that function. The ABI defines the layout of this frame. Typically the lr and other registers are saved here behind the scenes by the compiler, as well as the previous value of fp. This makes a linked list of stack frames, and if you want you can trace it all the way back to main(). The root is fp, which points to one stack frame (like a struct), with one member of the struct being the previous fp. You can follow the list until the final fp, which is normally NULL.
So the sp is where the stack is and the fp is where the stack was, a lot like the pc and lr. Each old lr (link register) is stored in the frame referenced by the old fp (frame pointer). The sp and fp are a data aspect of functions.
Your point B is the active pc and sp. Point A is actually the fp and lr; unless you call yet another function, in which case the compiler may set up the fp to point to the data in B.
Following is some ARM assembler that might demonstrate how this all works. It will differ depending on how the compiler optimizes, but it should give an idea:
; Prologue - setup
mov ip, sp ; get a copy of sp.
stmdb sp!, {fp, ip, lr, pc} ; Save the frame on the stack. See Addendum
sub fp, ip, #4 ; Set the new frame pointer.
...
; Maybe other functions called here.
; Older caller return lr stored in stack frame.
bl baz
...
; Epilogue - return
ldm sp, {fp, sp, lr} ; restore stack, frame pointer and old link.
... ; maybe more stuff here.
bx lr ; return.
This is what foo() would look like. If foo() didn't call bar(), the compiler would do a leaf optimization and wouldn't need to save the frame; only the bx lr would be needed. Most likely this is why you are confused by web examples: it is not always the same.
The take-away should be:
pc and lr are related code registers. One is "where you are", the other is "where you were".
sp and fp are related local data registers. One is "where the local data is", the other is "where the last local data is".
They work together, along with parameter passing, to create the function-call machinery.
It is hard to describe a general case because we want compilers to be as fast as possible, so they use every trick they can.
These concepts are generic to all CPUs and compiled languages, although the details can vary. The use of the link register and frame pointer is part of the function prologue and epilogue, and if you understood everything above, you know how a stack overflow works on ARM.
See also: ARM calling convention.
MSDN ARM stack article
University of Cambridge APCS overview
ARM stack trace blog
Apple ABI link
The basic frame layout is:
fp[-0] saved pc, where we stored this frame.
fp[-1] saved lr, the return address for this function.
fp[-2] previous sp, before this function eats stack.
fp[-3] previous fp, the last stack frame.
many optional registers...
An ABI may use other values, but the above are typical for most setups. The indexes above are in units of 32-bit words, as all ARM registers are 32 bits; if you are byte-centric, multiply by four. The frame is also aligned to at least four bytes.
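Given that layout, a hedged sketch of walking the frame chain (it assumes the APCS-style frame above; AAPCS code built without frame pointers has no such chain, and real unwinders use debug/unwind info instead):
#include <stdint.h>
#include <stdio.h>

/* Follow the linked list of frames described above: fp[-1] holds the saved
   lr (the function's return address) and fp[-3] the previous fp.  The walk
   ends when fp is NULL. */
void walk_frames(unsigned int *fp)
{
    while (fp != NULL) {
        unsigned int caller = fp[-1];                    /* saved lr */
        printf("return address: 0x%08X\n", caller);
        fp = (unsigned int *)(uintptr_t)fp[-3];          /* previous fp */
    }
}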
Addendum: This is not an error in the assembler; it is normal. An explanation is in the ARM generated prologs question.
Disclaimer: I think this is roughly right; please correct as needed.
As indicated elsewhere in this Q&A, be aware that the compiler may not be required by the ABI to generate code that uses frame pointers. Full frames on the call stack often force information to be stored that the function itself never needs.
If the compiler options call for 'no frames' (e.g. GCC's -fomit-frame-pointer), then the compiler can generate smaller code that keeps the call-stack data smaller. The calling function is compiled to store only the needed calling info on the stack, and the called function is compiled to pop only the needed calling information from the stack.
This saves execution time and stack space - but it makes tracing backwards in the calling code extremely hard (I gave up trying to...)
Info about the size and shape of the calling information on the stack is known only to the compiler, and that info is thrown away after compile time.
