LDMFD affects R13 oddly - arm

We are using an ARM9 with ucos. The last instruction of the OS_CPU_ARM_ExceptHndlr_BrkTask common porting function has strange behavior in our system.
Instruction: LDMFD SP!,{R0-R12,LR,PC}^
Suppose SP is 0x10002000 and the following 15 DWORDs (which will be copied to R0-R12, LR, PC) hold the values 1 to 15. We find that the PC (R15) changes and execution jumps to 15, but the SP (R13) changes to a strange value (an address far outside the stack memory space). I expected it to become 0x1000203C (0x10002000 + 4*15).
Why is R13 changed this way?

This instruction loads r14, like the other registers, from the stack. The write to PC causes the jump; this is not a branch-and-link, so no return address is placed in the link register.
Additionally, because of the ^ together with PC in the register list, this instruction is actually an exception return: CPSR is restored from SPSR, which switches the processor mode. R13 and R14 are banked per mode, so after the exception return you may be looking at a different physical SP (and LR) than the ones the LDMFD loaded and wrote back.
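A minimal sketch of the situation, assuming the handler runs in an exception mode such as IRQ and using the numbers from the question:

; running in an exception mode: "SP" here is the banked SP of that mode,
; e.g. SP_irq = 0x10002000
LDMFD   SP!, {R0-R12, LR, PC}^   ; loads 15 words and writes PC; because PC is
                                 ; in the list and ^ is present, SPSR is also
                                 ; copied into CPSR, switching the mode
; execution continues at the loaded PC in the restored mode, where "SP" names
; a different banked register (e.g. SP_usr), so a debugger now shows that
; mode's SP rather than SP_irq + 0x3C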

Related

Determining return address of function on ARM Cortex-M series

I want to determine the return address of a function in Keil. I opened the disassembly window in debug mode in Keil uVision, which shows some assembly code.
My intention is to inject simple binary code into the microcontroller by exploiting a buffer overflow (see: buffer overflow).
I want to determine the return address of the "test" function. Is it necessary to know how to read assembly code, or is there a trick to find the return address?
I am a newbie to assembly.
R14, also known as LR, holds the return address. You can see it in the register window on the left of the picture: it is 0x08000287.
When a function is called, R14 will be overwritten with the address following the call ("BL" or "BLX") instruction. If that function doesn't call any other functions, R14 will often be left holding the return address for its duration. Further, if the function tail-calls another function, the tail call may be replaced with a branch ("B" or "BX"), with R14 holding the return address of the original caller. If a function makes a non-tail call to another function, it will be necessary to save R14 "somewhere" (typically the stack, but possibly to another previously-used caller-saved register) at some time before that, and retrieve that value from the stack at some later time, but if optimizations are enabled the location where R14 is saved will generally be unpredictable.
Some compilers may have a mode that would stack things consistently enough to be usable, but code will be very compiler-dependent. The technique most likely to be successful may be to do something like:
extern int getStackAddress(uint8_t **addr); // Always returns zero
void myFunction(...whatever...)
{
    uint8_t *returnAddress;
    if (getStackAddress(&returnAddress)) return; // Put this first.
}
where getStackAddress would be a machine-code function that stores R14 to the address in R0, loads R0 with zero, and then branches to R14. There are relatively few code sequences that would be likely to follow such a call, and if code then examines the instructions at the address stored in returnAddress and recognizes one of these sequences, it will know that the return address for myFunction is stored in the spot appropriate for that sequence. For example, if it sees:
tst r0,r0
beq ...
pop {r0,pc}
It would know that the caller's address is second on the stack. Likewise if it sees:
cmp r0,#0
bne somewhere
somewhere: ; compute this address from the offset encoded in the bne
pop {r0,r1,r2,r4,r5,pc}
then it would know that the caller's address is sixth.
There are a few instructions compilers could use to test a register against zero, and some compilers might use beq while others use bne, but for the code above a compiler would be likely to produce one of the patterns above, so counting how many bits are set in the pop instruction's register list reveals where the return address sits on the stack. One wouldn't know until runtime whether this test would actually work, but in cases where it claims to identify the return address it should be correct.
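For completeness, the getStackAddress helper described above could be written roughly like this (a minimal sketch under the stated assumptions; the symbol name comes from the C snippet, and the exact directives depend on your assembler):

getStackAddress
    mov  r1, lr      ; copy the caller's return address out of R14
    str  r1, [r0]    ; store it through the pointer passed in R0
    movs r0, #0      ; always return zero
    bx   lr          ; branch back to the caller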
You can find all the answers in the Cortex-M documentation
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0337h/Chdedegj.html

Very Basic ARM Assembly Questions (add, compare)

TLDR: What exactly does bx lr do?
I have trouble understanding these two following examples:
Add Example:
I understand that the code "add r0, r0, r1" adds r1 to r0 and stores the result in register 0. What I do not understand is how the code "bx lr" knows to return r0 without explicitly stating r0.
Compare Example:
Same here: I understand that "BGT r0_gt" checks whether r0 > r1 (based on the preceding compare) and, if true, branches to r0_gt. However, how does bx lr know how to return the correct value?
It is defined by the ABI in use; for ARM this is the EABI, which states in "5.4 Result Return":
A Fundamental Data Type that is smaller than 4 bytes is zero- or sign-extended to a word and returned in r0.
http://infocenter.arm.com/help/topic/com.arm.doc.ihi0042f/IHI0042F_aapcs.pdf
bx lr doesn't return any register at all, it just passes control over back to the caller (in the address in the lr register), without modifying any other registers than pc.
The caller then knows, based on the calling convention, that on return, the return value will be in the r0 register (depending on the exact type of the return value and the platform's calling convention).
BX simply means branch and exchange: it performs a branch and can switch between ARM and Thumb state if supported by that architecture. LR is just a shortcut for register 14, it's that simple: branch to the address in r14.
If you look at the bl instruction, you will see that r14 is set to the address of the instruction after the bl, i.e. the return address of the function call.
The pair bl something and later bx lr (mov pc,lr also works if you don't need to change state and are in ARM mode) is how you make function calls on ARM.
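A minimal sketch of that pairing, together with the r0 return-value convention discussed above (label names made up for the example):

main
    mov r0, #2        ; first argument in r0
    mov r1, #3        ; second argument in r1
    bl  add_two       ; lr (r14) = address of the next instruction
    ; back here after the bx lr; the result is in r0 by convention
    ...

add_two
    add r0, r0, r1    ; leave the result in r0, the ABI's return register
    bx  lr            ; branch back to the address held in lr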
The processor has very little concept of context (in an abstract sense). It does not know where it came from, what the registers are for, or if it is in a function call/subroutine. The higher level languages and compiler do know this, and use some common standards to make things easier.
A very small number of operations do have a special, well defined purpose. A BL instruction updates both the 'next instruction to execute' (otherwise known as PC or R15), but also magically updates R14 (the link register).
Exceptions (in ARMv7-A) swap in a few banked core registers, including the register usually used to access the stack and the link register. This means that exceptions can happen without losing track of everything else that was going on. Cortex-M does things differently, and actually uses the stack to help with the banking (setting R14 to a 'magic value' to indicate whether the most recent call was an exception or not).
Unless an instruction interacts with specific registers, CPSR in particular, it probably doesn't care about the context. Some operations (related to security) are restricted so they can only happen in privileged states; this is ultimately used to protect the operating system from user applications, but usually these relate to accessing very specific control registers.

Sorting ARM Assembly

I am a newbie. I have difficulties understanding the ARM memory map.
I have found an example of a simple sorting algorithm:
        AREA ARM, CODE, READONLY
        CODE32
        PRESERVE8
        EXPORT __sortc
; r0 = &arr[0]
; r1 = length
__sortc
        stmfd sp!, {r2-r9, lr}
        mov r4, r1              ; inner loop counter
        mov r3, r4
        sub r1, r1, #1
        mov r9, r1              ; outer loop counter
outer_loop
        mov r5, r0
        mov r4, r3
inner_loop
        ldr r6, [r5], #4
        ldr r7, [r5]
        cmp r7, r6
        ; swap without swp
        strls r6, [r5]
        strls r7, [r5, #-4]
        subs r4, r4, #1
        bne inner_loop
        subs r9, r9, #1
        bne outer_loop
        ldmfd sp!, {r2-r9, pc}^
        END
And this assembly should be called from C code like this:
#define MAX_ELEMENTS 10
extern void __sortc(int *, int);
int main()
{
    int arr[MAX_ELEMENTS] = {5, 4, 1, 3, 2, 12, 55, 64, 77, 10};
    __sortc(arr, MAX_ELEMENTS);
    return 0;
}
As far as I understand, this code creates an array of integers on the stack and calls the __sortc function, which is implemented in assembly. This function takes these values from the stack, sorts them, and puts them back on the stack. Am I right?
I wonder how I can implement this example using only assembly.
For example, defining an array of integers:
DCD 3, 7, 2, 8, 5, 7, 2, 6
BTW, where are variables declared with DCD stored in memory?
How can I operate on values declared in this way? Please explain how I can implement this using assembly only, without any C code, even without a stack, just with raw data.
I am writing for the ARM7TDMI architecture.
AREA ARM, CODE, READONLY - this marks the start of a code section in the source.
With a similar AREA myData, DATA, READWRITE you can start a section where it's possible to define data like data1 DCD 1,2,3; this compiles to three words with the values 1, 2, 3 at consecutive addresses, with the label data1 pointing to the first byte of the first word. (Some AREA docs can be found through Google.)
Where these will land in physical memory after loading the executable depends on how the executable is linked. The linker uses a script file which helps it decide which AREA to put where, and how to create the symbol table for the dynamic relocation done by the executable loader; by editing the linker script you can adjust where the code and data land, but normally you don't need to do that.
The linker script and assembler directives can also affect the size of the available stack, and where it is mapped in physical memory.
So for your particular platform: google for its memory mappings and check the linker script (for a start, just use the linker option to produce a .map file to see where the code and data are targeted to land).
So you can either declare that array in some data area and then, to work with it, load the address of the symbol data1 into a register ("load address of data1") and use that to fetch memory content from that address.
Or you can first put all the numbers onto the stack (which is probably set to something reasonable by the OS loader of your executable), and use the stack pointer in your code to access the numbers there.
You can even DCD some values into the CODE area, so those words will end up between the instructions, in memory mapped as read-only by the executable loader. You can read those data, but writing to them will likely cause a crash. And of course you shouldn't execute them as instructions by accident (by forgetting to put some return/jump instruction ahead of the DCD).
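The first option above (load the address of data1, then access memory through it) could look roughly like this (a sketch using the data1 example from earlier; ldr rX, =label is the assembler's load-address pseudo-instruction):

    ldr r0, =data1    ; r0 = address of the first word of data1
    ldr r1, [r0]      ; r1 = first value (1)
    ldr r2, [r0, #4]  ; r2 = second value (2)
    str r2, [r0]      ; the AREA is READWRITE, so stores work too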
without stack
Well, this one is tricky: you have to be careful not to use any calls etc., to keep interrupts disabled, and so on; basically avoid anything that needs a stack.
When people code a bootloader, they usually set up some temporary stack ASAP in the first few instructions, so they can use basic stack functionality before setting up the whole environment properly or loading an OS. Space for that temporary stack is often reserved somewhere in/after the code, or in an unused memory region known from the machine state defined after reset.
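A rough sketch of that "temporary stack in the first few instructions" idea (hypothetical symbol names; armasm-style directives assumed):

reset_handler
    ldr sp, =temp_stack_top   ; point sp at reserved scratch RAM as soon as possible
    bl  early_init            ; from here on, calls and pushes are safe
    ; ... continue with the normal startup ...

temp_stack
    SPACE 256                 ; reserve 256 bytes somewhere writeable
temp_stack_top                ; stack is full-descending, so start at the top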
If you are down on the metal, without an OS, usually all memory is writeable after reset, so you can intermix code and data as you wish (just jump around the data rather than executing it by accident), without using AREA definitions.
But you should make up your mind whether you are creating an application in the user space of some OS (so you have things like the stack and data areas well defined and can use them for your convenience), or boot loader code which has to set all of that up for itself (more difficult, so I would suggest at first staying in user land of some OS; having a C wrapper around your code with the C library initialized is often handy too, so you can call things like printf from ASM for convenient output).
How can I operate on values declared in this way
It doesn't matter in machine code which way the values were declared. All that matters is whether you have the address of the memory and know how the data is laid out there. Then you can work with it in any way you want, using any instructions you want. So the body of that asm example will not change: if you allocate the data in ASM, you just pass the pointer to it as an argument, like the C code does.
edit: here is an example done blindly without testing; it may need further syntax fixes to work for the OP (or maybe there's even some bug and it will not work at all; let me know in the comments):
        AREA myData, DATA, READWRITE
SortArray
        DCD 5, 4, 1, 3, 2, 12, 55, 64, 77, 10
SortArrayEnd

        AREA ARM, CODE, READONLY
        CODE32
        PRESERVE8
        EXPORT __sortasmarray
__sortasmarray
        ; if "add r0, pc, #SortArray" fails (code too far in memory from array)
        ; then this looks like some heavy weight way of loading any address
        ; ldr r0, =SortArray
        ; ldr r1, =SortArrayEnd
        add r0, pc, #SortArray      ; address of array
        ; calculate array size from address of end
        ; (as I couldn't find an example of something like "equ $-SortArray")
        add r1, pc, #SortArrayEnd
        sub r1, r1, r0
        mov r1, r1, lsr #2
        ; do a direct jump instead of "bl", so __sortc returning
        ; to lr will actually return to the caller of this
        b __sortc
        ; ... rest of your __sortc assembly without change
You can call it from C code as:
extern void __sortasmarray();
int main()
{
    __sortasmarray();
    return 0;
}
I used, among others, this Introducing ARM assembly language to refresh my memory of ARM asm, but I'm still worried this may not work as-is.
As you can see, I didn't change anything in __sortc, because there's no difference between accessing stack memory and "dcd" memory: it's the same computer memory. Once you have the address of a particular word, you can ldr/str its value using that address. __sortc receives the address of the first word of the array to sort in both cases; from there on it's just memory to it, without any context of how that memory was defined in the source, allocated, initialized, etc. As long as it's writeable, it's fine for __sortc.
So the only "dcd"-related thing from me is loading the array address, and a quick search for ARM examples shows it may be done in several ways: this add rX, pc, #label way is optimal, but does it work only within a +-4k range? There's also the pseudo-instruction ADR rX, label doing the same thing, maybe switching to another form in case of range problems? For any range it looks like the ldr rX, =label form is used, although I'm not sure whether it's a pseudo-instruction or how it works; check some tutorials and disassemble the machine code to see how it was compiled.
It's up to you to learn all the ARM assembly peculiarities and how to load addresses of arrays; I don't need ARM ASM at the moment, so I didn't dig into those details.
And there should be some equ way to define the length of the array, instead of calculating it in code from the end address, but I couldn't find an example, and I'm not going to read the full assembler docs to learn about all its directives (in gas I think ArrayLength equ ((.-SortArray)/4) would work).
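For reference, the address-loading forms discussed above look like this (a sketch; the exact range limits and expansions depend on the instruction set and assembler):

    add r0, pc, #offset   ; raw form: only works while the label is within
                          ; reach of the immediate offset from pc
    adr r0, SortArray     ; pseudo-instruction: the assembler computes the
                          ; pc-relative offset for you (short range)
    ldr r0, =SortArray    ; literal-pool form: the assembler places the full
                          ; address in a nearby pool and loads it (any range)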

Why is SP (apparently) stored on exception entry on Cortex-M3?

I am using a TI LM3S811 (an older Cortex-M3) with the SysTick interrupt configured to trigger at 10Hz. This is the body of the ISR:
void SysTick_Handler(void)
{
    __asm__ volatile("sub r4, r4, #32\r\n");
}
This produces the following assembly with -O0 and -fomit-frame-pointer with gcc-4.9.3. The STKALIGN bit is 0, so stacks are 4-byte aligned.
00000138 <SysTick_Handler>:
138: 4668 mov r0, sp
13a: f020 0107 bic.w r1, r0, #7
13e: 468d mov sp, r1
140: b401 push {r0}
142: f1ad 0420 sub.w r4, r4, #32
146: f85d 0b04 ldr.w r0, [sp], #4
14a: 4685 mov sp, r0
14c: 4770 bx lr
14e: bf00 nop
I don't understand what's going on with r0 in the listing above. Specifically:
1) It seems like we're clearing the lower 3 bits of SP and then storing the original SP on the (re-aligned) stack. Is that to maintain 8-byte alignment? Or is it something else?
2) The exception exit procedure is equally confusing. From my limited understanding of ARM assembly, it does something like this:
R0 = *SP; SP = SP + 4;
followed by moving R0 back into SP, which seems to undo the manipulations from entry.
3) Why is there a nop instruction after the unconditional branch (at 0x14E)?
The ARM Procedure Call Standard and the C ABI expect an 8-byte (64-bit) alignment of the stack. As an interrupt might occur after pushing/popping a single word, it is not guaranteed that the stack is correctly aligned on interrupt entry.
The STKALIGN bit, if set (the default), makes the hardware align the stack automatically by conditionally pushing an extra (dummy) word onto the stack.
The interrupt attribute on a function, on the other hand, tells gcc that the stack might be misaligned, so it adds this pre-/postamble which enforces the alignment.
So, both actually do the same; one in hardware, one in software. If you can live with a word-aligned stack only, you should remove the interrupt attribute from the function declarations and clear the STKALIGN bit.
Make sure such a "misaligned" (word-aligned only) stack is not a problem (I would not expect any, as this is a pure 32-bit CPU). On the other hand, you should leave it as-is unless you really need to save that extra conditional(!) clock and word (very unlikely).
Warning: According to the ARM Architecture Reference Manual, setting STKALIGN == 0 is deprecated. Briefly: do not set this bit to 0!
Since you're using -O0, you should expect lots of redundant and useless code. The general way in which a compiler works is to generate code with the full generality of everything that might be used anywhere in the program, and then rely on the optimizer to get rid of things that are unneeded.
Yes, this is doing 8-byte alignment. It's also allocating a stack frame to hold local variables, even though you have none.
The exit is the reverse, deallocating the stack frame.
The nop at the end is to maintain 4-byte alignment in the code, as you might want to link with non-thumb code at some point.
If you enable optimization, it will eliminate the stack frame (as it's unneeded) and the code will become much simpler.
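For reference, here is the entry/exit sequence from the listing above with the role of each instruction spelled out (annotations added here, not part of the original disassembly):

    mov   r0, sp          ; r0 = SP as it was on exception entry
    bic.w r1, r0, #7      ; clear the low 3 bits: an 8-byte-aligned value
    mov   sp, r1          ; switch to the aligned stack
    push  {r0}            ; save the original SP on the aligned stack
    sub.w r4, r4, #32     ; the handler body
    ldr.w r0, [sp], #4    ; pop the saved original SP back into r0
    mov   sp, r0          ; restore the pre-alignment SP
    bx    lr              ; exception return (lr holds an EXC_RETURN value)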

ARM GCC generated function prologs

I have noticed that ARM toolchains can generate different function prologs. Actually, I saw two object files (vmlinux) with completely different function prologs:
The first case looks like:
push {some registers maybe, fp, lr} (lr omitted in leaf functions)
The second case looks like:
push {some registers maybe, fp, sp, lr, pc} (I may have the order confused)
So as I see it, the second one additionally pushes pc and sp. I also saw some comments in the crash utility (kdump project) stating that a kernel stack frame should have the format {..., fp, sp, lr, pc}, which confuses me even more, because I see that in some cases it is not true.
1.) Am I right that some extra gcc flags are needed for additionally pushing pc and sp in the function prolog? If yes, what are they?
2.) What is this used for? Basically, as I understand it, I can unwind the stack with FP and LR only, so why do I need these additional values?
3.) If this has nothing to do with compilation flags, how can I force generation of this extended function prolog, and again, what is its purpose?
Thank you.
1.) Am I right that some extra gcc flags are needed for additionally pushing pc and sp in the function prolog? If yes, what are they?
There are many gcc options that will affect stack frames (-march, -mtune, etc. may affect the instructions used, for instance). In your case, it was -mapcs-frame. Also, -fomit-frame-pointer will remove frames from leaf functions. Several static functions may be merged together into a single generated function, further reducing the number of frames. The APCS frame can cause slightly slower code but is needed for stack traces.
2.) What is this used for? Basically, as I understand it, I can unwind the stack with FP and LR only, so why do I need these additional values?
All registers that are not parameters (r0-r3) need to be saved as they need to be restored when returning to the caller. The compiler will allocate additional locals on the stack so sp will almost always change when fp changes. For why the pc is stored, see below.
3.) If this has nothing to do with compilation flags, how can I force generation of this extended function prolog, and again, what is its purpose?
It is compiler flags as you had guessed.
; Prologue - setup
        mov   ip, sp                ; get a copy of sp.
        stmfd sp!, {fp, ip, lr, pc} ; Save the frame on the stack. See Addendum
        sub   fp, ip, #4            ; Set the new frame pointer.
        ...
; Epilogue - return
        ldmfd sp, {fp, sp, lr}      ; restore stack, frame pointer and old link.
        ...                         ; maybe more stuff here.
        bx    lr                    ; return.
A typical save is stmfd sp!, {fp, ip, lr, pc} and the matching restore is ldmfd sp, {fp, sp, lr}. You can confirm this against the ABI/APCS documents. Note that there is no '!' on the restore to write back the base register: sp is instead loaded explicitly with the stored ip value (the original sp).
Also, the saved pc is not used in the epilogue; it is just discarded data on the stack. So why save it? Exception handlers (interrupts, signals or C++ exceptions) and other stack-trace mechanisms want to know who saved a frame. An ARM function always has only one prologue (one point of entry), but it may have multiple exits. In some cases a return like return function(); may actually turn into a b function in the maybe more stuff here part; this is known as a tail call. Also, when a leaf function is called in the middle of a routine and an exception occurs, the handler will see a PC within leaf, but leaf may have no call frame of its own. By saving the pc, the call frame can be examined when an exception occurs in leaf to know who really saved the stack. Tables of pc versus destructor, etc. may be stored to allow objects to be freed or to figure out how to call a signal handler. The extra pc is just plain nice when tracing a stack, and the operation is almost free thanks to pipelining.
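With the layout produced by that prologue (fp points at the saved pc, with the saved lr, old sp and old fp at fp-4, fp-8 and fp-12), a simple backtrace walk becomes possible. A minimal sketch, assuming every frame on the stack was saved by the APCS prologue above and that the chain is terminated by a zero fp:

unwind_loop
    cmp r11, #0           ; r11 (fp) == 0 marks the end of the frame chain
    beq unwind_done
    ldr r0, [r11, #-4]    ; saved lr: the return address for this frame
    ; ... record or print r0 here ...
    ldr r11, [r11, #-12]  ; saved fp: step to the caller's frame
    b   unwind_loop
unwind_done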
See also: ARM Link and frame register question for how the compiler uses these registers.
