I am trying to analyze Linux assembly stack initialization/cleanup during function call/return using this code snippet below. The uninitialized variables are intended.
#include <string.h>

#define MAX 16

typedef struct _CONTEXT {
    int arr[MAX];
    int a;
    int b;
    int c;
} CONTEXT;

void init(CONTEXT* ctx)
{
    /* note: this clears only the first element; the rest of arr stays uninitialized */
    memset(ctx->arr, 0, sizeof(ctx->arr[0]));
    ctx->a = 1;
}

void process(CONTEXT* ctx)
{
    int trash;
    int i;
    for (i = 0; i < MAX; i++)
    {
        trash = ctx->arr[i];
    }
}

int main(int argc, char *argv[])
{
    CONTEXT ctx;
    init(&ctx);
    process(&ctx);
    return 0;
}
As I learned in school, and from this lecture slide, the assembly for a function's stack initialization (style-1) should look like this:
pushq %rbp
movq %rsp, %rbp
subq $16, %rsp
movq %rdi, -8(%rbp)
...
leave
ret
But when I compile the code snippet above with gcc, the functions main and init get the same style-1 stack initialization, including the subq instruction that allocates space for the stack variables, but the function process does not get that kind of stack initialization. Instead I get this assembly (style-2):
pushq %rbp
movq %rsp, %rbp
movq %rdi, -24(%rbp)
...
popq %rbp
ret
So the questions are:
What is the compiler's policy for choosing between different function stack initializations at compile time? I didn't put any __cdecl or similar in this code, yet two different stack initialization styles show up.
How can I find out the allocated stack memory address and size when a style-2 function is entered?
What is the purpose of movq %rdi, -8(%rbp)?
Are there more stack initialization styles besides style-1 and style-2 on Linux (without explicitly using __cdecl or __stdcall)?
The way functions are compiled is highly compiler-specific, not OS-specific. I'm sure that the code generated by GCC under 32-bit Windows is similar to the code generated by GCC under 32-bit Linux, while the code generated by Sun's C compiler under 32-bit Linux will look different from the code generated by GCC. The same is true of stack initialization! Therefore there can be MANY stack initialization styles, depending on the compiler used, the compiler settings, the compiler version, internal compiler state and so on.
You are obviously running 64-bit code. Unlike 32-bit Windows (where __cdecl and __stdcall coexist), 64-bit Windows and Linux each support only a single calling convention: on 32-bit Linux it is essentially the same as __cdecl on Windows, while 64-bit Linux and 64-bit Windows each use their own (mutually different) register-based convention. This means you cannot change the calling convention of Linux or 64-bit Windows programs, because only one is supported.
The purpose of movq %rdi, -8(%rbp) is to store the argument (passed in the rdi register) on the stack. movq %rdi, -24(%rbp) does the same, except that the store lands below the current stack pointer without any subq reserving the space first; on x86-64 Linux this is fine, because the System V ABI guarantees a 128-byte "red zone" below %rsp that signal handlers and the kernel will not clobber, so a leaf function may use it freely.
The "style-2" function is exactly such a leaf function: its locals fit in the red zone, so it does not need to adjust %rsp at all.
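To address the second question for a style-2 function: since %rsp is never moved, the locals simply live in the red zone below the saved %rbp, and you can read the amount actually used straight off the most negative %rbp offset in the compiler output (gcc -S). The listing below is only a hand-written sketch of what a style-2 process could look like, not output copied from any particular gcc version; the offsets -4, -8 and -24 are assumptions for illustration.
process:                     # style-2: leaf function, frame pointer only, no subq
pushq %rbp
movq %rsp, %rbp              # %rbp == %rsp; locals live below this, in the red zone
movq %rdi, -24(%rbp)         # spill the ctx argument into the red zone
movl $0, -4(%rbp)            # i = 0
.Lloop:
movq -24(%rbp), %rax         # reload ctx
movslq -4(%rbp), %rdx        # sign-extend i to 64 bits
movl (%rax,%rdx,4), %eax     # ctx->arr[i]
movl %eax, -8(%rbp)          # trash = ctx->arr[i]
addl $1, -4(%rbp)            # i++
cmpl $15, -4(%rbp)
jle .Lloop                   # loop while i <= MAX-1
popq %rbp
ret                          # deepest offset used (-24) bounds the red-zone space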
Related
Assume this C code:
int main(){
    return 0;
}
Would look like this in assembly:
main:
pushq %rbp
movq %rsp, %rbp
movl $0, %eax
popq %rbp
ret
I know that the frame pointer fp needs to be saved at the start of a function with pushq %rbp, since it has to be restored when returning to the caller.
My question is: why do so in main? What is the parent caller of main? Isn't fp pointing to a virtual address, meaning that once main terminates the address doesn't mean anything to the next program anymore, correct?
Are fp (or even sp) values persistent between different programs and their address spaces?
what's the parent caller of main?
On Linux, main is called by __libc_start_main, which in turn is called by _start; on Windows I'm not so sure, but there is also a _start.
In fact a neat trick is to start a C program without main:
#include <stdio.h>
#include <stdlib.h>

void _start()
{
    printf("No main function!\n");
    exit(0);
}
Compile it with
gcc main.c -nostartfiles
for Windows (10, gcc 8.1.0) and Ubuntu (18.04, gcc 9.2.0), or with
clang -Wl,-e,-Wl,__start main.c
for macOS (10.14.6, Xcode 11.3).
Here is an article that talks about Linux x86 Program Start Up.
I am currently trying to get some experience with calling assembly functions from C. Therefore, I created a little program which calculates the sum of all array elements.
The C Code looks like this:
#include <stdio.h>
#include <stdint.h>

extern int32_t arrsum(int32_t* arr, int32_t length);

int main()
{
    int32_t test[] = {1, 2, 3};
    int32_t length = 3;
    int32_t sum = arrsum(test, length);
    printf("Sum of arr: %d\n", sum);
    return 0;
}
And the assembly function looks like this:
.text
.global arrsum
arrsum:
pushq %rbp
movq %rsp, %rbp
pushq %rdi
pushq %rcx
movq 24(%rbp),%rcx
#movq 16(%rbp),%rdi
xorq %rax,%rax
start_loop:
addl (%rdi),%eax
addq $4,%rdi
loop start_loop
popq %rcx
popq %rdi
movq %rbp , %rsp
popq %rbp
ret
I assumed that C obeys the calling convention and pushes all arguments onto the stack. And indeed, at 24(%rbp) I am able to find the length of the array. I expected to find the pointer to the array at 16(%rbp), but instead I just found 0x0. After some debugging I found that C didn't push the pointer at all, but instead passed it in the %rdi register.
Why does this happen? I couldn't find any information about this behavior.
The calling convention the C compiler will use depends on your system, metadata you pass to the compiler and flags. It sounds like your compiler is using the System V AMD64 calling convention detailed here: https://en.m.wikipedia.org/wiki/X86_calling_conventions (implying that you're using a Unix-like OS on a 64 bit x86 chip). Basically, in this convention most arguments go into registers because it's faster and the 64 bit x86 systems have enough registers to make this work (usually).
I assumed that C obeys the calling convention and pushes all arguments on the stack.
There is no "the" calling convention. Passing arguments via the stack is only one possible calling convention (of many). This strategy is commonly used on 32-bit systems, but even there, it is not the only way that parameters are passed.
Most 64-bit calling conventions pass the first 4–6 arguments in registers, which is generally more efficient than passing them on the stack.
Exactly which calling convention is at play here is system-dependent; your question doesn't give much of a clue whether you're using Windows or *nix, but I'm guessing that you're using *nix since the parameter is being passed in the rdi register. In that case, the compiler would be following the System V AMD64 ABI.
In the System V AMD64 calling convention, the first six integer-sized arguments (which can also be pointers) are passed in the registers RDI, RSI, RDX, RCX, R8, and R9, in that order. Each register is dedicated to a parameter, thus parameter 1 always goes into RDI, parameter 2 always goes into RSI, and so on. Floating-point parameters are instead passed via the vector registers, XMM0-XMM7. Additional parameters are passed on the stack in reverse order.
More information about this and other common calling conventions is available in the x86 tag wiki.
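Applied to the arrsum routine above: the array pointer is already in %rdi and the length in %esi on entry, so nothing needs to be read from 16(%rbp) or 24(%rbp), and %rdi/%rcx need not be saved because they are call-clobbered in this ABI. The following is an untested sketch assuming the System V AMD64 convention; like the original, it assumes length is at least 1.
.text
.global arrsum
arrsum:
pushq %rbp
movq %rsp, %rbp
movslq %esi, %rcx          # second argument: length, sign-extended into the loop counter
xorl %eax, %eax            # running 32-bit sum = 0
start_loop:
addl (%rdi), %eax          # add *arr; the pointer arrived in %rdi (first argument)
addq $4, %rdi              # advance by sizeof(int32_t)
loop start_loop            # decrement %rcx and repeat while it is non-zero
popq %rbp
ret                        # return value is in %eax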
I need to pass the address of my variable, not its value, from C to an assembly function, and I have no idea why I end up with the value instead of the address.
C code:
long n = 1,ret = 0;
fun(&n, &ret);
//the rest is omitted
Assembly code:
.globl fun
fun:
pushq %rbp
movq %rsp, %rbp
movq 16(%rbp), %rax #my n address
movq 24(%rbp), %rbx #my ret address
cmpq $0, %rax
//the rest is omitted
When I inspect %rax and %rbx with gdb, I can see that I have the values rather than the addresses in my registers:
Breakpoint 1, fun () at cw.s:6
6 movq 16(%rbp), %rax #my n address
(gdb) s
7 movq 24(%rbp), %rbx #my ret address
(gdb) s
9 cmpq $0, %rax
(gdb) p $rax
$1 = 1
(gdb) p $rbx
$2 = 0
I don't really see what's wrong with my code. I'm sure that &n makes C pass the address instead of the value. I am following the solution provided here, but with no luck.
Calling a C function in assembly
Update:
I'm running LXLE (it's a fork of Ubuntu) on AMD x86_64. The compiler used is gcc (Ubuntu 4.8.2-19ubuntu1) and GNU assembler (GNU Binutils for Ubuntu) 2.24. My makefile:
cw: cw.c cw.o
gcc cw.o cw.c -o cw
cw.o: cw.s
as -gstabs -o cw.o cw.s
What architecture are you on? What compiler generated the code for fun? Did you write it yourself?
The code is using the r* registers and your question mentions "x64", so I assume it's some amd64/x86-64/x64 architecture. You're reading values from the stack (commented as "my n/ret address"), which suggests you expect the function arguments to be there, but I'm not aware of any ABI on that CPU family that passes the first arguments to a function on the stack.
If you wrote it yourself, you need to read up on the calling convention of the ABI your operating system/compiler uses, because unless you're on a very obscure operating system it will not pass (the first few) function arguments on the stack. Most likely you're just reading stale values from the stack that happen to match wherever the calling function stored them.
If you're on Linux or most other Unix-like systems that use the SysV ABI, the first two arguments to a function will be in the rdi and rsi registers. If you're on Windows, they will be in rcx and rdx. This assumes your arguments are int/long/pointers; if the arguments are structs, floating point or such, other rules apply.
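Concretely, for the call fun(&n, &ret) in the question, &n arrives in %rdi and &ret in %rsi, so the start of fun could look like the sketch below. The dereference in the cmpq is my guess at what the omitted code intends (testing n itself rather than its address), and %rbx is saved because the SysV ABI treats it as callee-saved.
.globl fun
fun:
pushq %rbp
movq %rsp, %rbp
pushq %rbx                 # %rbx is callee-saved, preserve it for the caller
movq %rdi, %rax            # first argument:  &n
movq %rsi, %rbx            # second argument: &ret
cmpq $0, (%rax)            # test n itself, through the address held in %rax
# ... rest of the routine omitted, as in the question ...
popq %rbx
popq %rbp
ret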
I'm writing a small library intended to be used in place of libc in a small application. I've read the source of the major libc alternatives, but I am unable to get the parameter passing to work for the x86_64 architecture on Linux.
The library does not require any initialization step between _start and main. Since libc and its alternatives do use an initialization step, and my assembly knowledge is limited, I suspect my parameter passing is what's causing me trouble.
This is what I've got, which contains assembly inspired from various implementations:
.text
.global _start
_start:
/* Mark the outmost frame by clearing the frame pointer. */
xorl %ebp, %ebp
/* Pop the argument count of the stack and place it
* in the first parameter-passing register. */
popq %rdi
/* Place the argument array in the second parameter-passing register. */
movq %rsi, %rsp
/* Align the stack at a 16-byte boundary. */
andq $~15, %rsp
/* Invoke main (defined by the host program). */
call main
/* Request process termination by the kernel. This
* is x86 assembly but it works for now. */
mov %ebx, %eax
mov %eax, 1
int $80
And the entry point is the ordinary main signature: int main(int argc, char* argv[]). Environment variables etc. are not required for this particular project.
The AMD64 ABI says rdi should be used for the first parameter, and rsi for the second.
How do I correctly setup the stack and pass the parameters to main on Linux x86_64? Thanks!
References:
http://www.eglibc.org/cgi-bin/viewvc.cgi/trunk/libc/sysdeps/x86_64/elf/start.S?view=markup
http://git.uclibc.org/uClibc/tree/libc/sysdeps/linux/x86_64/crt1.S
I think you got
/* Place the argument array in the second parameter-passing register. */
movq %rsi, %rsp
wrong. It should be
movq %rsp, %rsi # move argv to rsi, the second parameter in x86_64 abi
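Putting that fix into the whole routine, a corrected _start for x86-64 Linux could look like the sketch below. It also swaps the question's int $0x80 exit for the 64-bit exit system call (number 60 via syscall); that replacement is an assumption about what the "works for now" comment intended, not something the fix above requires.
.text
.global _start
_start:
/* Mark the outermost frame by clearing the frame pointer. */
xorl %ebp, %ebp
/* Pop argc off the stack into the first parameter-passing register. */
popq %rdi
/* %rsp now points at argv[0]; pass that address as the second parameter. */
movq %rsp, %rsi
/* Align the stack to a 16-byte boundary before the call. */
andq $~15, %rsp
call main
/* Terminate: exit status is main's return value, syscall 60 is exit on x86-64. */
movl %eax, %edi
movl $60, %eax
syscall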
main is called by crt0.o; see also this question.
The kernel sets up the initial stack and process environment after execve, as specified in the (architecture-specific) ABI document; the crt0 (and related) startup code is in charge of calling main.
I've run into some strange behaviour when trying to obtain the current stack pointer in C (using inline ASM). The code looks like:
#include <stdio.h>

class os {
public:
    static void* current_stack_pointer();
};

void* os::current_stack_pointer() {
    register void *esp __asm__ ("rsp");
    return esp;
}

int main() {
    printf("%p\n", os::current_stack_pointer());
}
If I compile the code using the standard gcc options:
$ g++ test.cc -o test
It generates the following assembly:
__ZN2os21current_stack_pointerEv:
0000000000000000 pushq %rbp
0000000000000001 movq %rsp,%rbp
0000000000000004 movq %rdi,0xf8(%rbp)
0000000000000008 movq 0xe0(%rbp),%rax
000000000000000c movq %rax,%rsp
000000000000000f movq %rsp,%rax
0000000000000012 movq %rax,0xe8(%rbp)
0000000000000016 movq 0xe8(%rbp),%rax
000000000000001a movq %rax,0xf0(%rbp)
000000000000001e movq 0xf0(%rbp),%rax
0000000000000022 popq %rbp
If I run the resulting binary it crashes with a SIGILL (Illegal Instruction). However if I add a little optimisation to the compile:
$ g++ -O1 test.cc -o test
The generated assembly is much simpler:
0000000000000000 pushq %rbp
0000000000000001 movq %rsp,%rbp
0000000000000004 movq %rsp,%rax
0000000000000007 popq %rbp
0000000000000008 ret
And the code runs fine. So, to the question: is there a more stable way to get hold of the stack pointer from C code on Mac OS X? The same code has no problems on Linux.
The problem with attempting to fetch the stack pointer through a function call is that the stack pointer inside the called function points at a location that will be completely different after the function returns, so you're capturing the address of a location that is invalid after the call.
You're also assuming that the compiler added no function prologue on that platform (i.e., both your functions currently have a prologue in which the compiler sets up the function's activation record on the stack, and that changes the value of RSP you are attempting to capture).
At the very least, provided there is no compiler-added prologue, you need to adjust the captured value by the size of a pointer (add it, since the stack grows downward) in order to get the "true" address the stack will be pointing to after the return from the function call. This is because the call instruction pushes the return address onto the stack, and ret in the callee pops it back off. So inside the callee the stack pointer points, at the very least, at a return address, and that location won't be valid after the function call.
Finally, on certain platforms (unfortunately not x86), you can use the __attribute__((naked)) attribute to create a function with no prologue in gcc. Using the inline keyword to avoid a prologue is not completely reliable either, since it does not force the compiler to inline the function: at low optimization levels inlining will not occur, you'll end up with a prologue again, and the stack pointer will not be pointing to the correct location if you decide to take its address in those cases.
If you must have the value of the stack pointer, then the only reliable method is to use assembly: follow the rules of your platform's ABI, compile to an object file using an assembler, and then link that object file with the rest of the object files in your executable. You can then expose the assembly function to the rest of your code by including a function declaration in a header file. So your code could look like this (assuming you're using gcc to compile your assembly):
//get_stack_pointer.h
extern "C" void* get_stack_ptr();
//get_stack_pointer.S
.section .text
.global get_stack_ptr
get_stack_ptr:
movq %rsp, %rax
addq $8, %rax
ret
Rather than using a register variable with a constraint, you should just write some explicit inline assembler to fetch %rsp:
static void *getsp(void)
{
    void *sp;
    __asm__ __volatile__ ("movq %%rsp,%0"
                          : "=r" (sp)
                          : /* No input */);
    return sp;
}
You can also convert this to a macro using gcc statement expressions:
#define GETSP() ({void *sp; __asm__ __volatile__("movq %%rsp,%0" : "=r"(sp) : ); sp;})
A multi arch version was what I needed recently:
/**
* helps to check the architecture macros:
* `echo | gcc -E -dM - | less`
*
* this is arm, x64 and i386 (linux | apple) compatible
* #return address where the stack starts
*/
void *get_sp(void) {
    void *sp;
    __asm__ __volatile__(
#ifdef __x86_64__
        "movq %%rsp,%0"
#elif __i386__
        "movl %%esp,%0"
#elif __arm__
        // sp is an alias for r13
        "mov %%sp,%0"
#endif
        : "=r" (sp)
        : /* no input */
    );
    return sp;
}
I do not have a reference for that, but GCC is known to occasionally (often) misbehave in the presence of inline assembly if compilation is not optimized at all. So you should always add the -O1 flag.
As a side-note, what you are trying to do is not very robust in the presence of an optimizing compiler, because the compiler may inline the call to current_stack_pointer() and the returned value may thus be an approximation of the current stack pointer value (not even a lower bound).