I am having an issue with some inline assembly. I am writing a compiler that compiles to assembly, and for portability I made it emit the main function in C and just use inline assembly. However, even the simplest inline assembly is giving me a segfault. Thanks for your help
int main(int argc, char** argv) {
__asm__(
"push $1\n"
);
return 0;
}
TLDR at bottom. Note: everything here is assuming x86_64.
The issue here is that compilers will effectively never use push or pop in a function body (except for prologues/epilogues).
Consider this example.
When the function begins, room is made on the stack in the prologue with:
push rbp
mov rbp, rsp
sub rsp, 32
This creates 32 bytes of room for main. Then notice how throughout the function, instead of pushing items to the stack, they are mov'd to the stack through offsets from rbp:
mov DWORD PTR [rbp-20], edi
mov QWORD PTR [rbp-32], rsi
mov DWORD PTR [rbp-4], 2
mov DWORD PTR [rbp-8], 5
The reason for this is that it allows variables to be stored and loaded from anywhere, at any time, without requiring a huge number of pushes and pops.
Consider the case where variables are stored using push and pop. Say a variable is stored early on in the function; let's call it foo. Eight stacked variables later, you need foo. How should you access it?
Well, you could pop everything until you reach foo and then push it all back, but that's costly.
It also doesn't work when you have conditional statements. Say a variable is only ever stored if foo is some certain value. Now you have a conditional where the stack pointer could be at one of two locations after it!
For this reason, compilers always prefer to use rbp - N to store variables, as at any point in the function, the variable will still live at rbp - N.
NB: Under other ABIs (such as i386 System V), arguments may be passed to functions on the stack, but this isn't too much of an issue, as the ABI specifies exactly how this should be handled. Again using i386 System V as an example, a function call will go something like:
push edi ; 2nd argument to the function.
push eax ; 1st argument to the function.
call my_func
; here, it can be assumed that the stack has been corrected
So, why does push actually cause an issue?
Well, I'll add a small asm snippet to the code
At the end of the function, we now have the following:
push 64
mov eax, 0
pop rbp
ret
(Note: since this main needs no extra stack space, there is no sub rsp in the prologue, and the compiler ends the function with a plain pop rbp rather than leave. With leave, the mov rsp, rbp it performs first would happen to discard the stray value.)
There's 2 things that fail now due to pushing to the stack.
The first is the pop rbp instruction (see this thread).
The pop rbp will attempt to restore the value of rbp that was stored at the beginning of the function (notice the only push that the compiler generates is at the start: push rbp).
This is so that the stack frame of the caller is preserved after main returns. Because of our push, rbp is instead going to be set to 64, since 64 was the last value pushed. When the caller of main resumes its execution and tries to access a value at, say, rbp - 8, a crash will occur, as rbp - 8 is then 0x38, which is an invalid address.
But that assumes the caller even gets execution back!
After rbp has been loaded with the bogus value, the next thing on the stack is the original saved value of rbp.
The ret instruction will pop a value from the stack, and return to that address...
Notice how this might be slightly problematic?
The CPU is going to try and jump to the value of rbp stored at the start of the function!
In nearly every modern program, the stack is a "no execute" zone (see here), and attempting to execute code from it will immediately cause a crash.
So, TLDR: pushing to the stack violates assumptions made by the compiler, most importantly about the return address of the function. This violation causes program execution to end up (generally) somewhere on the stack, which will cause a crash.
Related
I have some simple lines of code in C and wanted to disassemble them:
#include <stdio.h>
int main(){
int i=42;
}
After compiling it and starting gdb, I simply can't find my value 42 in the corresponding place:
It's not just that I get the value 0, but what exactly does
movl $0x2a, -0x4(%rbp)
mean? I know that 0x2a is 42 in hex, but the next part is cryptic to me; does it mean that 42 gets saved into register rbp? And what about the -0x4? And where is my 42 :O ?
Each variable in C gets either a set position in memory (on the stack) or a register. In fact, the compiler will often move variables between these two places for performance.
The l suffix on movl means it moves a 32-bit value (your int!), while a plain 64-bit mov moves an entire register, even if your program doesn't use that part of the register.
push and pop add and remove items from the stack. They're often used by the C compiler to save registers. A function calling main() has no idea what main() does or how much memory it uses, which is why it is main()'s responsibility to clean up after itself, leaving the stack exactly as it found it (except, of course, for the results of the operation).
EAX is a common register, and is typically used for the results of functions.
With this background, let's rewrite your program in a slightly more readable form:
push %rbp Save the caller's base pointer on the stack (so we can restore its frame after all of our junk memory)
mov %rsp, %rbp Point the base pointer at the current top of the stack, marking where main()'s own frame begins, preventing main() from accidentally reading junk from other functions
movl $0x2a, -0x4(%rbp) Move 42 into the first slot of the frame (note: since an int is 4 bytes big, this slot starts 4 bytes below the base pointer, hence the -0x4!)
Wait-- We never actually resized the stack to reserve the space where our special number lives. That's because the compiler optimized the adjustment away: it saw that we would be resetting the stack a few instructions later anyway.
Thank you Peter Cordes, for reminding me the Red Zone exists! This is a special area of memory inside of the stack which does not require the stack to be expanded before writing to it. In the case of this program, this is likely what is happening.
mov $0x0, %eax Move 0 to the result register (EAX)
pop %rbp Clean up after our mess by restoring the caller's base pointer. This means that even though we scribbled on the stack, the function above us will still have all of its memory intact. Remember--We did this to prevent main() from trampling other functions' memory. Great!
retq Return and say goodbye ;(
If you wanted to retrieve your 42, you would need to change your code to say return 42, which would mean the compiler places 42 in EAX, and it gets passed up to your friends above :)
So I have this coroutine API, extended by me, based on code I found here: https://the8bitpimp.wordpress.com/2014/10/21/coroutines-x64-and-visual-studio/
struct mcontext {
U64 regs[8];
U64 stack_pointer;
U64 return_address;
U64 coroutine_return_address;
};
struct costate {
struct mcontext callee;
struct mcontext caller;
U32 state;
};
void coprepare(struct costate **token,
void *stack, U64 stack_size, cofunc_t func); /* C code */
void coenter(struct costate *token, void *arg); /* ASM code */
void coyield(struct costate *token); /* ASM code */
int coresume(struct costate *token); /* ASM code, new */
I'm stuck on implementing coyield(). coyield() can be written in C, but it's the assembly that I'm having problems with. Here's what I got so far (MASM/VC++ syntax).
;;; function: void _yield(struct mcontext *callee, struct mcontext *caller)
;;; arg0(RCX): callee token
;;; arg1(RDX): caller token
_yield proc
lea RBP, [RCX + 64 * 8]
mov [RCX + 0], R15
mov [RCX + 8], R14
mov [RCX + 16], R13
mov [RCX + 24], R12
mov [RCX + 32], RSI
mov [RCX + 40], RDI
mov [RCX + 48], RBP
mov [RCX + 56], RBX
mov R11, RSP
mov RSP, [RDX + 64]
mov [RDX + 64], R11
mov R15, [RDX + 0]
mov R14, [RDX + 8]
mov R13, [RDX + 16]
mov R12, [RDX + 24]
mov RSI, [RDX + 32]
mov RDI, [RDX + 40]
mov RBP, [RDX + 48]
mov RBX, [RDX + 56]
ret
_yield endp
This is a straightforward adaptation of 8bitpimp's code. What it doesn't do, if I understand the code correctly, is put mcontext->return_address and mcontext->coroutine_return_address on the stack to be popped by the ret. Also, is that fast? IIRC, it causes a mismatch in the return branch predictor found in modern x64 processors.
This answer only addresses the "is it fast" part of the question.
Return Address Prediction
First, a brief description of the behavior of a typical return-address predictor.
Every time a call is made, the return address that is pushed on the actual stack is also stored inside a CPU structure called the return address buffer or something like that.
When a ret (return) is made, the CPU assumes the destination will be the address currently at the top of the return address buffer, and that entry from return address buffer is "popped".
The effect is to perfectly[1] predict call/ret pairs, as long as they occur in their usual properly nested pattern and ret is actually removing the unmodified return address pushed by call in each case. For more details you can start here.
Normal function calls in C or C++ (or pretty much any other language) will generally always follow this properly nested pattern[2]. So you don't need to do anything special to take advantage of the return prediction.
Failure Modes
In cases where call/ret aren't paired up normally, the predictions can fail in (at least) a couple of different ways:
If the stack pointer or the return address on the stack is manipulated so that a ret doesn't return to the place that the corresponding call pushed, you'll get a branch target prediction failure for that ret, but subsequent normally nested ret instructions will continue to predict correctly as long as they are correctly nested. For example, if in a function you add a few bytes to the value at [rsp] in order to skip over the instruction following the call in the calling function, the next ret will mispredict, but the ret that follows inside the calling function should be fine.
If, on the other hand, call and ret instructions aren't properly nested, the whole return prediction buffer can become misaligned, causing future ret instructions, if any, that use the existing values to mispredict[2.5]. For example, if you call into a function, but then use jmp to return to the caller, there is a mismatched call without a ret. The ret inside the caller will mispredict, and so will the ret inside the caller of the caller, and so on, until all misaligned values are used up or overwritten[3]. A similar case would occur if you had a ret not matched with a corresponding call (and this case is important for the subsequent analysis).
Rather than applying the two rules above, you can also simply determine the behavior of the return predictor by tracing through the code and tracking what the return stack looks like at each point. Every time you have a ret instruction, see if it returns to the current top of the return stack - if not, you'll get a misprediction.
Misprediction Cost
The actual cost of a misprediction depends on the surrounding code. A figure of ~20 cycles is commonly given and is often seen in practice, but the actual cost can be lower: e.g., as low as zero if the CPU is able to resolve the misprediction early and start fetching along the new path without interrupting the critical path, or higher: e.g., if the branch prediction failures take a long time to resolve and reduce the effective parallelism of long-latency operations. Regardless, we can say that the penalty is usually significant when it occurs in an operation that otherwise takes only a handful of instructions.
Fast Coroutines
Existing Behavior for Coresume and Coyield
The existing _yield (context switch) function swaps the stack pointer rsp and then uses ret to return to a different location than the one the actual caller pushed (in particular, it returns to the location that was pushed onto the other coroutine's stack when that side called _yield earlier). This will generally cause a misprediction at the ret inside _yield.
For example, consider the case where some function A0 makes a normal function call to A1, which in turn calls coresume[4] to resume a coroutine B1, which later calls coyield to yield back to A1. Inside the call to coresume, the return stack looks like A0, A1, but then coresume swaps rsp to point to the stack for B1, and the top value of that stack is an address inside B1 immediately following coyield in the code for B1. The ret inside coresume hence jumps to a point in B1, and not to a point in A1 as the return stack expects. Hence you get a misprediction on that ret, and the return stack is left looking like A0.
Now consider what happens when B1 calls coyield, which is implemented in basically the same way as coresume: the call to coyield pushes B1 on the return stack, which now looks like A0, B1, and then it swaps the stack to point to A1's stack and does the ret, which returns to A1. So the ret misprediction happens in the same way, and the stack is left as A0.
So the bad news is that a tight series of calls to coresume and coyield (as is typical with a yield-based iterator, for example) will mispredict each time. The good news is that now, inside A1, at least the return stack is correct (not misaligned) - if A1 returns to its caller A0, the return is correctly predicted (and so on when A0 returns to its caller, etc). So you suffer a mispredict penalty each time, but at least you don't misalign the return stack in this scenario. The relative importance of this depends on how often you are calling coresume/coyield versus how often you are calling functions normally below the function that calls coresume.
Making It Fast
So can we fix the misprediction? Unfortunately, it's tricky with the combination of C and external ASM calls, because making the call to coresume or coyield implies a call inserted by the compiler, and it's hard to unwind this in the asm.
Still, let's try.
Use Indirect Calls
One approach is to get rid of ret entirely and just use indirect jumps.
That is, just replace the ret at the end of your coresume and coyield calls with:
pop r11
jmp r11
This is functionally equivalent to ret, but affects the return stack buffer differently (in particular, it doesn't affect it).
If we analyze the repeated sequence of coresume and coyield calls as above, we get the result that the return stack buffer just starts growing indefinitely, like A0, A1, B1, A1, B1, .... This occurs because in fact we aren't using ret at all in this implementation, so we don't suffer return mispredictions. Instead, we rely on the accuracy of the indirect branch predictor to predict jmp r11.
How that predictor works depends on how coresume and coyield are implemented. If they both call a shared _yield function that isn't inlined, there is only a single jmp r11 location, and this jmp will alternately go to a location in A1 and a location in B1. Most modern indirect predictors will re-predict this simple repeating pattern fine, although older ones which only tracked a single location will not. If _yield got inlined into coresume and coyield, or you just copy-pasted the code into each function, there are two distinct jmp r11 call sites, each of which only sees a single location, and each should be well-predicted by any CPU with an indirect branch predictor[6].
So this should generally predict a series of tight coyield and coresume calls well[7], but at the cost of obliterating the return buffer, so when A1 decides to return to A0, that will be mispredicted, as will subsequent returns by A0 and so on. The size of this penalty is bounded above by the size of the return stack buffer, so if you are making many tight coresume/coyield calls this may be a good tradeoff.
That's the best I can think of within the constraint of external calls to functions written in ASM, because you already have an implied call for your coroutines, and you have to make the jump to the other coroutine from inside there, and I can't see how to keep the stacks balanced and return to the correct location with those constraints.
Inlined Code at the Call Site
If you can inline code at the call site of your coroutine methods (e.g., with compiler support or inline asm), then you can perhaps do better.
The call to coresume could be inlined as something like this (I've omitted the register saving and restoring code because that's straightforward):
; rcx - current context
; rdx - context for coroutine we are about to resume
; save current non-volatile regs (not shown)
; load non-volatile regs for dest (not shown)
lea r11, [rsp - 8]
mov [rcx + 64], r11 ; save current stack pointer
mov r11, [rdx + 64] ; load dest stack pointer
call [r11]
Note that coresume doesn't actually do the stack swap - it just loads the destination stack pointer into r11 and then does a call against [r11] to jump to the coroutine. This is necessary so that the call correctly pushes the location we should return to onto the stack of the caller.
Then, coyield would look something like (inlined into the calling function):
; save current non-volatile regs (not shown)
; load non-volatile regs for dest (not shown)
lea r11, [after_ret]
push r11 ; save the return point on the stack
mov rsp, [rdx + 64] ; load the destination stack
ret
after_ret:
mov rsp, r11
When a coresume call jumps to the coroutine it ends up at after_ret, and before executing the user code the mov rsp, r11 instruction swaps to the proper stack for the coroutine which has been stashed in r11 by coresume.
So essentially coyield has two parts: the top half executed before the yield (which occurs at the ret call) and the bottom half which completes the work started by coresume. This allows you to use call as the mechanism to do the coresume jump and ret to do the coyield jump. The call/ret are balanced in this case.
I've glossed over some details of this approach: for example, since there is no function call involved, the ABI-specified non-volatile registers aren't really special: in the case of inline assembly you'll need to indicate to the compiler which registers you will clobber and save the rest, but you can choose whatever set is convenient for you. Choosing a larger set of clobbered registers makes the coresume/coyield code sequences themselves shorter, but potentially puts more register pressure on the surrounding code and may force the compiler to spill more around your code. Perhaps the ideal is just to declare everything clobbered, and then the compiler will just spill what it needs.
[1] Of course, there are limitations in practice: the size of the return stack buffer is likely limited to some small number (e.g., 16 or 24), so once the depth of the call stack exceeds that, some return addresses are lost and won't be correctly predicted. Also, various events like a context switch or interrupt are likely to mess up the return-stack predictor.
[2] An interesting exception was a common pattern for reading the current instruction pointer in x86 (32-bit) code: there is no instruction to do this directly, so instead a call next; next: pop eax sequence can be used: a call to the next instruction, which serves only to push the address on the stack, which is then popped off. There is no corresponding ret. Current CPUs actually recognize this pattern, however, and don't unbalance the return-address predictor in this special case.
[2.5] How many mispredictions this implies depends on how many net returns the calling function does: if it immediately starts calling down another deep chain of calls, the misaligned return stack entries may never be used at all, for example.
[3] Or, perhaps, until the return address stack is re-aligned by a ret without a corresponding call, a case of "two wrongs make a right".
[4] You haven't actually shown how coyield and coresume actually call _yield, so for the rest of the question I'll assume that they are implemented essentially as _yield is, directly within coyield or coresume without calling _yield: i.e., copy and paste the _yield code into each function, possibly with some small edits to account for the difference. You can also make this work by calling _yield, but then you have an additional layer of calls and rets that complicates the analysis.
[5] To the extent these terms even make sense in a symmetric coroutine implementation, since there is in fact no absolute notion of caller and callee in that case.
[6] Of course, this analysis applies only to the simple case where you have a single coresume call calling into a coroutine with a single coyield call. More complex scenarios are possible, such as multiple coyield calls inside the callee, or multiple coresume calls inside the caller (possibly to different coroutines). However, the same pattern applies: the case with split jmp r11 sites will present a simpler stream than the combined case (possibly at the cost of more iBTB resources).
[7] One exception would be the first call or two: the ret predictor needs no "warmup", but the indirect branch predictor may, especially when another coroutine has been called in the interim.
My question is related to Stack allocation, padding, and alignment. Consider the following function:
void func(int a,int b)
{
char buffer[5];
}
At assembly level, the function looks like this:
pushl %ebp
movl %esp, %ebp
subl $24, %esp
I want to know how the 24 bytes on the stack are allocated. I understand that 16 bytes are allocated for char buffer[5]. I don't understand what the extra 8 bytes are for and how they are allocated. The top answer in the above link says that they are for ret and leave. Can someone please expand on that?
I'm thinking that the stack structure looks like this:
[bottom] b , a , return address , frame pointer , buffer1 [top]
But this could be wrong, because I'm writing a simple buffer overflow and trying to change the return address. But for some reason the return address is not changing. Is something else present on the stack?
There are a couple of reasons for the extra space. One is alignment of variables. A second is to introduce padding for checking the stack (typically in a debug build rather than a release build). A third is to have additional space for temporary storage of registers or compiler-generated temporary variables.
In the C calling sequence, the way it is normally done is that a series of push instructions push the arguments onto the stack, and then a call instruction is used to call the function. The call instruction pushes the return address onto the stack.
When the function returns, the calling function then removes the pushed arguments. For instance, a call to a function (this is Visual Studio 2005 with a C++ program) will look like:
push OFFSET ?pHead@@3VPerson@@A ; pHead
call ?exterminateStartingFrom@@YAXPAVPerson@@@Z ; exterminateStartingFrom
add esp, 4
This is pushing the address of a variable onto the stack, calling the function (the function name is mangled per C++), and then after the called function returns, it readjusts the stack by adding to the stack pointer the number of bytes used for the address.
The following is the entry part of the called function. What this does is to allocate space on the stack for the local variables. Notice that after setting up the entry environment, it then gets the function argument from the stack.
push ebp
mov ebp, esp
sub esp, 232 ; 000000e8H
push ebx
push esi
push edi
lea edi, DWORD PTR [ebp-232]
When the function returns, it basically adjusts the stack back to where it was at the time the function was called. Each function is responsible for cleaning up whatever changes it has made to the stack before it returns.
pop edi
pop esi
pop ebx
add esp, 232 ; 000000e8H
pop ebp
ret 0
You mention that you are trying to change the return address. From these examples what you can see is that the return address is after the last argument that was pushed onto the stack.
Here is a brief writeup on function call conventions. Also take a look at this document on Intel assembler instructions.
Doing some example work with Visual Studio 2005, what I see is that with the following code I can access the return address for this example function.
#include <stdio.h>

void MyFunct(unsigned short arg) {
    unsigned char *retAddress = (unsigned char *)&arg;
    retAddress -= 4;
    printf("Return address is 0x%2.2x%2.2x%2.2x%2.2x\n",
           retAddress[3], retAddress[2], retAddress[1], retAddress[0]);
}
Notice that the call assembler instruction in this 32-bit Windows code stores the return address in little-endian byte order, from low byte to high byte.
The extra space is for the stack alignment, which is usually done for better performance.
Whenever I read about program execution in C, it says very little about function execution. I am still trying to find out what happens to a function when the program executes it, from the time it is called from another function to the time it returns. How do the function arguments get stored in memory?
That's unspecified; it's up to the implementation. As pointed out by Keith Thompson, it doesn't even have to tell you how it works. :)
Some implementations will put all the arguments on the stack, some will use registers, and many use a mix (the first n arguments passed in registers, any more and they go on the stack).
But the function itself is just code, it's read-only and nothing much "happens" to it during execution.
There is no one correct answer to this question; it depends heavily upon what the compiler writer determines is the best model for doing this. There are various bits in the standard that describe this process, but most of it is implementation defined. Also, the process depends on the architecture of the system, the OS you're targeting, the level of optimisation and so forth.
Take the following code:-
int DoProduct (int a, int b, int c)
{
return a * b * c;
}
int result = DoProduct (4, 5, 6);
The MSVC2005 compiler, using standard debug build options created this for the last line of the above code:-
push 6
push 5
push 4
call DoProduct (411186h)
add esp,0Ch
mov dword ptr [ebp-18h],eax
Here, the arguments are pushed onto the stack, starting with the last argument, then the penultimate argument, and so on until the first argument is pushed onto the stack. The function is called, then the arguments are removed from the stack (the add esp,0Ch) and the return value is saved - the result is in the eax register.
Here's the code for the function:-
push ebp
mov ebp,esp
sub esp,0C0h
push ebx
push esi
push edi
lea edi,[ebp-0C0h]
mov ecx,30h
mov eax,0CCCCCCCCh
rep stos dword ptr es:[edi]
mov eax,dword ptr [a]
imul eax,dword ptr [b]
imul eax,dword ptr [c]
pop edi
pop esi
pop ebx
mov esp,ebp
pop ebp
ret
The first thing the function does is to create a local stack frame. This involves creating a space on the stack to store local and temporary variables in. In this case, 192 (0xc0) bytes are reserved (the first three instructions). The reason it's so many is to allow the edit-and-continue feature some space to put new variables into.
The next three instructions save the reserved registers as defined by the MS compiler. Then the stack frame space just created is initialised with a special debug signature, in this case 0xCC. This marks uninitialised memory: if you ever see a value consisting of just 0xCCs in debug mode, then you've forgotten to initialise the value (unless 0xCC was the value).
Once all that housekeeping has been done, the next three instructions implement the body of the function, the two multiplies. After that, the reserved registers are restored and then the stack frame destroyed and finally the function ends with a ret. Fortunately, the imul puts the result of the multiplication into the eax register so there's no special code to get the result into the right register.
Now, you've probably been thinking that there's a lot there that isn't really necessary. And you're right, but debug is about getting the code right and a lot of the above helps to achieve that. In release, there's a lot that can be got rid of. There's no need for a stack frame, no need, therefore, to initialise it. There's no need to save the reserved registers as they aren't modified. In fact, the compiler creates this:-
mov eax,dword ptr [esp+4]
imul eax,dword ptr [esp+8]
imul eax,dword ptr [esp+0Ch]
ret
which, if I'd let the compiler do it, would have been in-lined into the caller.
There's a lot more stuff that can happen: values passed in registers and so on. Also, I've not gone into how floating point values and structures / classes are passed to and from functions. And there's more that I've probably left out.
I am learning assembly using GDB & Eclipse
Here is some simple C code.
#include <stdlib.h>

int absdiff(int x, int y)
{
if(x < y)
return y-x;
else
return x-y;
}
int main(void) {
int x = 10;
int y = 15;
absdiff(x,y);
return EXIT_SUCCESS;
}
Here is corresponding assembly instructions for main()
main:
080483bb: push %ebp #push old frame pointer onto the stack
080483bc: mov %esp,%ebp #move the frame pointer down, to the position of stack pointer
080483be: sub $0x18,%esp # ???
25 int x = 10;
080483c1: movl $0xa,-0x4(%ebp) #move the "x(10)" to 4 address below frame pointer (why not push?)
26 int y = 15;
080483c8: movl $0xf,-0x8(%ebp) #move the "y(15)" to 8 address below frame pointer (why not push?)
28 absdiff(x,y);
080483cf: mov -0x8(%ebp),%eax # -0x8(%ebp) == 15 = y, and move it into %eax
080483d2: mov %eax,0x4(%esp) # from this point on, I am confused
080483d6: mov -0x4(%ebp),%eax
080483d9: mov %eax,(%esp)
080483dc: call 0x8048394 <absdiff>
31 return EXIT_SUCCESS;
080483e1: mov $0x0,%eax
32 }
Basically, I am asking for help making sense of this assembly code, and why it is doing things in this particular order. The point where I am stuck is shown in the assembly comments. Thanks!
Lines 0x080483cf to 0x080483d9 are copying x and y from the current frame on the stack, and pushing them back onto the stack as arguments for absdiff() (this is typical; see e.g. http://en.wikipedia.org/wiki/X86_calling_conventions#cdecl). If you look at the disassembly for absdiff() (starting at 0x8048394), I bet you'll see it pick these values up from the stack and use them.
This might seem like a waste of cycles in this instance, but that's probably because you've compiled without optimisation, so the compiler does literally what you asked for. If you use e.g. -O2, you'll probably see most of this code disappear.
First it bears saying that this assembly is in the AT&T syntax version of x86_32, and that the order of arguments to operations is reversed from the Intel syntax (used with MASM, YASM, and many other assemblers and debuggers).
080483bb: push %ebp #push old frame pointer onto the stack
080483bc: mov %esp,%ebp #move the frame pointer down, to the position of stack pointer
080483be: sub $0x18,%esp # ???
This enters a stack frame. A frame is an area of memory between the stack pointer (esp) and the base pointer (ebp). This area is intended to be used for local variables that have to live on the stack. NOTE: Stack frames don't have to be implemented in this way, and GCC has the optimization switch -fomit-frame-pointer that does away with it except when alloca or variable sized arrays are used, because they are implemented by changing the stack pointer by arbitrary values. Not using ebp as the frame pointer allows it to be used as an extra general purpose register (more general purpose registers is usually good).
Using the base pointer makes several things simpler to calculate for compilers and debuggers, since the locations of variables relative to the base do not change while in the function. You can also index them relative to the stack pointer and get the same results, though the stack pointer tends to change around, so the same location may require a different index at different times.
In this code 0x18 (or 24) bytes are being reserved on the stack for local use.
This code so far is often called the function prologue (not to be confused with the programming language "prolog").
25 int x = 10;
080483c1: movl $0xa,-0x4(%ebp) #move the "x(10)" to 4 address below frame pointer (why not push?)
This line moves the constant 10 (0xA) to a location within the current stack frame, relative to the base pointer. Because locals sit at lower addresses than the base pointer (the stack grows downward in RAM), the index is negative rather than positive. If this were indexed relative to the stack pointer, a different, positive index would be used.
You are correct that this value could have been pushed rather than copied like this. I suspect that this is done this way because you have not compiled with optimizations turned on. By default gcc (which I assume you are using based on your use of gdb) does not optimize much, and so this code is probably the default "copy a constant to a location in the stack frame" code. This may not be the case, but it is one possible explanation.
26 int y = 15;
080483c8: movl $0xf,-0x8(%ebp) #move the "y(15)" to 8 address below frame pointer (why not push?)
Similar to the previous line of code. These two lines of code put the 10 and 15 into local variables. They are on the stack (rather than in registers) because this is unoptimized code.
28 absdiff(x,y);
gdb printing this means that this is the source line being executed, not that absdiff is executing (yet).
080483cf: mov -0x8(%ebp),%eax # -0x8(%ebp) == 15 = y, and move it into %eax
In preparation for calling the function, the values being passed as arguments need to be retrieved from their storage locations (even though they were just placed there and their values are known, thanks again to the lack of optimization).
080483d2: mov %eax,0x4(%esp) # from this point on, I am confused
This is the second half of moving one local variable's value onto the stack so that it can be used as an argument to the function. You can't (usually) move from one memory address to another on x86, so the value has to go through a register (eax in this case).
080483d6: mov -0x4(%ebp),%eax
080483d9: mov %eax,(%esp)
These two lines do the same thing for the other variable. Note that since this variable is being moved to the very top of the stack, no offset is used in the second instruction.
080483dc: call 0x8048394 <absdiff>
This pushes the return address onto the stack and jumps to the address of absdiff.
You didn't include code for absdiff, so you probably did not step through that.
31 return EXIT_SUCCESS;
080483e1: mov $0x0,%eax
C programs return 0 upon success, so EXIT_SUCCESS is defined as 0 (by stdlib.h). Integer return values are put in eax, and the startup code that called main will use that value as the argument when calling the exit function.
32 }
This is the end. The reason gdb stopped here is that things actually happen to clean up. In C++ it is common to see destructors for local class instances called here, but in C you will probably just see the function epilogue. This is the complement to the function prologue, and consists of returning the stack pointer and base pointer to their original values. Sometimes this is done with similar math on them, and sometimes it is done with the leave instruction. There is also an enter instruction that can be used for the prologue, but gcc doesn't emit it (I don't know why). If you had continued to view the disassembly here you would have seen the epilogue code and a ret instruction.
Something you may be interested in is the ability to tell gcc to produce assembly files. If you do:
gcc -S source_file.c
a file named source_file.s will be produced with assembly code in it.
If you do:
gcc -S -O source_file.c
Then the same thing will happen, but some basic optimizations will be done. This will probably make the assembly easier to read, since it won't have as many odd instructions that seem like they could have been done a better way (like moving a constant to the stack, then to a register, then to another stack location, never using push).
The regular optimization flags for gcc are:
-O0 default -- none
-O1 a few optimizations
-O the same as -O1
-O2 a lot of optimizations
-O3 a bunch more, some of which may take a long time and/or make the code a lot bigger
-Os optimize for size -- similar to -O2, but not quite
If you are actually trying to debug C programs then you will probably want as little optimization as possible, since things will happen in the order they are written in your code and variables won't disappear.
You should have a look at the gcc man page:
man gcc
Remember, if you're running in a debugger or debug mode, the compiler reserves the right to insert whatever debugging code it likes and make other nonsensical code changes.
For example, this is Visual Studio's debug main():
int main(void) {
001F13D0 push ebp
001F13D1 mov ebp,esp
001F13D3 sub esp,0D8h
001F13D9 push ebx
001F13DA push esi
001F13DB push edi
001F13DC lea edi,[ebp-0D8h]
001F13E2 mov ecx,36h
001F13E7 mov eax,0CCCCCCCCh
001F13EC rep stos dword ptr es:[edi]
int x = 10;
001F13EE mov dword ptr [x],0Ah
int y = 15;
001F13F5 mov dword ptr [y],0Fh
absdiff(x,y);
001F13FC mov eax,dword ptr [y]
001F13FF push eax
001F1400 mov ecx,dword ptr [x]
001F1403 push ecx
001F1404 call absdiff (1F10A0h)
001F1409 add esp,8
*(int*)nullptr = 5;
001F140C mov dword ptr ds:[0],5
return 0;
001F1416 xor eax,eax
}
001F1418 pop edi
001F1419 pop esi
001F141A pop ebx
001F141B add esp,0D8h
001F1421 cmp ebp,esp
001F1423 call #ILT+300(__RTC_CheckEsp) (1F1131h)
001F1428 mov esp,ebp
001F142A pop ebp
001F142B ret
It helpfully posts the C++ source next to the corresponding assembly. In this case, you can fairly clearly see that x and y are stored on the stack explicitly, and an explicit copy is pushed on, then absdiff is called. I explicitly de-referenced nullptr to cause the debugger to break in. You may wish to change compiler.
Compile with -fverbose-asm -g -save-temps for additional information with GCC.