what's this between local var and EBP on the stack? - c

For this simple code:
#include <stdio.h>

void main() {
    char buf[12];
    buf[11] = '\xff';
    puts(buf);  /* note: the rest of buf is uninitialized */
}
I use gdb to debug this code and get its stack info like this:
0xbffff480: 0x40158ff4 0x40158ff4 0xff0494dc 0x40158ff4
0xbffff490: 0x00000000 0x40016ca0 0xbffff4f8 0x40045de3
0xbffff480 is where "buf" starts, and the last two words are EBP and RET, but what the hell is that between buf and EBP? I obviously don't have any other local vars. I also found that if I allocate 8 bytes for buf on the stack, it just continues from buf straight to EBP, but if I allocate 9 or more bytes, there is always something in between. Could somebody explain this to me please? Thanks a lot! I am on Linux 2.6.9.
disassembly for main:
0x080483c4 <main+0>: push %ebp
0x080483c5 <main+1>: mov %esp,%ebp
0x080483c7 <main+3>: sub $0x28,%esp
0x080483ca <main+6>: and $0xfffffff0,%esp
0x080483cd <main+9>: mov $0x0,%eax
0x080483d2 <main+14>: add $0xf,%eax
0x080483d5 <main+17>: add $0xf,%eax
0x080483d8 <main+20>: shr $0x4,%eax
0x080483db <main+23>: shl $0x4,%eax
0x080483de <main+26>: sub %eax,%esp
0x080483e0 <main+28>: movb $0xff,0xfffffff3(%ebp)
0x080483e4 <main+32>: lea 0xffffffe8(%ebp),%eax
0x080483e7 <main+35>: mov %eax,(%esp)
0x080483ea <main+38>: call 0x80482e4
0x080483ef <main+43>: leave
0x080483f0 <main+44>: ret

That would be padding. Your compiler is probably aligning EBP on an 8-byte boundary, because aligned memory is almost always easier and faster to work with (from the processor's point of view). Some data types even require proper alignment to work.
You don't see any padding when you allocate only 8 bytes in your buffer because, in that case, EBP is already properly aligned.
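If you want to see the gap from C instead of a raw memory dump, something like this prints the two addresses that bound it (a minimal sketch, assuming gcc's __builtin_frame_address extension, which returns the current frame's base, i.e. the location of the saved EBP):

#include <stdio.h>

int main(void) {
    char buf[12];
    buf[11] = '\xff';
    printf("buf starts at: %p\n", (void *)buf);
    printf("frame base at: %p\n", __builtin_frame_address(0));
    /* anything between buf+12 and the frame base is compiler padding */
    return 0;
}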

Normally gcc keeps the stack aligned to a multiple of 16 for the sake of being able to use SSE instructions. Reading a disassembly of your main would be instructive.

GCC is known to be somewhat trigger happy with stack usage. It often reserves a bit more stack space than strictly needed. Also, it will try to achieve 16-byte stack alignment even when this is not necessary. This seems to be what happens with the first instructions (40 bytes reserved, then %esp alignment to a multiple of 16).
The code you show, however, contains some strange things, especially the sequence from offsets 9 to 27: this is a long, slow, convoluted way of subtracting 16 from %esp, something which could have been done in a single opcode. Subtracting some bytes from %esp at that point is logical in preparation for calling an external function (puts()), and the count (16) respects alignment, but why do it in such a weird way?
It might be possible that this sequence is meant to be patched up in some way (e.g. at link time) to support either stack smashing detection code, or some sort of profiling code. I cannot reproduce this on my own systems. You should specify the version of gcc and libc you are using, the exact compilation flags, and the Linux distribution (because distributors may activate some options by default). The "2.6.9" figure is the kernel version, and it has no bearing whatsoever on the problem at hand (it just tells us that the system is quite old).
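For reference, here is what the sequence at offsets 9 to 27 computes, written out in C (a sketch of the register arithmetic, mirroring the disassembly above):

unsigned n = 0;       /* mov $0x0,%eax               */
n += 0xf;             /* add $0xf,%eax               */
n += 0xf;             /* add $0xf,%eax  -> n == 30   */
n = (n >> 4) << 4;    /* shr $0x4 / shl $0x4: round down to a multiple of 16 -> n == 16 */
/* sub %eax,%esp then moves %esp down by those 16 bytes */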

You should really include an assembly dump rather than a pure hex dump, but it's more than likely one of two things:
A stack frame being restored
A stack check to ensure there was no corruption
The start may also contain a stack alignment, forcing one or both of the above.

Related

CSAPP: Why use subq followed by movq when we have pushq already [duplicate]

I believe push/pop instructions will result in more compact code, and maybe will even run slightly faster. This requires disabling stack frames as well, though.
To check this, I will need to either rewrite a large enough program in assembly by hand (to compare them), or to install and study a few other compilers (to see if they have an option for this, and to compare the results).
Here is the forum topic about this and similar problems.
In short, I want to understand which code is better. Code like this:
sub esp, c
mov [esp+8],eax
mov [esp+4],ecx
mov [esp],edx
...
add esp, c
or code like this:
push eax
push ecx
push edx
...
add esp, c
What compiler can produce the second kind of code? They usually produce some variation of the first one.
You're right: push is a minor missed optimization with all 4 major x86 compilers. There's some code-size, and thus indirectly some performance, to be had. Or maybe more directly a small amount of performance in some cases, e.g. saving a sub rsp instruction.
But if you're not careful, you can make things slower with extra stack-sync uops by mixing push with [rsp+x] addressing modes. pop doesn't sound useful, just push. As the forum thread you linked suggests, you only use this for the initial store of locals; later reloads and stores should use normal addressing modes like [rsp+8]. We're not talking about trying to avoid mov loads/stores entirely, and we still want random access to the stack slots where we spilled local variables from registers!
Modern code generators avoid using PUSH. It is inefficient on today's processors because it modifies the stack pointer, that gums-up a super-scalar core. (Hans Passant)
This was true 15 years ago, but compilers are once again using push when optimizing for speed, not just code-size. Compilers already use push/pop for saving/restoring call-preserved registers they want to use, like rbx, and for pushing stack args (mostly in 32-bit mode; in 64-bit mode most args fit in registers). Both of these things could be done with mov, but compilers use push because it's more efficient than sub rsp,8 / mov [rsp], rbx. gcc has tuning options to avoid push/pop for these cases, enabled for -mtune=pentium3 and -mtune=pentium, and similar old CPUs, but not for modern CPUs.
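A function that needs a value to survive a call is the typical trigger for that push/pop pair; a sketch (ext_use is a hypothetical external function):

int ext_use(int);

int twice(int x) {
    /* x must survive the first call, so compilers typically push a
       call-preserved register (e.g. rbx) on entry, keep the live value
       there across the calls, and pop it before ret */
    int a = ext_use(x);
    return a + ext_use(x);
}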
Intel since Pentium-M and AMD since Bulldozer(?) have a "stack engine" that tracks the changes to RSP with zero latency and no ALU uops, for PUSH/POP/CALL/RET. Lots of real code was still using push/pop, so CPU designers added hardware to make it efficient. Now we can use them (carefully!) when tuning for performance. See Agner Fog's microarchitecture guide and instruction tables, and his asm optimization manual. They're excellent. (And other links in the x86 tag wiki.)
It's not perfect; reading RSP directly (when the offset from the value in the out-of-order core is nonzero) does cause a stack-sync uop to be inserted on Intel CPUs. e.g. push rax / mov [rsp-8], rdi is 3 total fused-domain uops: 2 stores and one stack-sync.
On function entry, the "stack engine" is already in a non-zero-offset state (from the call in the parent), so using some push instructions before the first direct reference to RSP costs no extra uops at all. (Unless we were tailcalled from another function with jmp, and that function didn't pop anything right before jmp.)
It's kind of funny that compilers have been using dummy push/pop instructions just to adjust the stack by 8 bytes for a while now, because it's so cheap and compact (if you're doing it once, not 10 times to allocate 80 bytes), but aren't taking advantage of it to store useful data. The stack is almost always hot in cache, and modern CPUs have excellent store/load bandwidth to L1d.
int extfunc(int *, int *);

void foo() {
    int a = 1, b = 2;
    extfunc(&a, &b);
}
compiles with clang6.0 -O3 -march=haswell on the Godbolt compiler explorer. See that link for all the rest of the code, and many different missed optimizations and silly code-gen (see my comments in the C source pointing out some of them):
# compiled for the x86-64 System V calling convention:
# integer args in rdi, rsi (,rdx, rcx, r8, r9)
push rax # clang / ICC ALREADY use push instead of sub rsp,8
lea rdi, [rsp + 4]
mov dword ptr [rdi], 1 # 6 bytes: opcode + modrm + imm32
mov rsi, rsp # special case for lea rsi, [rsp + 0]
mov dword ptr [rsi], 2
call extfunc(int*, int*)
pop rax # and POP instead of add rsp,8
ret
And very similar code with gcc, ICC, and MSVC, sometimes with the instructions in a different order, or gcc reserving an extra 16B of stack space for no reason. (MSVC reserves more space because it's targeting the Windows x64 calling convention which reserves shadow space instead of having a red-zone).
clang saves code-size by using the LEA results for store addresses instead of repeating RSP-relative addresses (SIB+disp8). ICC and clang put the variables at the bottom of the space they reserved, so one of the addressing modes avoids a disp8. (With 3 variables, reserving 24 bytes instead of 8 was necessary, and clang didn't take advantage of it then.) gcc and MSVC miss this optimization.
But anyway, more optimal would be:
push 2 # only 2 bytes
lea rdi, [rsp + 4]
mov dword ptr [rdi], 1
mov rsi, rsp # special case for lea rsi, [rsp + 0]
call extfunc(int*, int*)
# ... later accesses would use [rsp] and [rsp+] if needed, not pop
pop rax # alternative to add rsp,8
ret
The push is an 8-byte store, and we overlap half of it. This is not a problem: CPUs can store-forward the unmodified low half efficiently even after storing the high half. Overlapping stores in general are not a problem, and in fact glibc's well-commented memcpy implementation uses two (potentially) overlapping loads + stores for small copies (up to the size of 2x xmm registers at least), to load everything then store everything without caring about whether or not there's overlap.
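The same trick is easy to sketch in plain C, assuming unaligned 8-byte loads and stores are cheap (true on x86). This is just the idea, with a hypothetical helper name, not glibc's actual code:

#include <stdint.h>
#include <string.h>

static void copy_8_to_16(void *dst, const void *src, size_t n) {
    uint64_t head, tail;                          /* n must be in [8,16]   */
    memcpy(&head, src, 8);                        /* first 8 bytes         */
    memcpy(&tail, (const char *)src + n - 8, 8);  /* last 8; may overlap   */
    memcpy(dst, &head, 8);
    memcpy((char *)dst + n - 8, &tail, 8);        /* overlapping store     */
}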
Note that in 64-bit mode, 32-bit push is not available. So we still have to reference rsp directly for the upper half of the qword. But if our variables were uint64_t, or we didn't care about making them contiguous, we could just use push.
We have to reference RSP explicitly in this case to get pointers to the locals for passing to another function, so there's no getting around the extra stack-sync uop on Intel CPUs. In other cases maybe you just need to spill some function args for use after a call. (Although normally compilers will push rbx and mov rbx,rdi to save an arg in a call-preserved register, instead of spilling/reloading the arg itself, to shorten the critical path.)
I chose 2x 4-byte args so we could reach a 16-byte alignment boundary with 1 push, so we can optimize away the sub rsp, ## (or dummy push) entirely.
I could have used mov rax, 0x0000000200000001 / push rax, but 10-byte mov r64, imm64 takes 2 entries in the uop cache, and a lot of code-size.
gcc7 does know how to merge two adjacent stores, but chooses not to do that for mov in this case. If both constants had needed 32-bit immediates, it would have made sense. But if the values weren't actually constant at all, and came from registers, this wouldn't work while push / mov [rsp+4] would. (It wouldn't be worth merging values in a register with SHL + SHLD or whatever other instructions to turn 2 stores into 1.)
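At the C level, the merged store is just one 8-byte write of the combined value; a sketch (store_pair is a hypothetical helper; little-endian layout is why 1 ends up as the low half of 0x0000000200000001):

#include <stdint.h>
#include <string.h>

static void store_pair(uint32_t *p) {
    /* one 8-byte store instead of two 4-byte stores p[0] = 1; p[1] = 2; */
    uint64_t both = 1u | ((uint64_t)2 << 32);  /* == 0x0000000200000001 */
    memcpy(p, &both, 8);
}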
If you need to reserve space for more than one 8-byte chunk, and don't have anything useful to store there yet, definitely use sub instead of multiple dummy PUSHes after the last useful PUSH. But if you have useful stuff to store, push imm8 or push imm32, or push reg are good.
We can see more evidence of compilers using "canned" sequences with ICC output: it uses lea rdi, [rsp] in the arg setup for the call. It seems they didn't think to look for the special case of the address of a local being pointed to directly by a register, with no offset, allowing mov instead of lea. (mov is definitely not worse, and better on some CPUs.)
An interesting example of not making locals contiguous is a version of the above with 3 args, int a=1, b=2, c=3;. To maintain 16B alignment, we now need to offset 8 + 16*1 = 24 bytes, so we could do
bar3:
push 3
push 2 # don't interleave mov in here; extra stack-sync uops
push 1
mov rdi, rsp
lea rsi, [rsp+8]
lea rdx, [rdi+16] # relative to RDI to save a byte with probably no extra latency even if MOV isn't zero latency, at least not on the critical path
call extfunc3(int*,int*,int*)
add rsp, 24
ret
This is significantly smaller code-size than compiler-generated code, because mov [rsp+16], 2 has to use the mov r/m32, imm32 encoding, using a 4-byte immediate because there's no sign_extended_imm8 form of mov.
push imm8 is extremely compact, 2 bytes. mov dword ptr [rsp+8], 1 is 8 bytes: opcode + modrm + SIB + disp8 + imm32. (RSP as a base register always needs a SIB byte; the ModRM encoding with base=RSP is the escape code for a SIB byte existing. Using RBP as a frame pointer allows more compact addressing of locals (by 1 byte per insn), but takes 3 extra instructions to set up / tear down, and ties up a register. But it avoids further access to RSP, avoiding stack-sync uops. It could actually be a win sometimes.)
One downside to leaving gaps between your locals is that it may defeat load or store merging opportunities later. If you (the compiler) need to copy 2 locals somewhere, you may be able to do it with a single qword load/store if they're adjacent. Compilers don't consider all the future tradeoffs for the function when deciding how to arrange locals on the stack, as far as I know. We want compilers to run quickly, and that means not always back-tracking to consider every possibility for rearranging locals, or various other things. If looking for an optimization would take quadratic time, or multiply the time taken for other steps by a significant constant, it had better be an important optimization. (IDK how hard it might be to implement a search for opportunities to use push, especially if you keep it simple and don't spend time optimizing the stack layout for it.)
However, assuming there are other locals which will be used later, we can allocate them in the gaps between any we spill early. So the space doesn't have to be wasted, we can simply come along later and use mov [rsp+12], eax to store between two 32-bit values we pushed.
A tiny array of long, with non-constant contents
int ext_longarr(long *);
void longarr_arg(long a, long b, long c) {
long arr[] = {a,b,c};
ext_longarr(arr);
}
gcc/clang/ICC/MSVC follow their normal pattern, and use mov stores:
longarr_arg(long, long, long): # #longarr_arg(long, long, long)
sub rsp, 24
mov rax, rsp # this is clang being silly
mov qword ptr [rax], rdi # it could have used [rsp] for the first store at least,
mov qword ptr [rax + 8], rsi # so it didn't need 2 reg,reg MOVs to avoid clobbering RDI before storing it.
mov qword ptr [rax + 16], rdx
mov rdi, rax
call ext_longarr(long*)
add rsp, 24
ret
But it could have stored an array of the args like this:
longarr_arg_handtuned:
push rdx
push rsi
push rdi # leave stack 16B-aligned
mov rdi, rsp
call ext_longarr(long*)
add rsp, 24
ret
With more args, we start to get more noticeable benefits especially in code-size when more of the total function is spent storing to the stack. This is a very synthetic example that does nearly nothing else. I could have used volatile int a = 1;, but some compilers treat that extra-specially.
Reasons for not building stack frames gradually
(probably wrong) I think stack unwinding for exceptions, and debug formats, don't support arbitrary playing around with the stack pointer. So at least before making any call instructions, a function is supposed to have offset RSP as much as it's going to for all future function calls in this function.
But that can't be right, because alloca and C99 variable-length arrays would violate that. There may be some kind of toolchain reason outside the compiler itself for not looking for this kind of optimization.
This gcc mailing list post about disabling -maccumulate-outgoing-args for tune=default (in 2014) was interesting. It pointed out that more push/pop led to larger unwind info (.eh_frame section), but that's metadata that's normally never read (if no exceptions), so larger total binary but smaller / faster code. Related: this shows what -maccumulate-outgoing-args does for gcc code-gen.
Obviously the examples I chose were trivial, where we're pushing the input parameters unmodified. More interesting would be when we calculate some things in registers from the args (and data they point to, and globals, etc.) before having a value we want to spill.
If you have to spill/reload anything between function entry and later pushes, you're creating extra stack-sync uops on Intel. On AMD, it could still be a win to do push rbx / blah blah / mov [rsp-32], eax (spill to the red zone) / blah blah / push rcx / imul ecx, [rsp-24], 12345 (reload the earlier spill from what's still the red zone, at a different offset).
Mixing push and [rsp] addressing modes is less efficient (on Intel CPUs because of stack-sync uops), so compilers would have to carefully weigh the tradeoffs to make sure they're not making things slower. sub / mov is well-known to work well on all CPUs, even though it can be costly in code-size, especially for small constants.
"It's hard to keep track of the offsets" is a totally bogus argument. It's a computer; re-calculating offsets from a changing reference is something it has to do anyway when using push to put function args on the stack. I think compilers could run into problems (i.e. need more special-case checks and code, making them compile slower) if they had more than 128B of locals, so you couldn't always mov store below RSP (into what's still the red-zone) before moving RSP down with future push instructions.
Compilers already consider multiple tradeoffs, but currently growing the stack frame gradually isn't one of the things they consider. push wasn't as efficient before Pentium-M introduced the stack engine, so efficient push even being available is a somewhat recent change as far as redesigning how compilers think about stack layout choices goes.
Having a mostly-fixed recipe for prologues and for accessing locals is certainly simpler.
This requires disabling stack frames as well though.
It doesn't, actually. Simple stack frame initialisation can use either enter or push ebp \ mov ebp, esp \ sub esp, x (or, instead of the sub, a lea esp, [ebp - x] can be used). Instead of, or in addition to, these, values can be pushed onto the stack to initialise the variables, or any random register can be pushed just to move the stack pointer without initialising the slot to any particular value.
Here's an example (for 16-bit 8086 real/V 86 Mode) from one of my projects: https://bitbucket.org/ecm/symsnip/src/ce8591f72993fa6040296f168c15f3ad42193c14/binsrch.asm#lines-1465
save_slice_farpointer:
[...]
.main:
[...]
lframe near
lpar word, segment
lpar word, offset
lpar word, index
lenter
lvar word, orig_cx
push cx
mov cx, SYMMAIN_index_size
lvar word, index_size
push cx
lvar dword, start_pointer
push word [sym_storage.main.start + 2]
push word [sym_storage.main.start]
The lenter macro sets up (in this case) only push bp \ mov bp, sp and then lvar sets up numeric defs for offsets (from bp) to variables in the stack frame. Instead of subtracting from sp, I initialise the variables by pushing into their respective stack slots (which also reserves the stack space needed).

Optimizing a C function call using 64-bit MASM

Currently using this 64-bit MASM code to call a C runtime function such as memcmp(). I recall this convention was from a GoAsm article on optimizations.
memcmp PROTO;:QWORD,:QWORD,:QWORD
PUSH RSP
PUSH QWORD PTR [RSP]
AND SPL,0F0h
MOV R8,R11
MOV RDX,R10
MOV RCX,RAX
SUB RSP,32
CALL memcmp
LEA RSP,[RSP+40]
POP RSP
Is this a valid optimized version below?
memcmp PROTO;:QWORD,:QWORD,:QWORD
PUSH RSP
PUSH QWORD PTR [RSP]
AND RSP,-16 ; new
MOV R8,R11
MOV RDX,R10
MOV RCX,RAX
LEA RSP,[RSP-32] ; new
CALL memcmp
LEA RSP,[RSP+40]
POP RSP
The justification for replacing
AND SPL,0F0h
with
AND RSP,-16
is that it avoids partial register updates. See Understanding fastcall stack frame.
Replacing
SUB RSP,32
with
LEA RSP,[RSP-32]
is that if ensuing instructions do not depend on the flags updated by the subtraction, then not updating the flags will be more efficient as well.
Why does GCC emit "lea" instead of "sub" for subtraction?
In this case, are there other optimization tricks too?
AND: yes, the original code was silly and not saving any code-size (SPL takes a REX prefix, too, just like 64-bit operand-size does).
LEA: pointless and a waste of code-size. x86 CPUs already avoid false dependencies on FLAGS via register renaming; that's necessary to efficiently run normal x86 code, which is full of instructions like add, sub, and, etc. Compilers would use lea much more heavily if that weren't the case. The answer on that linked Q&A is wrong and should be downvoted / deleted. The only danger is on a few less-common CPUs (Pentium 4 and Silvermont, for different reasons) from instructions like inc that only write some flags. (INC instruction vs ADD 1: Does it matter?). Even the cost of inc on Silvermont-family is pretty minor, just an extra uop but not during decode, so it doesn't stall.
add is not slower than lea on any CPUs, either itself or in its influence on later instructions. (Except in-order Atom pre-Silvermont, where lea ran earlier in the pipeline than add (on an actual AGU), so it could be better or worse depending on where data was coming from / going to). You'd only use lea in some cases like an adc loop where you actually need to keep CF unchanged so next iteration can read it. i.e. to not mess up a true dependency (RAW), nothing to do with avoiding a false (WAW) output dependency. (See Problems with ADC/SBB and INC/DEC in tight loops on some CPUs - note that cases where adc / inc / adc creates a partial-flag stall are cases where add would cause a correctness problem, so I'm not counting that as a case where add would make later instructions faster.)
You probably don't need to save the old RSP; the ABI requires 16-byte stack alignment before a call, and that includes your caller (unless you're getting called from code that doesn't follow the ABI, in which case you don't have a known RSP alignment relative to a 16-byte boundary).
Normally you'd just do sub rsp, 40 like a compiler would, to realign RSP and reserve space for the shadow space. (And you'd do this at the top/bottom of the function, not around every call, along with saving/restoring call-preserved registers).
(In practice memcmp is unlikely to care about stack alignment, unless it needs to save/restore some more XMM regs. The Windows x64 calling convention unwisely only has 6 call-clobbered x/ymm registers, and that might be slightly tight depending on how much loop unrolling they do in a hand-written(?) memcmp.)
And even if you did need to handle an unknown incoming RSP alignment, saving RSP to two different locations for pop rsp is still not a very efficient way to go about it. Normally you'd just use RBP to make a traditional frame pointer to clean up with mov rsp, rbp / pop rbp, which works regardless of unknown adjustment to RSP. e.g. even in functions that use alloca (or in asm, that do an unknown number of pushes or variable-sized sub rsp, which is effectively the same thing as and rsp, -16).
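A function using alloca is the classic case where the epilogue can't be a fixed add rsp, N, and where a frame pointer makes cleanup trivial; a sketch (assuming glibc's alloca.h):

#include <alloca.h>
#include <string.h>

void use_scratch(const char *s) {
    char *buf = alloca(strlen(s) + 1);  /* unknown, run-time adjustment to RSP */
    strcpy(buf, s);
    /* the epilogue restores RSP from RBP (mov rsp, rbp / pop rbp),
       which works no matter how far alloca moved RSP */
}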

What does an lea instruction after a ret instruction mean?

I found x86 lea instructions in an executable file made using clang and gcc.
The lea instructions are after the ret instruction as shown below.
0x???????? <func>
...
pop %ebx
pop %ebp
ret
lea 0x0(%esi,%eiz,1),%esi
lea 0x0(%edi,%eiz,1),%edi
0x???????? <next_func>
...
What are these lea instructions used for? There is no jmp instruction to the lea instructions.
My environment is Ubuntu 12.04 32-bit and gcc 4.6.3.
It's probably not anything; it's just padding to let the next function start at an address that's probably a multiple of at least 8 (and quite possibly 16).
Depending on the rest of the code, it's possible that it's actually a table. Some implementations of a switch statement, for example, use a constant table that's often stored in the code segment (even though, strictly speaking, it's more like data than code).
The first is a lot more likely though. As an aside, such space is often filled with 0xCC instead. This is the single-byte breakpoint instruction (int3), so if some undefined behavior results in attempting to execute that padding, execution immediately stops and breaks to the debugger (if available).
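For what it's worth, the table possibility looks like this at the source level: a dense switch such as the one below may compile to an indirect jump through a constant table (or a table of values), which some toolchains place in the code segment next to the function. A sketch; the actual layout is entirely up to the compiler:

int classify(int x) {
    switch (x) {          /* dense cases: a table beats a chain of compares */
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    default: return -1;
    }
}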

Assembly: push vs movl

I have some C code that I compiled with gcc:
#include <stdio.h>

int main() {
    int x = 1;
    printf("%d\n", x);
    return 0;
}
I've run it through gdb 7.9.1 and come up with this assembler code for main:
0x0000000100000f40 <+0>: push %rbp # save original frame pointer
0x0000000100000f41 <+1>: mov %rsp,%rbp # stack pointer is new frame pointer
0x0000000100000f44 <+4>: sub $0x10,%rsp # make room for vars
0x0000000100000f48 <+8>: lea 0x47(%rip),%rdi # 0x100000f96
0x0000000100000f4f <+15>: movl $0x0,-0x4(%rbp) # put 0 on the stack
0x0000000100000f56 <+22>: movl $0x1,-0x8(%rbp) # put 1 on the stack
0x0000000100000f5d <+29>: mov -0x8(%rbp),%esi
0x0000000100000f60 <+32>: mov $0x0,%al
0x0000000100000f62 <+34>: callq 0x100000f74
0x0000000100000f67 <+39>: xor %esi,%esi # set %esi to 0
0x0000000100000f69 <+41>: mov %eax,-0xc(%rbp)
0x0000000100000f6c <+44>: mov %esi,%eax
0x0000000100000f6e <+46>: add $0x10,%rsp # move stack pointer to original location
0x0000000100000f72 <+50>: pop %rbp # reclaim original frame pointer
0x0000000100000f73 <+51>: retq
As I understand it, push %rbp pushes the frame pointer onto the stack, so we can retrieve it later with pop %rbp. Then, sub $0x10,%rsp makes 0x10 (16) bytes of room on the stack so we can put stuff on it.
Later interactions with the stack move variables directly into the stack via memory addressing, rather than pushing them onto the stack:
movl $0x0, -0x4(%rbp)
movl $0x1, -0x8(%rbp)
Why does the compiler use movl rather than push to get this information onto the stack?
Does referencing the register after the memory address also put that value into that register?
It is very common for modern compilers to move the stack pointer once at the beginning of a function and move it back at the end. This allows for more efficient indexing because the reserved space can be treated as a randomly addressable region rather than a strict LIFO stack. For example, values which are suddenly found to be of no use (perhaps due to an optimized short-circuited operator) can simply be ignored, rather than forcing one to pop them off the stack.
Perhaps in simpler days, there was a performance reason to use push. With modern processors, there is no advantage, so there's no reason to make special cases in the compiler to use push/pop when possible. It's not like compiler-written assembly code is readable!
While Cort is correct, there is another important reason for this practice of apparently allocating space on the stack. According to the ABI, function calls must find the stack 16-byte aligned. Rather than fiddling with the stack every single time a call needs to be made from a function, it is generally easier and more efficient to adjust the stack for proper alignment first and then modify the values that might otherwise have been pushed onto it.
So, the stack is absolutely adjusted for local variable space, but it is also adjusted to provide correct stack alignment for calls into the standard library.
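One concrete reason the 16-byte figure matters: compilers freely emit aligned 16-byte SSE stores for stack objects, on the assumption that every caller respected the ABI. A sketch of the kind of code that can fault if the stack is misaligned (whether the compiler actually picks an aligned store like movaps here is its choice):

void zero_pair(double *out) {
    double d[2] = {0.0, 0.0};  /* a 16-byte object the compiler may zero
                                  with one aligned store (e.g. movaps),
                                  which faults on a misaligned stack */
    out[0] = d[0];
    out[1] = d[1];
}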
I'm not an authority on assemblers or compilers but I've played around with MASM back in the day and did spend a whole bunch of time with WinDbg while debugging production C++ issues.
I think the answer to your question is because it's easier.
push/pop instructions write to and read from the stack, but they also modify the stack pointer as they are processed. A C/C++ compiler uses the stack for all its local variables. It does so by shifting the stack pointer by the exact number of bytes needed to hold all local variables, right when you enter the function.
After that, all those variables can be read and written from anywhere in the function, as many times as you want, simply by using mov instructions. If you look at pure assembly, you might question why create a hole in the stack just to copy two values into that space using mov when you could have done two push instructions.
But look at it from the compiler author's perspective. The process of entering a function and allocating stack for local variables is coded separately and is completely decoupled from the process of reading/writing those variables.
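A sketch of the kind of function that shows why this decoupling is convenient: the slots are reserved once up front, then read and rewritten in an order that has nothing to do with push/pop's LIFO discipline:

int fib_iter(int n) {
    int a = 0, b = 1;    /* one stack-pointer adjustment reserves both slots */
    while (n-- > 0) {
        int t = a + b;   /* each update below is just a mov to a fixed  */
        a = b;           /* frame offset; the access pattern never      */
        b = t;           /* matches the order the slots were created in */
    }
    return a;
}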

stack operations on a basic C program

I'm disassembling this basic C code, trying to figure out what operations are done on the stack. I'm doing it on a VM, 32-bit, gcc 4.4.3, Ubuntu-based distro. I compiled the code with these flags:
gcc -ggdb -mpreferred-stack-boundary=2 -fno-stack-protector -o ExploitMe ExploitMe.c
#include<stdio.h>
#include<string.h>
int main(int argc, char **argv)
{
char buffer[80];
strcpy(buffer, argv[1]);
return 1;
}
The problem is that I cannot figure out why, at <+3>, the stack pointer is moved by 0x58; the buffer is 80 (0x50) bytes long, so shouldn't it be 0x50?
dump of assembler code for function main:
0x080483e4 <+0>: push %ebp
0x080483e5 <+1>: mov %esp,%ebp
=> 0x080483e7 <+3>: sub $0x58,%esp
0x080483ea <+6>: mov 0xc(%ebp),%eax
0x080483ed <+9>: add $0x4,%eax
0x080483f0 <+12>: mov (%eax),%eax
0x080483f2 <+14>: mov %eax,0x4(%esp)
0x080483f6 <+18>: lea -0x50(%ebp),%eax
0x080483f9 <+21>: mov %eax,(%esp)
0x080483fc <+24>: call 0x804831c <strcpy@plt>
0x08048401 <+29>: mov $0x1,%eax
0x08048406 <+34>: leave
0x08048407 <+35>: ret
End of assembler dump.
I'm stuck on it. I see later that it uses the expected length, but what is the program doing between those ops?
0x080483f6 <+18>: lea -0x50(%ebp),%eax
Thank you
The compiler is free to arrange the stack however it sees fit.
The other 8 bytes are for the arguments to strcpy. Rather than push them onto the stack, the compiler has realised that it can simply subtract an extra 8 bytes from the stack pointer and then store the arguments into that space directly. This means that the stack pointer only has to be adjusted once.
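So the arithmetic works out as follows (a breakdown using the offsets visible in the disassembly above):

/* sub $0x58,%esp reserves:
 *   0x50   char buffer[80], at -0x50(%ebp)
 * + 0x08   outgoing args for strcpy: dest at (%esp), src at 0x4(%esp)
 * = 0x58
 */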
It is probably allocating a couple more locations for storing the passed-in parameters (argv, argc), and/or it needs some more local storage. Compilers do whatever they want to implement the high-level code; the same code will produce dozens/hundreds of different assembly language sequences depending on the compiler, version, and optimization settings, as well as configure/build settings from when the compiler itself was compiled.
You often see this sort of stack frame though, usually due to a combination of performance and instruction set features/limitations. It is much easier to code and debug if you move the stack pointer once, or make a copy of it in another register: within the function, everything is referenced from one static point while the preparing, calling, and cleaning up of function calls messes with the real stack pointer.
You will often also see that the stack frame leaves room for the passed-in parameters and other local variables even if optimization has removed the need for those variables to actually spend any time on the stack. The need for a stack frame and its size is determined up front, optimization comes later, and the compiler doesn't always go back and realize that another pass over the function could make the stack frame smaller. Likewise, the compiler writer can debug more easily if they know their stack frame always starts with the passed-in parameters and then the local variables in order: very fast and easy to read and debug the code, just as an example.
The bottom line, though, is Oli's answer: the compiler can do whatever it wants so long as it implements your code. My extension to that is that the output from the same high-level code varies widely depending on the compiler and options. And it is rarely perfectly optimized.
