How do I run a function from its hex version in C?

Say I want to convert a certain function into hex:
void func(char* string) {
    puts(string);
}
1139: 55 push %rbp
113a: 48 89 e5 mov %rsp,%rbp
113d: 48 83 ec 10 sub $0x10,%rsp
1141: 48 89 7d f8 mov %rdi,-0x8(%rbp)
1145: 48 8b 45 f8 mov -0x8(%rbp),%rax
1149: 48 89 c7 mov %rax,%rdi
114c: e8 df fe ff ff callq 1030 <puts@plt>
1151: 90 nop
1152: c9 leaveq
1153: c3 retq
This is what I got on x86_64: \x55\x48\x89\xe5\x48\x83\xec\x10\x48\x89\x7d\xf8\x48\x8b\x45\xf8\x48\x89\xc7\xe8\xdf\xfe\xff\xff\x90\xc9\xc3
I want to encrypt these bytes and embed them in this program, with a decryptor at the start that decrypts the instructions at run time, so the code can't be analyzed statically.
Converting the above function into hex and calling it through a function pointer doesn't work: it ends with SIGSEGV at push %rbp.
My aim is to make this code print Hi.
int main() {
    char* decrypted = decrypt(hexcode);
    void (*func)(char*) = (void (*)(char*)) decrypted;
    func("HI");
}
My questions are:
1. How do I convert a function into hex properly?
2. How do I then run this hex code from main as shown above?

If you want to execute a binary blob, then you need to do something like this:
void *p = mmap(0, blob_size, PROT_READ|PROT_WRITE, MAP_ANON|MAP_PRIVATE, -1, 0);
read(blob_file, p, blob_size);
mprotect(p, blob_size, PROT_READ|PROT_EXEC);
void (*UndefinedBehaviour)(char *x) = p;
UndefinedBehaviour("HI");
This allocates some memory, copies the blob into it, changes the memory protection to allow execution, then invokes the blob at its beginning. You need to add some error checking, and depending upon what sort of system you are on, it may be running malware monitors that prevent you from doing this.
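For reference, here is a fleshed-out sketch of that pattern with the error checking added (Linux; the file name blob.bin and the blob's calling signature are my own assumptions):
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("blob.bin", O_RDONLY);          /* hypothetical file holding the machine code */
    if (fd < 0) { perror("open"); return 1; }

    off_t blob_size = lseek(fd, 0, SEEK_END);     /* find the blob's size */
    lseek(fd, 0, SEEK_SET);

    /* map writable first; some systems refuse writable+executable mappings */
    void *p = mmap(NULL, blob_size, PROT_READ | PROT_WRITE,
                   MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    if (read(fd, p, blob_size) != blob_size) { perror("read"); return 1; }
    if (mprotect(p, blob_size, PROT_READ | PROT_EXEC) != 0) { perror("mprotect"); return 1; }

    void (*blob)(char *) = (void (*)(char *))p;   /* still undefined behaviour as far as C is concerned */
    blob("HI");
    return 0;
}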

Answer for 1.: It is near impossible to do this automatically, because there is no simple way to determine the length of a function's code - it depends on the target CPU, compiler optimizations, etc. The only way is "manual" analysis of the disassembled binary.

You can't for those instructions because they're not fully position-independent and self-contained.
e8 df fe ff ff is a call rel32 (with a little-endian relative displacement as the call target). It only works if that displacement reaches the puts@plt stub, and that only happens in the executable you're disassembling, where this code appears at a fixed distance from the PLT. (So the executable itself is position-independent when relocated as a whole, but taking the machine code for one function and trying to run it from some other address will break.)
In theory you could fixup the call target using a function pointer to puts in some code that included this machine code in an array, but if you're trying to make shellcode you can't depend on the "target" process helping you that way.
Instead you should use system calls directly via the syscall instruction - for example, on Linux, a syscall with RAX=1=__NR_write is write. (Not via their libc wrapper functions like write(), which would have exactly the same problem as puts.)
Then you can refer to How to get c code to execute hex bytecode? for how to put machine code in a C array, make sure that's in an executable page (e.g. gcc -z execstack or mprotect or mmap), and cast that to a function pointer + call it like you're doing here.
ends with SIGSEGV at push %rbp
Yup, code-fetch from a page without EXEC permission will do that. gcc -z execstack is an easy way to fix that, or mmap like other answers suggest, at which point execution will get as far as the call -289 and fault or run bad instructions.
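Putting those pieces together, here is a minimal sketch (my own, assuming x86-64 Linux) of a fully position-independent, self-contained blob: it prints Hi by invoking the write system call directly, so there is no PLT-relative call left to break. The byte encodings are hand-assembled, so verify them with an assembler before relying on them.
#include <string.h>
#include <sys/mman.h>

/* write(1, "Hi\n", 3); ret  --  no absolute addresses, no PLT */
static const unsigned char code[] = {
    0xb8, 0x01, 0x00, 0x00, 0x00,              /* mov eax, 1    (__NR_write)   */
    0xbf, 0x01, 0x00, 0x00, 0x00,              /* mov edi, 1    (fd = stdout)  */
    0x48, 0x8d, 0x35, 0x08, 0x00, 0x00, 0x00,  /* lea rsi, [rip+8]  -> msg     */
    0xba, 0x03, 0x00, 0x00, 0x00,              /* mov edx, 3    (count)        */
    0x0f, 0x05,                                /* syscall                      */
    0xc3,                                      /* ret                          */
    'H', 'i', '\n'                             /* msg                          */
};

int main(void) {
    /* map writable, copy, then flip to executable (avoids needing a W+X page) */
    void *p = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE,
                   MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED) return 1;
    memcpy(p, code, sizeof code);
    if (mprotect(p, sizeof code, PROT_READ | PROT_EXEC) != 0) return 1;

    void (*func)(void) = (void (*)(void))p;    /* still UB as far as ISO C goes */
    func();                                    /* prints "Hi" */
    return 0;
}
Because the only address the blob needs (the string) is computed relative to RIP at run time, it works no matter where mmap happens to place it.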

Related

Tiny C Compiler's generated code emits extra (unnecessary?) NOPs and JMPs

Can someone explain why this code:
#include <stdio.h>
int main()
{
    return 0;
}
when compiled with tcc using tcc code.c produces this asm:
00401000 |. 55 PUSH EBP
00401001 |. 89E5 MOV EBP,ESP
00401003 |. 81EC 00000000 SUB ESP,0
00401009 |. 90 NOP
0040100A |. B8 00000000 MOV EAX,0
0040100F |. E9 00000000 JMP fmt_vuln1.00401014
00401014 |. C9 LEAVE
00401015 |. C3 RETN
I guess that
00401009 |. 90 NOP
is maybe there for some memory alignment, but what about
0040100F |. E9 00000000 JMP fmt_vuln1.00401014
00401014 |. C9 LEAVE
I mean, why would the compiler insert this near jump that jumps to the next instruction? LEAVE would execute anyway.
I'm on 64-bit Windows generating 32-bit executable using TCC 0.9.26.
Superfluous JMP before the Function Epilogue
The JMP at the bottom that goes to the next statement was fixed in a commit. Version 0.9.27 of TCC resolves this issue:
When 'return' is the last statement of the top-level block
(very common and often recommended case) jump is not needed.
As for the reason it existed in the first place? The idea is that each function has a possible common exit point. If there is a block of code with a return in it at the bottom, the JMP goes to a common exit point where stack cleanup is done and the ret is executed. Originally the code generator erroneously emitted the JMP at the end of the function even when the return appeared just before the final } (closing brace). The fix checks to see if there is a return statement followed by a closing brace at the top level of the function. If there is, the JMP is omitted.
An example of code that has a return at a lower scope before a closing brace:
int main(int argc, char *argv[])
{
    if (argc == 3) {
        argc++;
        return argc;
    }
    argc += 3;
    return argc;
}
The generated code looks like:
401000: 55 push ebp
401001: 89 e5 mov ebp,esp
401003: 81 ec 00 00 00 00 sub esp,0x0
401009: 90 nop
40100a: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
40100d: 83 f8 03 cmp eax,0x3
401010: 0f 85 11 00 00 00 jne 0x401027
401016: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
401019: 89 c1 mov ecx,eax
40101b: 40 inc eax
40101c: 89 45 08 mov DWORD PTR [ebp+0x8],eax
40101f: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
; Jump to common function exit point. This is the `return argc` inside the if statement
401022: e9 11 00 00 00 jmp 0x401038
401027: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
40102a: 83 c0 03 add eax,0x3
40102d: 89 45 08 mov DWORD PTR [ebp+0x8],eax
401030: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
; Jump to common function exit point. This is the `return argc` at end of the function
401033: e9 00 00 00 00 jmp 0x401038
; Common function exit point
401038: c9 leave
401039: c3 ret
In versions prior to 0.9.27 the return argc inside the if statement would jump to a common exit point (the function epilogue). The return argc at the bottom of the function also jumps to the same common exit point. The problem is that the common exit point happens to be right after the top-level return argc, so the side effect is an extra JMP that happens to target the next instruction.
NOP after Function Prologue
The NOP isn't for alignment. Because of the way Windows implements guard pages for the stack (for programs in Portable Executable format), TCC has two types of prologues. If the local stack space required is < 4096 bytes (smaller than a single page), then you see this kind of code generated:
401000: 55 push ebp
401001: 89 e5 mov ebp,esp
401003: 81 ec 00 00 00 00 sub esp,0x0
The sub esp,0 isn't optimized out. It is the amount of stack space needed for local variables (in this case 0). If you add some local variables you will see the 0x0 in the SUB instruction change to match the amount of stack space needed for locals. This prologue requires 9 bytes. There is another prologue to handle the case where the stack space needed is >= 4096 bytes. If you add an array of 4096 bytes with something like:
char somearray[4096];
and look at the resulting instructions, you will see the function prologue change to a 10-byte prologue:
401000: b8 00 10 00 00 mov eax,0x1000
401005: e8 d6 00 00 00 call 0x4010e0
TCC's code generator assumes that the function prologue is always 10 bytes when targeting WinPE. This is primarily because TCC is a single-pass compiler. The compiler doesn't know how much stack space a function will use until after the function is processed. To get around not knowing this ahead of time, TCC pre-allocates 10 bytes for the prologue to fit the larger of the two forms. Anything shorter is padded to 10 bytes.
In the case where stack space needed < 4096 bytes the instructions used total 9 bytes. The NOP is used to pad the prologue to 10 bytes. For the case where >= 4096 bytes are needed, the number of bytes is passed in EAX and the function __chkstk is called to allocate the required stack space instead.
TCC is not an optimizing compiler, at least not really. Every single instruction it emitted for main is sub-optimal or not needed at all, except the ret. IDK why you thought the JMP was the only instruction that might not make sense for performance.
This is by design: TCC stands for Tiny C Compiler. The compiler itself is designed to be simple, so it intentionally doesn't include code to look for many kinds of optimizations. Notice the sub esp, 0: this useless instruction clearly comes from filling in a function-prologue template, and TCC doesn't even look for the special case where the offset is 0 bytes. Other functions need stack space for locals, or to align the stack before any child function calls, but this main() doesn't. TCC doesn't care, and blindly emits sub esp,0 to reserve 0 bytes.
(In fact, TCC is truly one pass, laying out machine code as it goes through the C statement by statement. It uses the imm32 encoding for sub so it will have room to fill in the right number (upon reaching the end of the function) even if it turns out the function uses more than 255 bytes of stack space. So instead of constructing a list of instructions in memory to finish assembling later, it just remembers one spot to fill in a uint32_t. That's why it can't omit the sub when it turns out not to be needed.)
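As a toy illustration of that one-pass backpatching trick (my own sketch; the buffer and function names are made up, this is not TCC's actual code): emit sub esp, imm32 with a zero placeholder, remember where the imm32 field sits, and patch it once the function's frame size is known.
#include <stdint.h>
#include <string.h>

static unsigned char codebuf[4096];    /* output buffer for generated machine code */
static size_t codelen;

/* Emit `sub esp, imm32` (81 ec id) with a zero placeholder and return the
   offset of the imm32 field so it can be patched later. */
static size_t emit_sub_esp_placeholder(void) {
    static const unsigned char sub_esp_imm32[6] = { 0x81, 0xec, 0, 0, 0, 0 };
    memcpy(codebuf + codelen, sub_esp_imm32, sizeof sub_esp_imm32);
    size_t imm_at = codelen + 2;       /* the imm32 starts after the 81 ec opcode bytes */
    codelen += sizeof sub_esp_imm32;
    return imm_at;
}

/* Called once the end of the function is reached and the frame size is known. */
static void patch_stack_size(size_t imm_at, uint32_t frame_bytes) {
    memcpy(codebuf + imm_at, &frame_bytes, sizeof frame_bytes);  /* assumes a little-endian host */
}

int main(void) {
    size_t fixup = emit_sub_esp_placeholder();
    /* ... generate the function body, discovering it needs 16 bytes of locals ... */
    patch_stack_size(fixup, 16);
    return 0;
}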
Most of the work in creating a good optimizing compiler that anyone will use in practice is the optimizer. Even parsing modern C++ is peanuts compared to reliably emitting efficient asm (which not even gcc / clang / icc can do all the time, even without considering autovectorization). Just generating working but inefficient asm is easy compared to optimizing; most of gcc's codebase is optimization, not parsing. See Basile's answer on Why are there so few C compilers?
The JMP (as you can see from #MichaelPetch's answer) has a similar explanation: TCC (until recently) didn't optimize the case where a function only has one return path, and doesn't need to JMP to a common epilogue.
There's even a NOP in the middle of the function. It's obviously a waste of code bytes and decode / issue front-end bandwidth and out-of-order window size. (Sometimes executing a NOP outside a loop or something is worth it to align the top of a loop which is branched to repeatedly, but a NOP in the middle of a basic block is basically never worth it, so that's not why TCC put it there. And if a NOP did help, you could probably do even better by reordering instructions or choosing larger instructions to do the same thing without a NOP. Even proper optimizing compilers like gcc/clang/icc don't try to predict this kind of subtle front-end effect.)
#MichaelPetch points out that TCC always wants its function prologue to be 10 bytes, because it's a single-pass compiler (and it doesn't know how much space it needs for locals until the end of the function, when it comes back and fills in the imm32). But Windows targets need stack probes when modifying ESP / RSP by more than a whole page (4096 bytes), and the alternate prologue for that case is 10 bytes, instead of 9 for the normal one without the NOP. So this is another tradeoff favouring compilation speed over good asm.
An optimizing compiler would xor-zero EAX (because that's smaller and at least as fast as mov eax,0), and leave out all the other instructions. Xor-zeroing is one of the most well-known / common / basic x86 peephole optimizations, and has several advantages other than code-size on some modern x86 microarchitectures.
main:
xor eax,eax
ret
Some optimizing compilers might still make a stack frame with EBP, but tearing it down with pop ebp would be strictly better than leave on all CPUs, for this special case where ESP = EBP so the mov esp,ebp part of leave isn't needed. pop ebp is still 1 byte, but it's also a single-uop instruction on modern CPUs, unlike leave which is 2 or 3 uops. (http://agner.org/optimize/, and see also other performance optimization links in the x86 tag wiki.) This is what gcc does. It's a fairly common situation; if you push some other registers after making a stack frame, you have to point ESP at the right place before pop ebx or whatever. (Or use mov to restore them.)
The benchmarks TCC cares about are compilation speed, not quality (speed or size) of the resulting code. For example, the TCC web site has a benchmark in lines/sec and MB/sec (of C source) vs. gcc3.2 -O0, where it's ~9x faster on a P4.
However, TCC is not totally braindead: it will apparently do some inlining, and as Michael's answer points out, a recent patch does leave out the JMP (but still not the useless sub esp, 0).

Segmentation fault when calling a function located in the heap

I'm trying to tweak the rules a little bit here, and malloc a buffer,
then copy a function to the buffer.
Calling the buffered function works, but the function throws a segmentation fault when I try to call another function from within it.
Any thoughts why?
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdlib.h>
int foo(int x)
{
    printf("%d\n", x);
}
int bar(int x)
{
}
int main()
{
    int foo_size = bar - foo;
    void* buf_ptr;
    buf_ptr = malloc(1024);
    memcpy(buf_ptr, foo, foo_size);
    mprotect((void*)(((int)buf_ptr) & ~(sysconf(_SC_PAGE_SIZE) - 1)),
             sysconf(_SC_PAGE_SIZE),
             PROT_READ|PROT_WRITE|PROT_EXEC);
    int (*ptr)(int) = buf_ptr;
    printf("%d\n", ptr(3));
    return 0;
}
This code will throw a segfault, unless I change the foo function to:
int foo(int x)
{
    // Anything but calling another function.
    x = 4;
    return x;
}
NOTE:
The code successfully copies foo into the buffer. I know I made some assumptions, but on my platform they're OK.
Your code is not position independent, and even if it were, you don't have the correct relocations to move it to an arbitrary position. Your call to printf (or any other function) will be done with pc-relative addressing (through the PLT, but that's beside the point here). This means that the instruction generated to call printf isn't a call to a static address but rather "call the function X bytes from the current instruction pointer". Since you moved the code, the call is done to a bad address. (I'm assuming i386 or amd64 here, but generally it's a safe assumption; people who are on weird platforms usually mention that.)
More specifically, x86 has two different instructions for function calls. One is a call relative to the instruction pointer which determines the destination of the function call by adding a value to the current instruction pointer. This is the most commonly used function call. The second instruction is a call to a pointer inside a register or memory location. This is much less commonly used by compilers because it requires more memory indirections and stalls the pipeline. The way shared libraries are implemented (your call to printf will actually go to a shared library) is that for every function call you make outside of your own code the compiler will insert fake functions near your code (this is the PLT I mentioned above). Your code does a normal pc-relative call to this fake function and the fake function will find the real address to printf and call that. It doesn't really matter though. Almost any normal function call you make will be pc-relative and will fail. Your only hope in code like this are function pointers.
You might also run into some restrictions on executable mprotect. Check the return value of mprotect: on my system your code doesn't work for one more reason - mprotect doesn't allow me to do this, probably because the backend memory allocator of malloc has additional restrictions that prevent executable protections on its memory. Which leads me to the next point:
You will break things by calling mprotect on memory that isn't managed by you. That includes memory you got from malloc. You should only mprotect things you've gotten from the kernel yourself through mmap.
Here's a version that demonstrates how to make this work (on my system):
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>
#include <err.h>
int
foo(int x, int (*fn)(const char *, ...))
{
    fn("%d\n", x);
    return 42;
}

int
bar(int x)
{
    return 0;
}

int
main(int argc, char **argv)
{
    size_t foo_size = (char *)bar - (char *)foo;
    int ps = getpagesize();
    void *buf_ptr = mmap(NULL, ps, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_ANON|MAP_PRIVATE, -1, 0);
    if (buf_ptr == MAP_FAILED)
        err(1, "mmap");
    memcpy(buf_ptr, foo, foo_size);
    int (*ptr)(int, int (*)(const char *, ...)) = buf_ptr;
    printf("%d\n", ptr(3, printf));
    return 0;
}
Here, I abuse knowledge of how the compiler will generate the code for the function call. By using a function pointer I force it to generate a call instruction that isn't pc-relative. Also, I manage the memory allocation myself so that we get the right permissions from the start and don't run into any restrictions that brk might have. As a bonus, the error handling actually helped me find a bug in the first version of this experiment, and I also corrected other minor bugs (like missing includes), which allowed me to enable warnings in the compiler and catch another potential problem.
If you want to dig deeper into this you can do something like this. I added two versions of the function:
int
oldfoo(int x)
{
    printf("%d\n", x);
    return 42;
}

int
foo(int x, int (*fn)(const char *, ...))
{
    fn("%d\n", x);
    return 42;
}
Compile the whole thing and disassemble it:
$ cc -Wall -o foo foo.c
$ objdump -S foo | less
We can now look at the two generated functions:
0000000000400680 <oldfoo>:
400680: 55 push %rbp
400681: 48 89 e5 mov %rsp,%rbp
400684: 48 83 ec 10 sub $0x10,%rsp
400688: 89 7d fc mov %edi,-0x4(%rbp)
40068b: 8b 45 fc mov -0x4(%rbp),%eax
40068e: 89 c6 mov %eax,%esi
400690: bf 30 08 40 00 mov $0x400830,%edi
400695: b8 00 00 00 00 mov $0x0,%eax
40069a: e8 91 fe ff ff callq 400530 <printf@plt>
40069f: b8 2a 00 00 00 mov $0x2a,%eax
4006a4: c9 leaveq
4006a5: c3 retq
00000000004006a6 <foo>:
4006a6: 55 push %rbp
4006a7: 48 89 e5 mov %rsp,%rbp
4006aa: 48 83 ec 10 sub $0x10,%rsp
4006ae: 89 7d fc mov %edi,-0x4(%rbp)
4006b1: 48 89 75 f0 mov %rsi,-0x10(%rbp)
4006b5: 8b 45 fc mov -0x4(%rbp),%eax
4006b8: 48 8b 55 f0 mov -0x10(%rbp),%rdx
4006bc: 89 c6 mov %eax,%esi
4006be: bf 30 08 40 00 mov $0x400830,%edi
4006c3: b8 00 00 00 00 mov $0x0,%eax
4006c8: ff d2 callq *%rdx
4006ca: b8 2a 00 00 00 mov $0x2a,%eax
4006cf: c9 leaveq
4006d0: c3 retq
The instruction for the function call in the printf case is "e8 91 fe ff ff". This is a pc-relative function call. 0xfffffe91 bytes in front of our instruction pointer. It's treated as a signed 32 bit value, and the instruction pointer used in the calculation is the address of the next instruction. So 0x40069f (next instruction) - 0x16f (0xfffffe91 in front is 0x16f bytes behind with signed math) gives us the address 0x400530, and looking at the disassembled code I find this at the address:
0000000000400530 <printf@plt>:
400530: ff 25 ea 0a 20 00 jmpq *0x200aea(%rip) # 601020 <_GLOBAL_OFFSET_TABLE_+0x20>
400536: 68 01 00 00 00 pushq $0x1
40053b: e9 d0 ff ff ff jmpq 400510 <_init+0x28>
This is the magic "fake function" I mentioned earlier. Let's not get into how this works. It's necessary for shared libraries to work and that's all we need to know for now.
The second function generates the function call instruction "ff d2". This means "call the function at the address stored inside the rdx register". No pc-relative addressing and that's why it works.
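The rel32 arithmetic above is easy to double-check with a couple of lines of C (my own sketch, reusing the numbers from this disassembly):
#include <stdint.h>
#include <stdio.h>

/* A call rel32 lands at: address of the *next* instruction + sign-extended rel32. */
static uint64_t call_target(uint64_t next_insn, int32_t rel32) {
    return next_insn + (int64_t)rel32;
}

int main(void) {
    /* e8 91 fe ff ff at 0x40069a: next instruction is at 0x40069f, rel32 = 0xfffffe91 = -0x16f */
    printf("%#llx\n", (unsigned long long)call_target(0x40069f, (int32_t)0xfffffe91));
    /* prints 0x400530, the printf@plt stub shown above */
    return 0;
}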
The compiler is free to generate the code the way it wants, provided the observable results are correct (the as-if rule). So what you are doing is simply an invocation of undefined behaviour.
Visual Studio sometimes uses relays. That means that the address of a function just points to a relative jump. That's perfectly allowed per the standard because of the as-if rule, but it would definitely break that kind of construction. Another possibility is to have local internal functions called with relative jumps but located outside of the function itself. In that case, your code would not copy them, and the relative calls would just point to random memory. That means that with different compilers (or even different compilation options on the same compiler) it could give the expected result, crash, or directly end the program without error, which is exactly UB.
I think I can explain a bit. First of all, if both your functions have no return statement within, undefined behaviour is invoked as per the standard §6.9.1/12. Secondly, what is most common on a lot of platforms, and apparently yours as well, is the following: relative addresses of functions are hardcoded into the binary code of functions. That means that if you have a call to "printf" within "foo" and you then move (and execute) it from another location, the address from which "printf" should be called turns bad.

why does compiler store variables in register? [duplicate]

Hi, I have been reading this kind of stuff in various docs:
register
Tells the compiler to store the variable being declared in a CPU register.
In standard C dialects, keyword register uses the following syntax:
register data-definition;
The register type modifier tells the compiler to store the variable being declared in a CPU register (if possible), to optimize access. For example,
register int i;
Note that TIGCC will automatically store often used variables in CPU registers when the optimization is turned on, but the keyword register will force storing in registers even if the optimization is turned off. However, the request for storing data in registers may be denied, if the compiler concludes that there is not enough free registers for use at this place.
http://tigcc.ticalc.org/doc/keywords.html#register
My point is not only about register. My point is: why would a compiler store variables in memory at all? The compiler's business is just to compile and generate an object file; the actual memory allocation happens at run time. Why does the compiler deal with this? I mean, without running the object file, does memory allocation happen in C just by compiling the file?
The compiler is generating machine code, and the machine code is used to run your program. The compiler decides what machine code it generates, and therefore makes decisions about what sort of allocation will happen at runtime. It's not executing that code when you type gcc foo.c; later, when you run the executable, it's the code GCC generated that's running.
This means that the compiler wants to generate the fastest code possible and makes as many decisions as it can at compile time; this includes how to allocate things.
The compiler doesn't run the code (unless it does a few rounds for profiling and better code generation), but it has to prepare it - this includes deciding how to keep the variables your program defines: whether to use fast and efficient storage such as registers, or the slower (and more prone to side effects) memory.
Initially, your local variables would simply be assigned locations in the stack frame (except of course for memory you explicitly use dynamic allocation for). If your function declares an int, the compiler would likely grow the stack by a few additional bytes, use that memory address for storing the variable, and pass it as an operand to any operation your code does on that variable.
However, since memory is slower (even when cached), and manipulating it puts more restrictions on the CPU, at a later stage the compiler may decide to try moving some variables into registers. This allocation is done through a complicated algorithm that tries to select the most reused and latency-critical variables that can fit within the limited number of logical registers your architecture has (while conforming to various restrictions, such as some instructions requiring an operand to be in this or that register).
There's another complication - some memory addresses may alias with external pointers in ways unknown at compilation time, in which case you cannot move them into registers. Compilers are usually a very cautious bunch and most of them avoid dangerous optimizations (otherwise they'd need to put in special checks to avoid nasty things).
After all that, the compiler is still polite enough to let you advise it which variable is important and critical to you, in case it missed it. By marking these with the register keyword you're basically asking it to make an attempt to optimize this variable by using a register for it, given that enough registers are available and that no aliasing is possible.
Here's a little example: Take the following code, doing the same thing twice but with slightly different circumstances:
#include "stdio.h"
int j;
int main() {
    int i;
    for (i = 0; i < 100; ++i) {
        printf ("i'm here to prevent the loop from being optimized\n");
    }
    for (j = 0; j < 100; ++j) {
        printf ("me too\n");
    }
}
Note that i is local and j is global (and therefore the compiler doesn't know if anyone else might access it during the run).
Compiling in gcc with -O3 produces the following code for main:
0000000000400540 <main>:
400540: 53 push %rbx
400541: bf 88 06 40 00 mov $0x400688,%edi
400546: bb 01 00 00 00 mov $0x1,%ebx
40054b: e8 18 ff ff ff callq 400468 <puts@plt>
400550: bf 88 06 40 00 mov $0x400688,%edi
400555: 83 c3 01 add $0x1,%ebx # <-- i++
400558: e8 0b ff ff ff callq 400468 <puts@plt>
40055d: 83 fb 64 cmp $0x64,%ebx
400560: 75 ee jne 400550 <main+0x10>
400562: c7 05 80 04 10 00 00 movl $0x0,1049728(%rip) # 5009ec <j>
400569: 00 00 00
40056c: bf c0 06 40 00 mov $0x4006c0,%edi
400571: e8 f2 fe ff ff callq 400468 <puts@plt>
400576: 8b 05 70 04 10 00 mov 1049712(%rip),%eax # 5009ec <j> (loads j)
40057c: 83 c0 01 add $0x1,%eax # <-- j++
40057f: 83 f8 63 cmp $0x63,%eax
400582: 89 05 64 04 10 00 mov %eax,1049700(%rip) # 5009ec <j> (stores j back)
400588: 7e e2 jle 40056c <main+0x2c>
40058a: 5b pop %rbx
40058b: c3 retq
As you can see, the first loop counter sits in ebx, and is incremented on each iteration and compared against the limit.
The second loop, however, was the dangerous one, and gcc decided to pass the index counter through memory (loading it into eax every iteration). This example serves to show how much better off you are when registers can be used, as well as how sometimes they can't be.
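A hypothetical follow-up to that example: copy the global into a local for the duration of the loop. Nothing else can legally observe the local, so the compiler is free to keep the counter in a register and store j back just once - I'd expect code much like the first loop, though the exact output depends on the compiler.
#include <stdio.h>

int j;

int main() {
    int local_j;
    for (local_j = 0; local_j < 100; ++local_j) {
        printf("me too\n");
    }
    j = local_j;   /* one store at the end instead of a load/store every iteration */
    return 0;
}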
The compiler needs to translate the code into machine instructions and tell the computer how to run the code. That includes how to perform operations (like multiplying two numbers) and how to store the data (stack, heap, or registers).

Does main have a return address, dynamic link or return value in C?

According to our book, each function has an activation record in the run-time stack in C. Each of these activation records has a return address, dynamic link, and return value. Does main have these also?
All of these terms are purely implementation details - C has no notion of "return addresses" or "dynamic links." It doesn't even have a notion of a "stack" at all. Most implementations of C have these objects in them, and in those implementations it is possible that they exist for main. However, there is no requirement that this happen.
Hope this helps!
If you disassemble functions you will realize that most of the time the stack doesn't even contain the return value - oftentimes the EAX register does (Intel x86).
You can also look up "calling conventions" - it all pretty much depends on the compiler.
C is a language; how it's translated into machine code is not 'its' business.
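A quick way to see the register return in practice (assuming a typical x86 system, where the value travels back in EAX/RAX): whatever main returns becomes the process exit status.
int main(void) {
    return 42;   /* run the program, then `echo $?` prints 42 */
}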
While this depends on the implementation, it is worth looking at a C program compiled with gcc. If you run objdump -d executable, you will see it disassembled and you can see how main() behaves. Here's an example:
08048680 <_start>:
...
8048689: 54 push %esp
804868a: 52 push %edx
804868b: 68 a0 8b 04 08 push $0x8048ba0
8048690: 68 30 8b 04 08 push $0x8048b30
8048695: 51 push %ecx
8048696: 56 push %esi
8048697: 68 f1 88 04 08 push $0x80488f1
804869c: e8 9f ff ff ff call 8048640 <__libc_start_main@plt>
80486a1: f4 hlt
...
080488f1 <main>:
80488f1: 55 push %ebp
80488f2: 89 e5 mov %esp,%ebp
80488f4: 57 push %edi
80488f5: 56 push %esi
80488f6: 53 push %ebx
...
8048b2b: 5b pop %ebx
8048b2c: 5e pop %esi
8048b2d: 5f pop %edi
8048b2e: 5d pop %ebp
8048b2f: c3 ret
You can see that main behaves similarly to a regular function in that it returns normally. In fact, if you look at the linux base documentation, you'll see that the call to __libc_start_main that we see from _start actually requires main to behave like a regular function.
In C/C++, main() is written just like a function, but isn't quite one. For example, you aren't allowed to call main() yourself (in C++), and it has several possible prototypes - an ordinary function can't do that in C! Whatever is returned from it gets passed to the operating system (and the program ends).
Individual C implementations might handle main() like a function called from "outside" for uniformity, but nobody forces them to do so (or disallows switching to some other way of doing it without telling anybody). There are traditional ways of implementing C, but nobody is forced to do it that way. It is just the simplest way on our typical architectures.
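To illustrate the "several possible prototypes" point, here is one of the two forms the C standard describes, the other being int main(void); the example body is my own:
#include <stdio.h>

int main(int argc, char *argv[]) {
    /* argc/argv are filled in by the startup code before main is reached */
    printf("argc = %d, program name = %s\n", argc, argv[0]);
    return 0;
}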

learning disassembly

In an attempt to understand what occurs underneath, I am making small C programs, then reversing them and trying to understand the objdump output.
The C program is:
#include <stdio.h>
int function(int a, int b, int c) {
    printf("%d, %d, %d\n", a,b,c);
}
int main() {
    int a;
    int *ptr;
    asm("nop");
    function(1,2,3);
}
The objdump output for function gives me the following.
080483a4 <function>:
80483a4: 55 push ebp
80483a5: 89 e5 mov ebp,esp
80483a7: 83 ec 08 sub esp,0x8
80483aa: ff 75 10 push DWORD PTR [ebp+16]
80483ad: ff 75 0c push DWORD PTR [ebp+12]
80483b0: ff 75 08 push DWORD PTR [ebp+8]
80483b3: 68 04 85 04 08 push 0x8048504
80483b8: e8 fb fe ff ff call 80482b8 <printf@plt>
80483bd: 83 c4 10 add esp,0x10
80483c0: c9 leave
Notice that before the call to printf, three DWORDs at offsets 16, 12 and 8 (they must be the arguments to function, in reverse order) are being pushed onto the stack. Then a hex address, which must be the address of the format string, is pushed.
My doubt is:
Rather than pushing the 3 DWORDs and the format string onto the stack with push, I expected to see esp being manually decremented and the values being written to the stack after that. How can one explain this behaviour?
Well, some machines have a stack pointer that is kind of like any other register, so the way you push something is, yes, with a decrement followed by a store.
But some machines, like x86-32/64, have a push instruction that does a macro-op: decrementing the pointer and doing the store.
Macro-ops, btw, have a funny history. At times, certain examples on certain machines have been slower than performing the elementary operations with simple instructions.
I doubt if that's frequently the case today. Modern x86 is amazingly sophisticated. The CPU will be disassembling your opcodes themselves into micro-ops which it then stores in a cache. The micro-ops have specific pipeline and time slot requirements and the end result is that there is a RISC cpu inside the x86 these days, and the whole thing goes really fast and has good architectural-layer code density.
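To make the decrement-then-store picture concrete, here is a toy model of a 32-bit push written in C (purely illustrative; stack_mem and esp are my own stand-ins, not anything the hardware exposes):
#include <stdint.h>
#include <string.h>

static uint8_t stack_mem[4096];              /* pretend this is the stack region */
static uint32_t esp = sizeof stack_mem;      /* x86 stacks grow downward */

/* What `push value` does, spelled out as two elementary steps. */
static void push32(uint32_t value) {
    esp -= 4;                                /* sub esp, 4 */
    memcpy(&stack_mem[esp], &value, 4);      /* mov [esp], value */
}

int main(void) {
    push32(3);                               /* function(1,2,3): cdecl pushes args right to left */
    push32(2);
    push32(1);
    return 0;
}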
The stack pointer is adjusted by each push instruction. That is why esp is copied to ebp on entry: the parameters are pushed again for printf, so each exists in two places - function's stack frame and printf's argument area - and because the pushes keep changing esp, the function addresses its own arguments through the stable ebp copy.
There is no mov [esp+x], [ebp+y] instruction - too many operands. It would take two instructions and use a register; push does it in one instruction.
This is the standard cdecl calling convention for x86 machines. There are several different types of calling conventions; you can read the following Wikipedia article about them:
http://en.wikipedia.org/wiki/X86_calling_conventions
It explains the basic principle.
You raise an interesting point which I think has not been directly addressed so far. I suppose that you have seen assembly code which looks something like this:
sub esp, X
...
mov [esp+Y], eax
call Z
This sort of disassembly is generated by certain compilers. What it is doing is extending the stack, then storing eax (which has hopefully been populated with something meaningful by that point) into the newly reserved space. This is actually equivalent to what the push mnemonic does. I can't answer why certain compilers generate this code instead, but my guess is that at some point doing it this way was judged to be more efficient.
In your effort to learn assembly language and disassemble binaries, you might find ODA useful. It's a web-based disassembler, which is handy for disassembling lots of different architectures without having to build binutils' objdump for each one of them.
http://onlinedisassembler.com/
