How do you get the size of a C instruction? For example, I want to know: if I am working on Windows XP with an Intel Core 2 Duo processor, how much space does the instruction
while(1==9)
{
    printf("test");
}
take?
C does not have a notion of "instruction", let alone a notion of "size of an instruction", so the question doesn't make sense in the context of "programming in C".
The fundamental element of a C program is a statement, which can be arbitrarily complex.
If you care about the details of your implementation, simply inspect the machine code that your compiler generates. It is broken down into machine instructions, and you can usually see how many bytes are required to encode each one.
For example, I compile the program
#include <cstdio>
int main() { for (;;) std::printf("test"); }
with
g++ -c -o /tmp/out.o /tmp/in.cpp
and disassemble with
objdump -d /tmp/out.o -Mintel
and I get:
00000000 <main>:
0: 55 push ebp
1: 89 e5 mov ebp,esp
3: 83 e4 f0 and esp,0xfffffff0
6: 83 ec 10 sub esp,0x10
9: c7 04 24 00 00 00 00 mov DWORD PTR [esp],0x0
10: e8 fc ff ff ff call 11 <main+0x11>
15: eb f2 jmp 9 <main+0x9>
As you can see, the call instruction requires 5 bytes in my case, and the jump for the loop requires 2 bytes. (Also note the conspicuous absence of a ret instruction.)
You can look up the instruction in your favourite x86 documentation and see that E8 is the single-byte call opcode; the subsequent four bytes are the operand.
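If you want to compute a call's target yourself: the four operand bytes are a little-endian signed displacement, counted from the address of the next instruction. A minimal sketch (the helper name is mine; the buffer is just the call from the dump above):
#include <stdint.h>
#include <stdio.h>

/* Decode an E8 near call: the operand is a little-endian signed
   32-bit displacement relative to the *next* instruction. */
static uint32_t call_target(const uint8_t *insn, uint32_t addr)
{
    uint32_t rel = (uint32_t)insn[1]
                 | (uint32_t)insn[2] << 8
                 | (uint32_t)insn[3] << 16
                 | (uint32_t)insn[4] << 24;
    return addr + 5 + rel; /* 5 = encoded length of call rel32 */
}

int main(void)
{
    const uint8_t call_insn[] = { 0xe8, 0xfc, 0xff, 0xff, 0xff };
    /* the call at offset 0x10 above: prints "target: 0x11" */
    printf("target: 0x%x\n", (unsigned)call_target(call_insn, 0x10));
    return 0;
}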
gcc has an extension that returns the address of a label as a value of type void*.
Maybe you can use that for your purposes.
/* UNTESTED --- USE AT YOUR OWN RISK */
#include <stdio.h>

int main(void)
{
labelbegin:
    while (1 == 9) {
        printf("test\n");
    }
labelend:
    printf("length: %d\n",
           (int)((unsigned char *)&&labelend - (unsigned char *)&&labelbegin));
    return 0;
}
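Note that once the optimizer runs, the compiler is free to move, duplicate, or delete the code between the two labels, so treat any number this prints as informative at best, and compile with -O0.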
You can find the size of a binary code block as follows (x86 target, gcc compiler):
#include <stdio.h>

#define CODE_BEGIN(name) __asm__ __volatile__ ("jmp ."name"_op_end\n\t" \
                                               "."name"_op_begin:\n\t")
#define CODE_END(name) __asm__ __volatile__ ("."name"_op_end:\n\t" \
                                             "movl $."name"_op_begin, %0\n\t" \
                                             "movl $."name"_op_end, %1\n\t" \
                                             : "=g"(begin), "=g"(end))

void func(void)
{
    unsigned int begin, end;
    CODE_BEGIN("func");
    /* ... your code goes here ... */
    CODE_END("func");
    printf("Code length is %u\n", end - begin);
}
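(The movl instructions and the unsigned int variables assume a 32-bit x86 target; on a 64-bit machine, compile with -m32.)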
This technique is used by "lazy JIT" compilers that copy blocks of machine code compiled from a high-level language (C) instead of emitting instructions themselves, the way normal JIT compilers do.
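As a rough sketch of what "copying binary code" can mean (assuming a POSIX system that permits writable+executable mappings, position-independent code between the labels, and a helper name of my own choosing):
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

typedef void (*code_fn)(void);

/* Copy the machine code between begin and end into an executable
   buffer and hand it back as a callable function pointer. */
static code_fn copy_code(unsigned int begin, unsigned int end)
{
    size_t len = end - begin;
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return NULL;
    memcpy(buf, (const void *)(uintptr_t)begin, len);
    return (code_fn)buf;
}
On hardened systems that forbid simultaneously writable and executable pages, you would map the buffer writable first and flip it to executable with mprotect after copying.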
This is usually a job for a tracer, or a debugger that includes one; there are also external tools that can inspect and profile the memory usage of your program, such as Valgrind.
Refer to the documentation of your IDE of choice or your compiler of choice.
To add to the previous answers: on some occasions you can determine the size of code pieces by looking into the linker-generated map file, where you should be able to see global symbols and their sizes, including global functions. See here for more info: What's the use of .map files the linker produces? (for VC++, but the concept is similar for every compiler).
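With the GNU toolchain, for instance, you can request a map file at link time (file names here are arbitrary):
gcc -Wl,-Map=prog.map -o prog main.c
The resulting prog.map lists sections and symbols together with their addresses and sizes.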
Related
I have this code in C:
int main(void)
{
    int a = 1 + 2;
    return 0;
}
When I run objdump -x86-asm-syntax=intel -d a.out on the binary compiled with the -O0 flag under GCC 9.3.0_1, I get:
0000000100000f9e _main:
100000f9e: 55 push rbp
100000f9f: 48 89 e5 mov rbp, rsp
100000fa2: c7 45 fc 03 00 00 00 mov dword ptr [rbp - 4], 3
100000fa9: b8 00 00 00 00 mov eax, 0
100000fae: 5d pop rbp
100000faf: c3 ret
and with the -O1 flag:
0000000100000fc2 _main:
100000fc2: b8 00 00 00 00 mov eax, 0
100000fc7: c3 ret
which removes the unused variable a and the stack management altogether.
However, when I use Apple clang version 11.0.3 with -O0 and -O1, I get
0000000100000fa0 _main:
100000fa0: 55 push rbp
100000fa1: 48 89 e5 mov rbp, rsp
100000fa4: 31 c0 xor eax, eax
100000fa6: c7 45 fc 00 00 00 00 mov dword ptr [rbp - 4], 0
100000fad: c7 45 f8 03 00 00 00 mov dword ptr [rbp - 8], 3
100000fb4: 5d pop rbp
100000fb5: c3 ret
and
0000000100000fb0 _main:
100000fb0: 55 push rbp
100000fb1: 48 89 e5 mov rbp, rsp
100000fb4: 31 c0 xor eax, eax
100000fb6: 5d pop rbp
100000fb7: c3 ret
respectively.
I never get the stack-management part stripped off as with GCC.
Why does (Apple) Clang keep unnecessary push and pop?
This may or may not be a separate question, but with the following code:
int main(void)
{
    // return 0;
}
GCC generates the same assembly with or without the return 0;.
However, Clang -O0 leaves this extra
100000fa6: c7 45 fc 00 00 00 00 mov dword ptr [rbp - 4], 0
when there is return 0;.
Why does Clang keep these (probably) redundant ASM codes?
I suspect you were trying to see the addition happen.
int main(void)
{
    int a = 1 + 2;
    return 0;
}
but with optimization, say -O2, your dead code went away:
00000000 <main>:
0: 2000 movs r0, #0
2: 4770 bx lr
The variable a is local; it never leaves the function and does not rely on anything outside the function (globals, input variables, return values from called functions, etc.). So it has no functional purpose: it is dead code, it does nothing, so an optimizer is free to remove it, and it did.
So I assume you then tried little or no optimization and saw that the output was too verbose.
00000000 <main>:
0: cf 93 push r28
2: df 93 push r29
4: 00 d0 rcall .+0 ; 0x6 <main+0x6>
6: cd b7 in r28, 0x3d ; 61
8: de b7 in r29, 0x3e ; 62
a: 83 e0 ldi r24, 0x03 ; 3
c: 90 e0 ldi r25, 0x00 ; 0
e: 9a 83 std Y+2, r25 ; 0x02
10: 89 83 std Y+1, r24 ; 0x01
12: 80 e0 ldi r24, 0x00 ; 0
14: 90 e0 ldi r25, 0x00 ; 0
16: 0f 90 pop r0
18: 0f 90 pop r0
1a: df 91 pop r29
1c: cf 91 pop r28
1e: 08 95 ret
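A side note, not part of the original exchange: if you want the dead store to survive even with optimization enabled, one trick is to make the variable volatile, since volatile accesses count as observable behavior. A minimal sketch:
int main(void)
{
    volatile int a = 1 + 2;  /* the optimizer must keep this store */
    return 0;
}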
If you want to see the addition happen, then first off don't use main(); it has baggage, and the baggage varies among toolchains. So try something else:
unsigned int fun ( unsigned int a, unsigned int b )
{
    return(a+b);
}
Now the addition relies on external items, so the compiler cannot optimize any of it away:
00000000 <_fun>:
0: 1d80 0002 mov 2(sp), r0
4: 6d80 0004 add 4(sp), r0
8: 0087 rts pc
If we want to figure out which operand is a and which is b:
unsigned int fun ( unsigned int a, unsigned int b )
{
    return(a+(b<<1));
}
00000000 <_fun>:
0: 1d80 0004 mov 4(sp), r0
4: 0cc0 asl r0
6: 6d80 0002 add 2(sp), r0
a: 0087 rts pc
Want to see an immediate value?
unsigned int fun ( unsigned int a )
{
    return(a+0x321);
}
00000000 <fun>:
0: 8b 44 24 04 mov eax,DWORD PTR [esp+0x4]
4: 05 21 03 00 00 add eax,0x321
9: c3 ret
You can figure out what the compiler's return-address convention is, and so on.
But you will hit some limits trying to get the compiler to do things for you in order to learn asm. Likewise, you can easily take the code generated by these compilations (using -save-temps or -S, or disassembling and typing it in; I prefer the latter), but you can only get so far on your operating system with high-level/C-callable functions. Eventually you will want to do something bare-metal (on a simulator at first) to get maximum freedom, and to try instructions you can't normally try, or to try them in a way that is hard, or that you don't yet quite understand how to use, within the confines of an operating system and a function call. (Please do not use inline assembly until well down the road, or never; use real assembly, and ideally the assembler, not the compiler, to assemble it. Down the road, then try those things.)
One of these compilers was built to use, or defaults to using, a stack frame, so you need to tell the compiler to omit the frame pointer: -fomit-frame-pointer. Note that either toolchain can also be built to default to having no frame pointer:
../gcc-$GCCVER/configure --target=$TARGET --prefix=$PREFIX --without-headers --with-newlib --with-gnu-as --with-gnu-ld --enable-languages='c' --enable-frame-pointer=no
(Don't assume gcc or clang/llvm have a "standard" build; they are both customizable, and the binary you downloaded reflects someone's opinion of the standard build.)
You are using main(), which has the return-0-or-not thing and can/will carry other baggage, depending on the compiler and settings. Using something other than main() gives you the freedom to pick your inputs and outputs without warnings about not conforming to the short list of valid signatures for main().
For gcc, -O0 is ideally no optimization, although sometimes you see some. -O3 is "max, give me all you've got". -O2 is historically where folks live, if for no other reason than "I did it because everyone else is doing it". -O1 is no man's land for gnu: it has some items not in -O0, but not a lot of the good ones in -O2, so it depends heavily on your code whether you land in one or some of the optimizations associated with -O1. These numbered optimization levels, if your compiler even has a -O option, are just predefined lists: 0 means this list, 1 means that list, and so on.
There is no reason to expect any two compilers, or the same compiler with different options, to produce the same code from the same sources. If two competing compilers were able to do that most, if not all, of the time, something very fishy would be going on. Likewise, there is no reason to expect the lists of optimizations each compiler supports, or what each optimization does, to match, much less the -O1 lists to match between them, and so on.
There is no reason to assume that any two compilers or versions conform to the same calling convention for the same target. It is much more common now for the processor vendor to publish a recommended calling convention and for the competing compilers to conform to it, because why not: everyone else is doing it, or even better, "whew, I don't have to figure one out myself, and if this one fails I can blame them".
There are a lot of implementation-defined areas in C in particular, fewer in C++, but still. So your expectations of what comes out, and your comparisons between compilers, may differ for this reason as well. Just because one compiler implements some code in some way doesn't mean that is how the language works; sometimes it is how the compiler's author(s) interpreted the language spec, or where they had wiggle room.
Even with full optimization enabled, everything that compiler has to offer, there is no reason to assume that a compiler can outperform a human. It's an algorithm with limits, programmed by a human; it cannot outperform us. With experience it is not hard to examine the output of a compiler, for simple functions and often for larger ones, and find missed optimizations or other things that could have been done "better", for some opinion of "better". And sometimes you find the compiler just left something in that you think it should have removed, and sometimes you are right.
There is educational value, as shown above, in using a compiler to start learning assembly language. Even with decades of experience and dabbling with dozens of assembly languages/instruction sets, if there is a debugged compiler available I will very often start by disassembling simple functions to begin learning a new instruction set, then look those instructions up, and from what I find there start to get a feel for how to use it.
Very often starting with this one first:
unsigned int fun ( unsigned int a )
{
    return(a+5);
}
or
unsigned int fun ( unsigned int a, unsigned int b )
{
    return(a+b);
}
And going from there. Likewise, when writing a disassembler or a simulator for fun to learn an instruction set, I often rely on an existing assembler, since the documentation for a processor is often lacking; the first assembler and compiler for a processor are very often written with direct access to the silicon folks, and those that follow can use the existing tools, as well as the documentation, to figure things out.
So you are on a good path to start learning assembly language. I have strong opinions on which ones to start with (or not) to improve the experience and the chances of success, but I have been in too many battles on Stack Overflow this week, so I'll let that go. You can see that I chose an array of instruction sets in this answer, and even if you don't know them, you can probably figure out what the code is doing. "Standard" installs of llvm provide the ability to output assembly language for several instruction sets from the same source code. The gnu approach is that you pick the target (family) when you compile the toolchain, and that compiled toolchain is limited to that target/family; but you can easily install several gnu toolchains on your computer at the same time, be they variations on defaults/settings for the same target or different targets. A number of these are apt-gettable without having to learn to build the tools: arm, avr, msp430, x86, and perhaps some others.
I cannot speak to why it does not return zero from main when you didn't actually have any return code. See the comments by others and read up on the specs for that language (or ask that as a separate question, or see if it was already answered).
Now, you said Apple clang; I am not sure what that reference was to. I know that Apple has put a lot of work into llvm in general. Maybe you are on a Mac or in an Apple-supplied/suggested development environment; but check Wikipedia and others: clang had a lot of corporate help, not just Apple, so I am not sure what the reference was there. If you are on an Apple computer then the apt-gettable toolchains aren't going to make sense, but there are still lots of pre-built gnu (and llvm) based toolchains you can download and install, rather than attempting to build the toolchain from sources (which isn't difficult, BTW).
Is there a limit on the nesting of function calls according to C99?
Example:
result = fn1( fn2( fn3( ... fnN(parN1, parN2) ... ), par2), par1);
NOTE: this code is definitely not good practice because it is hard to manage; however, it is generated automatically from a model, so manageability issues do not apply.
There is no direct limitation, but a compiler is only required to support certain minimum limits for various categories:
From the C11 standard:
5.2.4.1 Translation limits
1 The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the following limits: 18)
...
63 nesting levels of parenthesized expressions within a full expression
...
4095 characters in a logical source line
18) Implementations should avoid imposing fixed translation limits whenever possible
No. There is no limit.
As an example, here is a C snippet:
int func1(int a){return a;}
int func2(int a){return a;}
int func3(int a){return a;}

void main()
{
    func1(func2(func3(16)));
}
The corresponding assembly code is:
0000000000000024 <main>:
24: 55 push %rbp
25: 48 89 e5 mov %rsp,%rbp
28: bf 10 00 00 00 mov $0x10,%edi
2d: e8 00 00 00 00 callq 32 <main+0xe>
32: 89 c7 mov %eax,%edi
34: e8 00 00 00 00 callq 39 <main+0x15>
39: 89 c7 mov %eax,%edi
3b: e8 00 00 00 00 callq 40 <main+0x1c>
40: 90 nop
41: 5d pop %rbp
42: c3 retq
The %edi register holds each function's argument and the %eax register holds its return value. As you can see, there are three callq instructions, corresponding to the three function calls; in other words, these nested functions are called one by one. There is no need to worry about the stack.
As mentioned in the comments, the compiler may crash when the code nests too deeply.
I wrote a simple Python script to test this:
nest = 64000
funcs = ""
call = ""
for i in range(1, nest + 1):
    funcs += "int func%d(int a){return a;}\n" % i
    call += "func%d(" % i
call += str(1)            # parameter
call += ")" * nest + ";"  # right parentheses

content = '''
%s
void main()
{
%s
}
''' % (funcs, call)

with open("test.c", "w") as fd:
    fd.write(content)
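To try it (assuming the script above is saved as gen.py):
python gen.py
gcc test.c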
nest = 64000 is OK, but 640000 will cause gcc-5.real: internal compiler error: Segmentation fault (program cc1).
No. Since these functions are executed one by one, there is no issue.
int res;
res = fnN(parN1, parN2);
....
res = fn2(res, par2);
res = fn1(res, par1);
The execution is linear, with the previous result being used for the next function call.
Edit: As explained in the comments, there might be a problem with the parser and/or compiler when dealing with such ugly code.
If this is not a purely theoretical question, the answer is probably "Try to rewrite your code so you don't need to do that, because the limit is more than enough for most sane use cases". If this is purely theoretical, or you really do need to worry about this limit and can't just rewrite, read on.
Section 5.2.4 of the C11 standard (latest draft, which is freely available and almost identical) specifies various limits on what implementations are required to support. If I'm reading that right, you can go up to 63 levels of nesting.
However, implementations are allowed to support more, and in practice they probably do. I had trouble finding the appropriate documentation for GCC (the closest I found was for expressions in the preprocessor), but I expect it doesn't have a hard limit except for system resources when compiling.
I know some C and a little bit of assembly and wanted to start learning about reverse engineering, so I downloaded the trial of Hopper Disassembler for Mac. I created a super basic C program:
int main() {
    int a = 5;
    return 0;
}
And compiled it with the -g flag (because I saw this before and wasn't sure if it mattered):
gcc -g simple.c
Then I opened the a.out file in Hopper Disassembler and clicked on the Pseudo Code button and it gave me:
int _main() {
    rax = 0x0;
    var_4 = 0x0;
    var_8 = 0x5;
    rsp = rsp + 0x8;
    rbp = stack[2047];
    return 0x0;
}
The only line I sort of understand here is the one setting a variable to 0x5. I'm unable to comprehend what all the additional lines are for (such as rsp = rsp + 0x8;) in such a simple program. Would anyone be willing to explain this to me?
Also if anyone knows of good sources/tutorials for an intro into reverse engineering that'd be very helpful as well. Thanks.
It looks like it is doing a particularly poor job of producing "disassembly pseudocode" (whatever that is -- is it a disassembler or a decompiler? It can't decide).
In this case it looks like it has elided the stack frame setup (the function prolog) but not the cleanup (the function epilog). So you'll get a much better idea of what is going on by using an actual disassembler to look at the actual disassembly:
$ gcc -c simple.c
$ objdump -d simple.o
simple.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <main>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: c7 45 fc 05 00 00 00 movl $0x5,-0x4(%rbp)
b: b8 00 00 00 00 mov $0x0,%eax
10: 5d pop %rbp
11: c3 retq
So what we have here is code to set up a stack frame (addresses 0-1), the assignment you wrote (4), setting up the return value (b), tearing down the frame (10), and then returning (11). You might see something different due to using a different version of gcc or a different target.
In the case of your disassembly, the first part has been elided (left out as an uninteresting housekeeping task) by the disassembler, but the second-to-last part (which undoes the first part) has not.
What you're looking at is decompiled code. Every decompiler's output will look something like that, because it does not try to recover variable names; they can be changed so often, and usually are.
So it puts them in 'var_??' form, with a number attached to the end. Once you learn about reverse engineering and know the language you're programming in very well, you can understand the code. It's no different from trying to de-obfuscate PHP, JavaScript code, etc.
If you ever get into reverse engineering malware be prepared because nothing is going to be easy. You're going to have different packers, obfuscators, messed-up code, VM detection routines, etc. So buckle down and get ready for a long road ahead if reverse engineering is your goal.
The code:
#include <stdio.h>
int main(int argc, char *argv[])
{
    //what happens?
    10*10;

    //what happens?
    printf("%d", 10*10);

    return 0;
}
What happens in memory/at compile time in these two lines? Is the result of 10*10 stored?
The statement
10*10;
has no effect. The compiler may choose to not generate any code at all for this statement. On the other hand,
printf("%d", 10*10);
passes the result of 10*10 to the printf function, which prints the result (100) to the standard output.
Ask your compiler! They'll probably all have an interesting answer.
Here's what gcc -c noop.c -o noop.o -g3 had to say (I ran the object code through objdump --disassemble --source to produce the output below):
#include <stdio.h>
void test_code()
{
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
10*10;
//what happens?
printf("%d", 10*10);
4: b8 00 00 00 00 mov $0x0,%eax
9: be 64 00 00 00 mov $0x64,%esi
e: 48 89 c7 mov %rax,%rdi
11: b8 00 00 00 00 mov $0x0,%eax
16: e8 00 00 00 00 callq 1b <test_code+0x1b>
}
1b: 5d pop %rbp
1c: c3 retq
My compiler took the 10*10 being passed to printf, multiplied it at compile time, and then used the result as an immediate ($0x64, a.k.a. 100 in decimal), putting it into a register to be used by printf:
mov $0x64,%esi
The 10*10 expression that was not assigned to any identifier was elided. Note that it is likely possible to find some compiler somewhere that decides to execute this computation and store the result in a register.
In the first case, nothing: an expression like that is folded to a value by the compiler, and as you are not assigning it to a variable, it does nothing; the compiler removes it.
In the second case, the value 100 is passed to printf.
Note that what happens depends on the compiler: in some, the operation is evaluated at compile time; in others, it is executed at run time.
10*10;
Not stored. My guess is that it should give a compiler warning or error.
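(For what it's worth, gcc with -Wall, which enables -Wunused-value, does warn about a statement with no effect; it is not an error.)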
printf("%d", 10*10);
Should print: 100. The value of 10*10 is calculated (most likely by the compiler, not at run time) and then handed to printf(); in a stack-based calling convention that means pushing the value (100) onto the stack, where it stays until the original (pre-printf()-call) stack frame is restored upon printf()'s return.
In the first case, since the result of the operation is not used anywhere, the compiler may optimise your code and not execute the instruction at all.
In the second case, the value is calculated (in a register or on the stack, depending on the calling convention) and printed to the console; it is not stored anywhere else.
The C standard describes what the program does on an abstract machine.
But to decide what actually happens, you need to keep one rule in mind: provided no constraint is violated, the compiler need only output code whose observable behavior is "as if" it did what you said.
It is explicitly allowed to use any other way it favors to achieve that result.
This rule is known colloquially as the "as-if" rule.
Thus, your program is equivalent to, e.g.:
#include <stdio.h>
int main(void) {
    fputs("100", stdout);
}
Or
#include <stdio.h>
int main(void) {
    putchar('1');
    putchar('0');
    putchar('0');
}
Most code I have ever read uses an int for standard error handling (return values from functions and such). But I am wondering if there is any benefit to be had from using a uint8_t: will a compiler -- read: most C compilers on most architectures -- produce instructions using the immediate addressing mode, i.e., embed the 1-byte integer into the instruction? The key instruction I'm thinking about is the compare after a function, using uint8_t as its return type, returns.
I could be thinking about this incorrectly, as introducing a 1-byte type just causes alignment issues -- there is probably a perfectly sane reason why compilers like to pack things into 4 bytes, and this is possibly the reason everyone just uses ints -- and since this is a stack-related issue rather than the heap, there is no real overhead.
Doing the right thing is what I'm thinking about. But let's say, for the sake of argument, that this is a popular cheap microprocessor for an intelligent watch, and that it is configured with 1k of memory but does have different addressing modes in its instruction set :D
Another question, to slightly specialize the discussion (x86): are the literals in
uint32_t x = func(); x == 1;
and
uint8_t x = func(); x == 1;
the same type? Or will the compiler generate an 8-bit literal in the second case? If so, it may use it to generate a compare instruction that takes the literal as an immediate value and the returned int as a register reference. See CMP instruction types.
Another Reference for the x86 Instruction Set.
Here's what one particular compiler will do for the following code:
extern int foo(void);

void bar(void)
{
    if (foo() == 31) { // error code 31
        do_something();
    } else {
        do_something_else();
    }
}
0: 55 push %ebp
1: 89 e5 mov %esp,%ebp
3: 83 ec 08 sub $0x8,%esp
6: e8 fc ff ff ff call 7 <bar+0x7>
b: 83 f8 1f cmp $0x1f,%eax
e: 74 08 je 18 <bar+0x18>
10: c9 leave
11: e9 fc ff ff ff jmp 12 <bar+0x12>
16: 89 f6 mov %esi,%esi
18: c9 leave
19: e9 fc ff ff ff jmp 1a <bar+0x1a>
A 3-byte instruction for the cmp. If foo() returns a char, we get:
b: 3c 1f cmp $0x1f,%al
If you're looking for efficiency, though, don't assume comparing stuff in %al is faster than comparing with %eax.
There may be very small speed differences between the different integral types on a particular architecture. But you can't rely on it; it may change if you move to different hardware, and it may even run slower if you upgrade to newer hardware.
And if we are talking about x86, as in the example you are giving, you are making a false assumption: that an immediate needs to be of type uint8_t.
Actually 8-bit immediates embedded into the instruction are of type int8_t and can be used with bytes, words, dwords and qwords, in C notation: char, short, int and long long.
So on this architecture there would be no benefit at all, neither code size nor execution speed.
You should use int or unsigned int for your calculations, and use smaller types only for compounds (structs/arrays). The reason is that int is normally defined to be the "most natural" integral type for the processor; all other derived types may require extra processing to work correctly. In our project, compiled with gcc on Solaris for SPARC, accesses to 8- and 16-bit variables added an instruction to the code: when loading a smaller type from memory, the compiler had to make sure the upper part of the register was properly set (sign extension for a signed type, zero for unsigned). This made the code longer and increased pressure on the registers, which hurt the other optimisations.
I've got a concrete example:
I declared two variables of a struct as uint8_t and got this code in SPARC asm:
if(p->BQ > p->AQ)
was translated into
ldub [%l1+165], %o5 ! <variable>.BQ,
ldub [%l1+166], %g5 ! <variable>.AQ,
and %o5, 0xff, %g4 ! <variable>.BQ, <variable>.BQ
and %g5, 0xff, %l0 ! <variable>.AQ, <variable>.AQ
cmp %g4, %l0 ! <variable>.BQ, <variable>.AQ
bleu,a,pt %icc, .LL586 !
And here is what I got when I declared the two variables as unsigned int:
lduw [%l1+168], %g1 ! <variable>.BQ,
lduw [%l1+172], %g4 ! <variable>.AQ,
cmp %g1, %g4 ! <variable>.BQ, <variable>.AQ
bleu,a,pt %icc, .LL587 !
Two fewer arithmetic operations, and two more registers free for other stuff.
Processors typically like to work with their natural register size, which in C is int.
Although there are exceptions, you're thinking too hard about a problem that does not exist.