Limit for nested function calls in C (C99)

Is there a limit in nested calls of functions according to C99?
Example:
result = fn1( fn2( fn3( ... fnN(parN1, parN2) ... ), par2), par1);
NOTE: this code is definitely not good practice because it is hard to manage; however, it is generated automatically from a model, so manageability concerns do not apply.

There is no direct limitation, but a compiler is only required to support certain minimum limits for various categories:
From the C11 standard:
5.2.4.1 Translation limits
1 The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the following limits: 18)
...
63 nesting levels of parenthesized expressions within a full expression
...
4095 characters in a logical source line
18) Implementations should avoid imposing fixed translation limits whenever possible.
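To probe how far a given compiler actually goes, here is a small sketch of mine (not from the answer): a C program that emits a translation unit nesting parentheses to exactly the C11 minimum of 63 levels; raise depth to find your compiler's real ceiling.
#include <stdio.h>

/* Emit a test program on stdout with `depth` nested parentheses.
   depth = 63 is the 5.2.4.1 minimum; larger values probe the compiler. */
int main(void)
{
    const int depth = 63;
    printf("int main(void) { int x = ");
    for (int i = 0; i < depth; i++)
        putchar('(');
    printf("1");
    for (int i = 0; i < depth; i++)
        putchar(')');
    printf("; return x; }\n");
    return 0;
}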

No, in practice there is no limit.
As an example, consider this C snippet:
int func1(int a){return a;}
int func2(int a){return a;}
int func3(int a){return a;}
void main()
{
    func1(func2(func3(16)));
}
The corresponding assembly code is:
0000000000000024 <main>:
24: 55 push %rbp
25: 48 89 e5 mov %rsp,%rbp
28: bf 10 00 00 00 mov $0x10,%edi
2d: e8 00 00 00 00 callq 32 <main+0xe>
32: 89 c7 mov %eax,%edi
34: e8 00 00 00 00 callq 39 <main+0x15>
39: 89 c7 mov %eax,%edi
3b: e8 00 00 00 00 callq 40 <main+0x1c>
40: 90 nop
41: 5d pop %rbp
42: c3 retq
The %edi register holds the argument and the %eax register holds each function's return value. As you can see, there are three callq instructions corresponding to the three function calls, and after each one the result is moved from %eax into %edi to become the argument of the next call. In other words, these nested functions are called one by one. There is no need to worry about the stack.
As mentioned in the comments, the compiler may crash when the code nests too deeply.
I wrote a simple Python script to test this:
nest = 64000
funcs = ""
call = ""
for i in range(1, nest + 1):
    funcs += "int func%d(int a){return a;}\n" % i
    call += "func%d(" % i
call += str(1)            # innermost parameter
call += ")" * nest + ";"  # closing parentheses
content = '''
%s
void main()
{
%s
}
''' % (funcs, call)
with open("test.c", "w") as fd:
    fd.write(content)
nest = 64000 is OK, but 640000 causes gcc-5.real: internal compiler error: Segmentation fault (program cc1).

No. Since these functions are executed one by one, there is no issue.
int res;
res = fnN(parN1, parN2);
....
res = fn2(res, par2);
res = fn1(res, par1);
The execution is linear with previous result being used for next function call.
Edit: as explained in the comments, the parser and/or compiler might still have trouble dealing with such ugly code.

If this is not a purely theoretical question, the answer is probably "Try to rewrite your code so you don't need to do that, because the limit is more than enough for most sane use cases". If this is purely theoretical, or you really do need to worry about this limit and can't just rewrite, read on.
Section 5.2.4 of the C11 standard (latest draft, which is freely available and almost identical) specifies various limits on what implementations are required to support. If I'm reading that right, you can go up to 63 levels of nesting.
However, implementations are allowed to support more, and in practice they probably do. I had trouble finding the appropriate documentation for GCC (the closest I found was for expressions in the preprocessor), but I expect it has no hard limit other than system resources at compile time.

Related

Why assembly code is different for simple C program with different gcc version?

I'm learning the basics of assembly and C programming.
I compiled the following simple program in C:
#include <stdio.h>
int main()
{
    int a;
    int b;
    a = 10;
    b = 88;
    return 0;
}
Compiled with the following command:
gcc -ggdb -fno-stack-protector test.c -o test
The disassembled code for the above program with gcc version 4.4.7 is:
55 push %ebp
89 e5 mov %esp,%ebp
83 ec 10 sub $0x10,%esp
c7 45 f8 0a 00 00 00 movl $0xa,-0x8(%ebp)
c7 45 fc 58 00 00 00 movl $0x58,-0x4(%ebp)
b8 00 00 00 00 mov $0x0,%eax
c9 leave
c3 ret
90 nop
However, the disassembled code for the same program with gcc version 4.3.3 is:
8d 4c 24 04 lea 0x4(%esp),%ecx
83 e4 f0 and $0xfffffff0, %esp
ff 71 fc push -0x4(%ecx)
55 push %ebp
89 e5 mov %esp,%ebp
51 push %ecx
83 ec 10 sub $0x10,%esp
c7 45 f4 0a 00 00 00 movl $0xa,-0xc(%ebp)
c7 45 f8 58 00 00 00 movl $0x58,-0x8(%ebp)
b8 00 00 00 00 mov $0x0, %eax
83 c4 10 add $0x10,%esp
59 pop %ecx
5d pop %ebp
8d 61 fc lea -0x4(%ecx),%esp
c3 ret
Why is there a difference in the assembly code?
As you can see in the second disassembly, why is %ecx pushed onto the stack?
What is the significance of and $0xfffffff0,%esp?
note: OS is same
Compilers are not required to produce identical assembly code for the same source code. The C standard allows compilers to optimize the code as they see fit as long as the observable behaviour is the same, so different compilers (and different versions) may generate different assembly code.
For your code, GCC 6.2 with -O3 generates just:
xor eax, eax
ret
because your code essentially does nothing. So, it's reduced to a simple return statement.
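If you want the stores to survive optimization so you can inspect them, one option (a minimal sketch of mine, not from the answer) is to make the variables volatile, which forces the compiler to emit them even at -O3:
#include <stdio.h>

int main(void)
{
    volatile int a = 10;     /* volatile: the stores must be emitted */
    volatile int b = 88;
    printf("%d %d\n", a, b); /* also makes the values observable */
    return 0;
}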
To give you some idea of how many ways exist to create valid code for a particular task, I thought this example might help.
From time to time there are size-coding competitions, obviously targeting assembly programmers, as a compiler can't compete with hand-written assembly at this level at all.
The competition tasks are fairly trivial, to keep the entry level and total effort reasonable, with precise input and output specifications (down to single-byte or pixel perfection).
So you have an almost trivial, exact task, human-produced code (at the moment still outperforming compilers on trivial tasks), and a single simple rule, "minimal size", as the goal.
By your logic it would be absolutely clear that every competitor should produce the same result.
A real-world answer looks, for example, like this:
Hugi Size Coding Competition Series - Compo29 - Random Maze Builder
12 entries, code sizes (in bytes): 122, 122, 128, 135, 136, 137, 147, ... 278 (!).
And I bet the first two entries, both 122 bytes, are probably quite different (too lazy to actually check them).
Now, producing valid machine code from a high-level programming language, by a machine (the compiler), is a far more complex task. Compilers can't compete with humans in reasoning; most of the "good code produced by a C++ compiler" stems from the C++ language itself being defined quite close to machine code (easy to compile) and from brute CPU power allowing the compiler to work through thousands of variants for a particular code path, searching for a near-optimal solution mostly by brute force.
Still, the numerical "reasoning" behind the optimizers is state of the art in its own way, reaching points humans can't - just as humans can't achieve the efficiency of compilers, within reasonable effort, for a full-sized app compilation.
At this point, reasoning about some debug code differing in a few helper prologue/epilogue instructions is moot. Even if you found a difference in optimized code, and the difference were "obvious" to a human, it's still quite a feat that the compiler produces what it does, since it has to apply universal rules to specific code without truly understanding the context of the task.

Segmentation fault when calling a function located in the heap

I'm trying to tweak the rules a little bit here: I malloc a buffer,
then copy a function into the buffer.
Calling the buffered function works, but the function throws a segmentation fault when I try to call another function within it.
Any thoughts why?
#include <stdio.h>
#include <string.h>   /* for memcpy */
#include <sys/mman.h>
#include <unistd.h>
#include <stdlib.h>
int foo(int x)
{
    printf("%d\n", x);
}
int bar(int x)
{
}
int main()
{
    int foo_size = bar - foo;
    void *buf_ptr;
    buf_ptr = malloc(1024);
    memcpy(buf_ptr, foo, foo_size);
    mprotect((void *)(((int)buf_ptr) & ~(sysconf(_SC_PAGE_SIZE) - 1)),
             sysconf(_SC_PAGE_SIZE),
             PROT_READ|PROT_WRITE|PROT_EXEC);
    int (*ptr)(int) = buf_ptr;
    printf("%d\n", ptr(3));
    return 0;
}
This code throws a segfault unless I change the foo function to:
int foo(int x)
{
    //Anything but calling another function.
    x = 4;
    return x;
}
NOTE:
The code successfully copies foo into the buffer. I know I made some assumptions, but on my platform they're OK.
Your code is not position independent and even if it were, you don't have the correct relocations to move it to an arbitrary position. Your call to printf (or any other function) will be done with pc-relative addressing (through the PLT, but that's besides the point here). This means that the instruction generated to call printf isn't a call to a static address but rather "call the function X bytes from the current instruction pointer". Since you moved the code the call is done to a bad address. (I'm assuming i386 or amd64 here, but generally it's a safe assumption, people who are on weird platforms usually mention that).
More specifically, x86 has two different instructions for function calls. One is a call relative to the instruction pointer which determines the destination of the function call by adding a value to the current instruction pointer. This is the most commonly used function call. The second instruction is a call to a pointer inside a register or memory location. This is much less commonly used by compilers because it requires more memory indirections and stalls the pipeline. The way shared libraries are implemented (your call to printf will actually go to a shared library) is that for every function call you make outside of your own code the compiler will insert fake functions near your code (this is the PLT I mentioned above). Your code does a normal pc-relative call to this fake function and the fake function will find the real address to printf and call that. It doesn't really matter though. Almost any normal function call you make will be pc-relative and will fail. Your only hope in code like this are function pointers.
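A minimal sketch of the contrast (mine, not from the answer): at -O0 the direct call below compiles to a pc-relative call through the PLT, while the call through the pointer compiles to an indirect call, which is what survives being moved:
#include <stdio.h>

int main(void)
{
    int (*fp)(const char *, ...) = printf; /* address taken into a pointer */
    printf("direct\n"); /* pc-relative call (through the PLT) */
    fp("indirect\n");   /* at -O0, an indirect call: call *%reg
                           (an optimizer may still fold it back) */
    return 0;
}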
You might also run into some restrictions on executable mprotect. Check the return value of mprotect, on my system your code doesn't work for one more reason: mprotect doesn't allow me to do this. Probably because the backend memory allocator of malloc has additional restrictions that prevents executable protections of its memory. Which leads me to the next point:
You will break things by calling mprotect on memory that isn't managed by you. That includes memory you got from malloc. You should only mprotect things you've gotten from the kernel yourself through mmap.
Here's a version that demonstrates how to make this work (on my system):
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>
#include <err.h>

int
foo(int x, int (*fn)(const char *, ...))
{
    fn("%d\n", x);
    return 42;
}

int
bar(int x)
{
    return 0;
}

int
main(int argc, char **argv)
{
    size_t foo_size = (char *)bar - (char *)foo;
    int ps = getpagesize();
    void *buf_ptr = mmap(NULL, ps, PROT_READ|PROT_WRITE|PROT_EXEC,
                         MAP_ANON|MAP_PRIVATE, -1, 0);
    if (buf_ptr == MAP_FAILED)
        err(1, "mmap");
    memcpy(buf_ptr, foo, foo_size);
    int (*ptr)(int, int (*)(const char *, ...)) = buf_ptr;
    printf("%d\n", ptr(3, printf));
    return 0;
}
Here, I abuse the knowledge of how the compiler will generate the code for the function call. By using a function pointer I force it to generate a call instruction that isn't pc-relative. Also, I manage the memory allocation myself so that we get the right permissions from start and not run into any restrictions that brk might have. As a bonus we do error handling that actually helped me find a bug in the first version of this experiment and I also corrected other minor bugs (like missing includes) which allowed me to enable warnings in the compiler and catch another potential problem.
If you want to dig deeper into this you can do something like this. I added two versions of the function:
int
oldfoo(int x)
{
    printf("%d\n", x);
    return 42;
}

int
foo(int x, int (*fn)(const char *, ...))
{
    fn("%d\n", x);
    return 42;
}
Compile the whole thing and disassemble it:
$ cc -Wall -o foo foo.c
$ objdump -S foo | less
We can now look at the two generated functions:
0000000000400680 <oldfoo>:
400680: 55 push %rbp
400681: 48 89 e5 mov %rsp,%rbp
400684: 48 83 ec 10 sub $0x10,%rsp
400688: 89 7d fc mov %edi,-0x4(%rbp)
40068b: 8b 45 fc mov -0x4(%rbp),%eax
40068e: 89 c6 mov %eax,%esi
400690: bf 30 08 40 00 mov $0x400830,%edi
400695: b8 00 00 00 00 mov $0x0,%eax
40069a: e8 91 fe ff ff callq 400530 <printf@plt>
40069f: b8 2a 00 00 00 mov $0x2a,%eax
4006a4: c9 leaveq
4006a5: c3 retq
00000000004006a6 <foo>:
4006a6: 55 push %rbp
4006a7: 48 89 e5 mov %rsp,%rbp
4006aa: 48 83 ec 10 sub $0x10,%rsp
4006ae: 89 7d fc mov %edi,-0x4(%rbp)
4006b1: 48 89 75 f0 mov %rsi,-0x10(%rbp)
4006b5: 8b 45 fc mov -0x4(%rbp),%eax
4006b8: 48 8b 55 f0 mov -0x10(%rbp),%rdx
4006bc: 89 c6 mov %eax,%esi
4006be: bf 30 08 40 00 mov $0x400830,%edi
4006c3: b8 00 00 00 00 mov $0x0,%eax
4006c8: ff d2 callq *%rdx
4006ca: b8 2a 00 00 00 mov $0x2a,%eax
4006cf: c9 leaveq
4006d0: c3 retq
The instruction for the function call in the printf case is "e8 91 fe ff ff". This is a pc-relative function call. 0xfffffe91 bytes in front of our instruction pointer. It's treated as a signed 32 bit value, and the instruction pointer used in the calculation is the address of the next instruction. So 0x40069f (next instruction) - 0x16f (0xfffffe91 in front is 0x16f bytes behind with signed math) gives us the address 0x400530, and looking at the disassembled code I find this at the address:
0000000000400530 <printf@plt>:
400530: ff 25 ea 0a 20 00 jmpq *0x200aea(%rip) # 601020 <_GLOBAL_OFFSET_TABLE_+0x20>
400536: 68 01 00 00 00 pushq $0x1
40053b: e9 d0 ff ff ff jmpq 400510 <_init+0x28>
This is the magic "fake function" I mentioned earlier. Let's not get into how this works. It's necessary for shared libraries to work and that's all we need to know for now.
The second function generates the function call instruction "ff d2". This means "call the function at the address stored inside the rdx register". No pc-relative addressing and that's why it works.
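The displacement arithmetic above can be checked with a tiny sketch of mine (the addresses are copied from the dump): the 32-bit operand of the e8 instruction is sign-extended and added to the address of the next instruction.
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t next_ip = 0x40069f;        /* address of the instruction after the call */
    int32_t disp = (int32_t)0xfffffe91; /* the e8 operand, sign-extended: -0x16f */
    /* prints 0x400530, the printf@plt entry */
    printf("%#llx\n", (unsigned long long)(next_ip + (int64_t)disp));
    return 0;
}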
The compiler is free to generate code any way it wants, provided the observable results are correct (the as-if rule). So what you are doing is simply an invocation of undefined behaviour.
Visual Studio sometimes uses relays: the address of a function just points to a relative jump. That's perfectly allowed by the standard because of the as-if rule, but it would definitely break this kind of construction. Another possibility is local internal functions called with relative jumps but located outside the function itself; in that case your code would not copy them, and the relative calls would just point to random memory. That means that with different compilers (or even different compilation options on the same compiler) it could give the expected result, crash, or silently end the program - which is exactly UB.
I think I can explain a bit. First of all, if both your functions lack a return statement, undefined behaviour is invoked per §6.9.1/12 of the standard (when the caller uses the return value). Secondly - and this is common on many platforms, apparently including yours - relative addresses are hardcoded into the binary code of functions. That means that if foo contains a call to printf and you then execute foo from another location, the address from which printf is called becomes bad.

What happens to this in memory/compilation?

The code:
#include <stdio.h>
int main(int argc, char *argv[])
{
    //what happens?
    10*10;
    //what happens?
    printf("%d", 10*10);
    return 0;
}
What happens in memory/at compilation for these two lines? Is the result of 10*10 stored anywhere?
The statement
10*10;
has no effect. The compiler may choose to not generate any code at all for this statement. On the other hand,
printf("%d", 10*10);
passes the result of 10*10 to the printf function, which prints the result (100) to the standard output.
Ask your compiler! They'll probably all have an interesting answer.
Here's what gcc -c noop.c -o noop.o -g3 had to say (I ran the object code through objdump --disassemble --source to produce the output below):
#include <stdio.h>
void test_code()
{
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
10*10;
//what happens?
printf("%d", 10*10);
4: b8 00 00 00 00 mov $0x0,%eax
9: be 64 00 00 00 mov $0x64,%esi
e: 48 89 c7 mov %rax,%rdi
11: b8 00 00 00 00 mov $0x0,%eax
16: e8 00 00 00 00 callq 1b <test_code+0x1b>
}
1b: 5d pop %rbp
1c: c3 retq
My compiler took the 10*10 being passed to printf, multiplied it at compile time, and then used the result as an immediate ($0x64, aka 100 in decimal), putting it into a register to be used for printf:
mov $0x64,%esi
The 10*10 expression not assigned to any identifier was elided. Note that it's likely possible to find some compiler somewhere that decides to execute this computation and store it in registers.
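A quick way to convince yourself the multiplication happens at translation time (a sketch of mine): 10*10 is an integer constant expression, so it is accepted even where the language demands a compile-time constant:
int buf[10 * 10];      /* array sizes must be constant expressions in C89 */

int classify(int x)
{
    switch (x) {
    case 10 * 10:      /* case labels must be integer constant expressions */
        return 1;
    default:
        return 0;
    }
}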
For the first line: nothing. An expression like that is folded to a value by the compiler, and since you don't assign it to a variable, it does nothing; the compiler removes it.
In the second, the value 100 is passed to printf.
Note that what happens depends on the compiler: in some the operation is folded at compile time, in others it will be executed at run time.
10*10;
Not stored. At most it should produce a compiler warning (e.g. "expression result unused"), not an error.
printf("%d", 10*10);
Should print: 100. The value of 10*10 is calculated (most likely by the compiler, not at run time) and then passed to printf(), typically by pushing the value (100) onto the stack or placing it in a register, depending on the calling convention. In the stack case, the value remains on the stack until the original (pre-call-to-printf()) stack frame is restored upon printf()'s return.
In the first case, since the result is not used anywhere, the compiler may optimise your code and not execute the instruction at all.
In the second case, the value is calculated (in a register or on the stack) and printed to the console, not stored anywhere else.
The C standard describes what the program does in terms of an abstract machine.
To decide what actually happens, keep one rule in mind: the compiler may output only code whose observable behavior is "as if" it did what you said (provided no constraint was violated).
It is explicitly allowed to achieve that result any way it favors.
This rule is known colloquially as the "as-if" rule.
Thus, your program is equivalent to, e.g.:
#include <stdio.h>
int main(void) {
    fputs("100", stdout);
}
Or
#include <stdio.h>
int main(void) {
    putchar('1');
    putchar('0');
    putchar('0');
}

why does compiler store variables in register? [duplicate]

This question already exists:
Why are registers needed (why not only use memory)? [duplicate]
Hi, I have been reading this kind of stuff in various docs:
register
Tells the compiler to store the variable being declared in a CPU register.
In standard C dialects, keyword register uses the following syntax:
register data-definition;
The register type modifier tells the compiler to store the variable being declared in a CPU register (if possible), to optimize access. For example,
register int i;
Note that TIGCC will automatically store often used variables in CPU registers when the optimization is turned on, but the keyword register will force storing in registers even if the optimization is turned off. However, the request for storing data in registers may be denied, if the compiler concludes that there is not enough free registers for use at this place.
http://tigcc.ticalc.org/doc/keywords.html#register
My point is not only about register. My point is: why would a compiler store variables in memory at all? The compiler's business is just to compile and generate an object file; the actual memory allocation happens at run time. Why is the compiler involved in this? I mean, does memory allocation happen in C just by compiling the file, without running the object file?
The compiler generates the machine code, and the machine code is what runs your program. The compiler decides what machine code to generate, and in doing so makes decisions about what sort of allocation will happen at runtime. It doesn't execute the code when you type gcc foo.c; later, when you run the executable, it's the code GCC generated that runs.
The compiler wants to generate the fastest code possible, so it makes as many decisions as it can at compile time, including how to allocate things.
The compiler doesn't run the code (unless it does a few profiling rounds for better code generation), but it has to prepare it. That includes deciding how to keep the variables your program defines: whether to use fast, efficient storage such as registers, or the slower (and more prone to side effects) memory.
Initially, your local variables are simply assigned locations in the stack frame (except, of course, for memory you explicitly allocate dynamically). If your function declares an int, the compiler will likely grow the stack by a few additional bytes, use that memory address to store the variable, and pass it as an operand to any operation your code performs on it.
However, since memory is slower (even when cached) and manipulating it places more restrictions on the CPU, at a later stage the compiler may decide to move some variables into registers. This allocation is done through a complicated algorithm that tries to select the most reused and latency-critical variables that can fit within the number of logical registers your architecture has (while conforming to various restrictions, such as some instructions requiring an operand to be in this or that register).
There's another complication: some memory addresses may alias with external pointers in ways unknown at compilation time, in which case those variables cannot be moved into registers. Compilers are usually a very cautious bunch, and most of them avoid dangerous optimizations (otherwise they'd need to add special checks to avoid nasty surprises).
After all that, the compiler is still polite enough to let you advise it which variable is important and critical to you, in case it missed one: by marking variables with the register keyword you're asking it to try to optimize them by using registers, given that enough registers are available and that no aliasing is possible.
Here's a little example: Take the following code, doing the same thing twice but with slightly different circumstances:
#include "stdio.h"
int j;
int main() {
int i;
for (i = 0; i < 100; ++i) {
printf ("i'm here to prevent the loop from being optimized\n");
}
for (j = 0; j < 100; ++j) {
printf ("me too\n");
}
}
Note that i is local and j is global (so the compiler doesn't know whether anyone else might access it during the run).
Compiling in gcc with -O3 produces the following code for main:
0000000000400540 <main>:
400540: 53 push %rbx
400541: bf 88 06 40 00 mov $0x400688,%edi
400546: bb 01 00 00 00 mov $0x1,%ebx
40054b: e8 18 ff ff ff callq 400468 <puts@plt>
400550: bf 88 06 40 00 mov $0x400688,%edi
400555: 83 c3 01 add $0x1,%ebx # <-- i++
400558: e8 0b ff ff ff callq 400468 <puts@plt>
40055d: 83 fb 64 cmp $0x64,%ebx
400560: 75 ee jne 400550 <main+0x10>
400562: c7 05 80 04 10 00 00 movl $0x0,1049728(%rip) # 5009ec <j>
400569: 00 00 00
40056c: bf c0 06 40 00 mov $0x4006c0,%edi
400571: e8 f2 fe ff ff callq 400468 <puts@plt>
400576: 8b 05 70 04 10 00 mov 1049712(%rip),%eax # 5009ec <j> (loads j)
40057c: 83 c0 01 add $0x1,%eax # <-- j++
40057f: 83 f8 63 cmp $0x63,%eax
400582: 89 05 64 04 10 00 mov %eax,1049700(%rip) # 5009ec <j> (stores j back)
400588: 7e e2 jle 40056c <main+0x2c>
40058a: 5b pop %rbx
40058b: c3 retq
As you can see, the first loop counter sits in ebx, and is incremented on each iteration and compared against the limit.
The second loop, however, was the dangerous one, and gcc decided to pass the index counter through memory (loading it into %eax every iteration). This example shows how much better off you can be when using registers, as well as how sometimes you can't.
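If you do want the second counter in a register, one fix (a sketch of mine, not from the answer) is to count in a local and store the global once at the end, so nothing outside the function can observe intermediate values:
#include <stdio.h>

int j;

int main(void)
{
    int local;     /* local: no aliasing possible, register-friendly */
    for (local = 0; local < 100; ++local)
        printf("me too\n");
    j = local;     /* one store instead of one per iteration */
    return 0;
}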
The compiler needs to translate the code into machine instructions that tell the computer how to run it. That includes how to perform operations (like multiplying two numbers) and where to store the data (stack, heap, or registers).

What's the point of "unlikely()"? [duplicate]

I've been digging through some parts of the Linux kernel, and found calls like this:
if (unlikely(fd < 0))
{
    /* Do something */
}
or
if (likely(!err))
{
    /* Do something */
}
I've found the definition of them:
#define likely(x) __builtin_expect((x),1)
#define unlikely(x) __builtin_expect((x),0)
I know that they are for optimization, but how do they work? And how much performance/size difference can be expected from using them? And is it worth the hassle (probably losing portability), at least in bottleneck code (in userspace, of course)?
They are hints to the compiler to emit instructions that will cause branch prediction to favour the "likely" side of a jump instruction. This can be a big win: if the prediction is correct, the jump instruction is basically free and takes zero cycles. On the other hand, if the prediction is wrong, the processor pipeline needs to be flushed, which can cost several cycles. As long as the prediction is correct most of the time, this tends to be good for performance.
Like all such performance optimisations, you should only do it after extensive profiling to ensure the code really is in a bottleneck and, given the micro nature, that it is being run in a tight loop. Generally the Linux developers are pretty experienced, so I would imagine they have done that. They don't really care too much about portability, as they only target gcc, and they have a very close idea of the assembly they want it to generate.
Let's disassemble and see what GCC 4.8 does with it.
Without __builtin_expect
#include "stdio.h"
#include "time.h"
int main() {
/* Use time to prevent it from being optimized away. */
int i = !time(NULL);
if (i)
printf("%d\n", i);
puts("a");
return 0;
}
Compile and disassemble with GCC 4.8.2 on x86_64 Linux:
gcc -c -O3 -std=gnu11 main.c
objdump -dr main.o
Output:
0000000000000000 <main>:
0: 48 83 ec 08 sub $0x8,%rsp
4: 31 ff xor %edi,%edi
6: e8 00 00 00 00 callq b <main+0xb>
7: R_X86_64_PC32 time-0x4
b: 48 85 c0 test %rax,%rax
e: 75 14 jne 24 <main+0x24>
10: ba 01 00 00 00 mov $0x1,%edx
15: be 00 00 00 00 mov $0x0,%esi
16: R_X86_64_32 .rodata.str1.1
1a: bf 01 00 00 00 mov $0x1,%edi
1f: e8 00 00 00 00 callq 24 <main+0x24>
20: R_X86_64_PC32 __printf_chk-0x4
24: bf 00 00 00 00 mov $0x0,%edi
25: R_X86_64_32 .rodata.str1.1+0x4
29: e8 00 00 00 00 callq 2e <main+0x2e>
2a: R_X86_64_PC32 puts-0x4
2e: 31 c0 xor %eax,%eax
30: 48 83 c4 08 add $0x8,%rsp
34: c3 retq
The instruction order in memory is unchanged: first the printf, then puts, then the retq return.
With __builtin_expect
Now replace if (i) with:
if (__builtin_expect(i, 0))
and we get:
0000000000000000 <main>:
0: 48 83 ec 08 sub $0x8,%rsp
4: 31 ff xor %edi,%edi
6: e8 00 00 00 00 callq b <main+0xb>
7: R_X86_64_PC32 time-0x4
b: 48 85 c0 test %rax,%rax
e: 74 11 je 21 <main+0x21>
10: bf 00 00 00 00 mov $0x0,%edi
11: R_X86_64_32 .rodata.str1.1+0x4
15: e8 00 00 00 00 callq 1a <main+0x1a>
16: R_X86_64_PC32 puts-0x4
1a: 31 c0 xor %eax,%eax
1c: 48 83 c4 08 add $0x8,%rsp
20: c3 retq
21: ba 01 00 00 00 mov $0x1,%edx
26: be 00 00 00 00 mov $0x0,%esi
27: R_X86_64_32 .rodata.str1.1
2b: bf 01 00 00 00 mov $0x1,%edi
30: e8 00 00 00 00 callq 35 <main+0x35>
31: R_X86_64_PC32 __printf_chk-0x4
35: eb d9 jmp 10 <main+0x10>
The printf (compiled to __printf_chk) was moved to the very end of the function, after puts and the return to improve branch prediction as mentioned by other answers.
So it is basically the same as:
int main() {
    int i = !time(NULL);
    if (i)
        goto printf;
puts:
    puts("a");
    return 0;
printf:
    printf("%d\n", i);
    goto puts;
}
This optimization was not done with -O0.
But good luck writing an example that runs faster with __builtin_expect than without; CPUs are really smart these days. My naive attempts are here.
C++20 [[likely]] and [[unlikely]]
C++20 standardized these built-ins: see "How to use C++20's likely/unlikely attribute in if-else statement". They will likely (a pun!) do the same thing.
These are macros that give hints to the compiler about which way a branch may go. The macros expand to GCC specific extensions, if they're available.
GCC uses these to optimize for branch prediction. For example, if you have something like the following
if (unlikely(x)) {
    dosomething();
}
return x;
Then it can restructure this code to be something more like:
if (!x) {
    return x;
}
dosomething();
return x;
The benefit of this is that when the processor takes a branch the first time, there is significant overhead, because it may have been speculatively loading and executing code further ahead. When it determines it will take the branch, then it has to invalidate that, and start at the branch target.
Most modern processors now have some sort of branch prediction, but that only assists when you've been through the branch before, and the branch is still in the branch prediction cache.
There are a number of other strategies that the compiler and processor can use in these scenarios. You can find more details on how branch predictors work at Wikipedia: http://en.wikipedia.org/wiki/Branch_predictor
They cause the compiler to emit the appropriate branch hints where the hardware supports them. This usually just means twiddling a few bits in the instruction opcode, so code size will not change. The CPU will start fetching instructions from the predicted location, and flush the pipeline and start over if that turns out to be wrong when the branch is reached; in the case where the hint is correct, this will make the branch much faster - precisely how much faster will depend on the hardware; and how much this affects the performance of the code will depend on what proportion of the time hint is correct.
For instance, on a PowerPC CPU an unhinted branch might take 16 cycles, a correctly hinted one 8 and an incorrectly hinted one 24. In innermost loops good hinting can make an enormous difference.
Portability isn't really an issue - presumably the definition is in a per-platform header; you can simply define "likely" and "unlikely" to nothing for platforms that do not support static branch hints.
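A minimal sketch of such a per-platform header (the __GNUC__ test is the conventional one; macro names match the kernel's):
/* likely.h - fall back to the plain expression on compilers
   without __builtin_expect */
#if defined(__GNUC__)
#  define likely(x)   __builtin_expect(!!(x), 1)
#  define unlikely(x) __builtin_expect(!!(x), 0)
#else
#  define likely(x)   (x)
#  define unlikely(x) (x)
#endif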
long __builtin_expect(long EXP, long C);

This construct tells the compiler that the expression EXP most likely will have the value C. The return value is EXP. __builtin_expect is meant to be used in a conditional expression. In almost all cases it will be used in the context of boolean expressions, in which case it is much more convenient to define two helper macros:

#define unlikely(expr) __builtin_expect(!!(expr), 0)
#define likely(expr) __builtin_expect(!!(expr), 1)

These macros can then be used as in

if (likely(a > 1))

Reference: https://www.akkadia.org/drepper/cpumemory.pdf
(general comment - other answers cover the details)
There's no reason that you should lose portability by using them.
You always have the option of creating a simple nil-effect "inline" or macro that will allow you to compile on other platforms with other compilers.
You just won't get the benefit of the optimization if you're on other platforms.
As per the comment by Cody, this has nothing to do with Linux; it is a hint to the compiler. What happens depends on the architecture and compiler version.
This particular feature is somewhat misused in Linux drivers. As osgx points out in "semantics of hot attribute", any hot or cold function called within a block can automatically hint that the condition is likely or not. For instance, dump_stack() is marked cold, so this is redundant:
if (unlikely(err)) {
    printk("Driver error found. %d\n", err);
    dump_stack();
}
Future versions of gcc may selectively inline functions based on these hints. There have also been suggestions that it should not be a boolean, but rather a score, as in "most likely", etc. Generally, some alternative mechanism like cold should be preferred. There is no reason to use it anywhere but hot paths. What a compiler does on one architecture can be completely different on another.
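As a hedged illustration of the cold alternative (the function name is hypothetical): marking the error path itself cold lets every caller omit unlikely():
#include <stdio.h>

/* GCC's cold attribute marks the whole function as unlikely to run,
   so branches leading to it are predicted not-taken automatically. */
__attribute__((cold)) static void report_failure(int err)
{
    fprintf(stderr, "driver error %d\n", err);
}

int main(void)
{
    int err = 0;
    if (err)            /* no unlikely() needed: the callee is cold */
        report_failure(err);
    return 0;
}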
On many Linux releases you can find compiler.h in /usr/linux/; you can simply include it. Another opinion: unlikely() is more useful than likely(), because
if (likely( ... )) {
    doSomething();
}
can be optimized just as well by many compilers anyway.
And by the way, if you want to observe the detailed behavior of the code, you can simply do the following:
gcc -c test.c
objdump -d test.o > obj.s
Then open obj.s and you'll find the answer.
They're hints to the compiler to generate the hint prefixes on branches. On x86/x64, these take up one byte, so you'll get at most a one-byte increase for each branch. As for performance, it entirely depends on the application; in most cases, the branch predictor on the processor will ignore them these days.
Edit: I forgot one place where they can actually really help. They can allow the compiler to reorder the control-flow graph to reduce the number of branches taken on the 'likely' path. This can bring a marked improvement in loops where you're checking multiple exit cases.
These are GCC functions that let the programmer give the compiler a hint about the most likely branch condition in a given expression. This allows the compiler to build the branch instructions so that the most common case takes the fewest instructions to execute.
How the branch instructions are built depends on the processor architecture.
I am wondering why it's not defined like this:
#define likely(x) __builtin_expect(((x) != 0), 1)
#define unlikely(x) __builtin_expect((x), 0)
I mean, the __builtin_expect docs say the compiler will expect the first parameter to have the value of the second one (and the first parameter is returned), but the original definition above makes it hard to use for expressions that yield non-zero "true" values other than 1.
This might be buggy, but from my code you get the idea of the direction I mean.
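To make the concern concrete, a small sketch (macros and values mine): a bit-test yields a "true" value that is not 1, so without the !! normalization the expected value 1 never matches:
#include <stdio.h>

#define likely_raw(x) __builtin_expect((x), 1)   /* original style: expects exactly 1 */
#define likely(x)     __builtin_expect(!!(x), 1) /* !! normalizes to 0 or 1 */

int main(void)
{
    int flags = 0x04;             /* truthy, but not equal to 1 */
    if (likely_raw(flags & 0x04)) /* hint is wrong: the value is 4, not 1 */
        puts("taken anyway");
    if (likely(flags & 0x04))     /* hint matches: !!(4) == 1 */
        puts("taken, hint correct");
    return 0;
}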
