Optimizations such as constant propagation are possible across functions within the same compilation unit (i.e., the same file).
For example:
#include <stdio.h>

int f(int x)
{
    return 3 + x;
}

int main(void)
{
    printf("%d\n", 1 + f(4));
    return 0;
}
In that example, I think that a sufficiently smart compiler can propagate the constant '4' into the function 'f', fold the integer arithmetic with the other constant '3', and propagate the result back, thus folding everything down to the final value '8'.
(Well, correct me if I'm wrong...)
However, what happens if the function 'f' is in another compilation unit? Since both units are compiled separately, the compiler can't optimize that way.
Does that mean optimizations are only possible within the same compilation unit, or is there some form of late optimization performed at link time?
Both MSVC (since 8.0: VS2005) and GCC (since 4.5) support the concept.
MSVC uses the compiler switch /GL and the linker switch /LTCG (see the documentation).
GCC must have it enabled and uses -flto, -fwhole-program, -fwhopr, and/or -combine to the same effect (see the documentation and search for those options there).
The "problem" is that every compilation unit (source file) (and in the case of MSVC every library) needs to be compiled with this, so you can't use old binary object files compiled without it. It also makes debugging harder, because the optimizer is a lot more aggressive and unpredictable.
Clang compiles to LLVM IR, and the LLVM linker performs whole-program optimization when it produces a native binary.
Microsoft Visual Studio supports WPO (whole program optimization), enabled by the /LTCG switch (link-time code generation).
It causes several other problems which I don't remember right now, and many developers prefer to leave it off.
Yes, for the Visual C++ compiler in Visual Studio, this is known as Whole Program Optimization:
Whole program optimization allows the compiler to perform optimizations with information on all modules in the program. Without whole program optimization, optimizations are performed on a per-module (compiland) basis.
With information on all modules, the compiler can:
Optimize the use of registers across function boundaries.
Do a better job of tracking modifications to global data, allowing a reduction in the number of loads and stores.
Do a better job of tracking the possible set of items modified by a pointer dereference, reducing the number of loads and stores.
Inline a function in a module even when the function is defined in another module.
GCC 4.5 introduced link-time optimization. AFAIK, it only works on x86 and x64 targets.
Sometimes I need some code to be executed by the CPU exactly as I put it in the source, but any C compiler has its own optimization algorithms, so I can expect some tricks. For example:
#include <avr/io.h>          /* ADCH on AVR */
#include <avr/interrupt.h>   /* the ISR() macro, AVR GCC's spelling of "interrupt" */

unsigned char flag = 0;
unsigned char ADC_result;

ISR(ADC_vect)                /* ADC conversion complete interrupt */
{
    ADC_result = ADCH;
    flag = 1;
}

int main(void)
{
    sei();                   /* enable interrupts */
    while (!flag);           /* busy-wait until the ISR fires */
    /* ... output ADC_result ... */
}
Some compilers will definitely turn the while(!flag); loop into an infinite loop, since they will assume flag stays false (!flag is therefore always true).
Sometimes I can use the volatile keyword, and sometimes it helps. But in my case (AVR GCC) the volatile keyword forces the compiler to place the variable in SRAM instead of registers (which is bad for several reasons). Moreover, many articles on the Internet suggest using the volatile keyword with great care, as the result can become unstable (depending on the compiler, its optimization settings, the platform, and so on).
So I would definitely prefer to somehow mark the source code instruction and tell the compiler that this code should be compiled exactly as it is, like this: volatile while(!flag);
Is there any standard C instruction to do this?
The only standard C way is volatile. If that doesn't happen to do exactly what you want, you'll need to use something specific for your platform.
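Applied to the question's example, that means declaring the flag itself volatile; a minimal sketch (the wrapper function is just for illustration):

/* volatile forces a fresh load of flag on every test, so the
   busy-wait can no longer be folded into an infinite loop. */
volatile unsigned char flag = 0;

void wait_for_adc(void)
{
    while (!flag)
        ;   /* each iteration re-reads flag from memory */
}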
You should indeed use volatile as answered by David Schwartz. See also this chapter of GCC documentation.
If you use a recent GCC compiler, you could disable optimizations in a single function by using the appropriate function-specific option pragmas (or the optimize function attribute), for instance
#pragma GCC optimize ("O0")
before your main. I'm not sure it is a good idea.
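The function attribute form mentioned above might look like this (GCC-specific; the function name is made up):

extern unsigned char flag;

/* Only this function is compiled at -O0; the rest of the file keeps
   the optimization level given on the command line, so the load of
   flag inside the loop is not optimized away. */
__attribute__((optimize("O0")))
void busy_wait(void)
{
    while (!flag)
        ;
}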
Perhaps you want extended asm statements with the volatile keyword.
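One such idiom (my example, not spelled out in the answer) is an empty extended asm with a "memory" clobber, which acts as a compiler-level barrier and forces flag to be re-read:

extern unsigned char flag;

void wait_for_flag(void)
{
    while (!flag)
        __asm__ volatile ("" ::: "memory");   /* compiler barrier */
}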
You have several options:
Compile without optimisations. Unlike some compilers, GCC doesn't optimise by default so unless you tell it to optimise, you should get generated code which looks very similar to your C source. Of course you can choose to optimise some C files and not others, using simple make rules.
Take the compiler out of the equation and write the relevant functions in assembly. Then you can get exactly the generated code you want.
Use volatile, which prevents the compiler from making any assumptions about a certain variable, so for any use of the variable in C the compiler is forced to generate a LOAD or a STORE even if ostensibly unnecessary.
I am trying to sort out an embedded project where the developers included all the .h and .c files into one .c file, so they could compile just that one file with the -fwhole-program option to get good size optimization.
I hate this and am determined to turn it back into a traditional multi-file program, using LTO to achieve the same effect.
The versions included with the dev kit are;
aps-gcc (GCC) 4.7.3 20130524 (Cortus)
GNU ld (GNU Binutils) 2.22
With one .o file, .text is 0x1c7ac; fractured into 67 .o files, .text comes out as 0x2f73c. I added the LTO options and got it down to 0x20a44: good, but nowhere near enough.
I have tried --gc-sections and the linker plugin option, but they made no further improvement.
Any suggestions? Am I seeing the right sort of improvement from LTO?
To get LTO to work perfectly you need to have the same information and optimisation algorithms available at link stage as you have at compile stage. The GNU tools cannot do this and I believe this was actually one of the motivating factors in the creation of LLVM/Clang.
If you want to inspect the difference in detail, I'd suggest you generate a map file (ld option -Map <filename>) for each build and see whether there are functions which haven't been inlined, or functions that are larger than before. The lack of inlining you can resolve manually by moving the definition of the function into a header file and declaring it extern inline, which effectively turns it into a macro (this is a GNU extension).
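A sketch of that extern inline trick under GCC's traditional gnu89 inline semantics (the function is an invented example):

/* In a header, with -std=gnu89 or __attribute__((gnu_inline)):
   extern inline tells GCC to always inline this function and to
   emit no standalone copy of it. */
extern inline int clamp8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}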
Larger functions are likely not being subject to constant propagation, and I don't think there's anything you can do about that. You can make some improvements by carefully declaring function attributes such as const, leaf, noreturn, pure, and returns_nonnull. These effectively promise that the function behaves in a particular way, something the compiler may otherwise detect for itself when using a single compilation unit, and they allow additional optimisations.
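For instance, such declarations in a shared header might look like the following (all function names invented for illustration):

/* Result depends only on the argument values; reads no memory. */
__attribute__((const)) int scale(int v);

/* May read globals but never writes them. */
__attribute__((pure)) int lookup(int idx);

/* Will not call back into the calling translation unit. */
__attribute__((leaf)) int crc8(const unsigned char *p, int n);

/* Never returns, so code following a call to it is dead. */
__attribute__((noreturn)) void fatal(const char *msg);

/* The returned pointer is never NULL, so callers can drop NULL checks. */
__attribute__((returns_nonnull)) char *name_of(int id);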
In contrast, Clang compiles your code to a special kind of bytecode (LLVM stands for Low Level Virtual Machine, much as JVM stands for Java Virtual Machine, and it runs bytecode), and optimisation of this bytecode can be performed at link time (or indeed at run time, which is cool). Since this bytecode is what is optimised whether you do LTO or not, and the optimisation algorithms are common between the compiler and the linker, in theory Clang/LLVM should give exactly the same results whether or not you use LTO.
Unfortunately now that the C backend has been removed from LLVM I don't know of any way to use the LLVM LTO capabilities for the custom CPU you're targeting.
In my opinion, the method chosen by the previous developers is the correct one. It is the method that gives the compiler the most information and thus the most opportunities to perform the optimizations that you want. It is a terrible way to compile (any change requires recompiling the whole project), so keeping it as just one build option is a good idea.
Of course, you would have to run all your integration tests against such a build, but that should be trivial to do. What is the downside of the chosen approach except for compilation time (which shouldn't be an issue, because you don't need to build in that manner all the time ... just for integration tests)?
I have a question about how compiler-set symbols, in particular CPU feature flags (like SSE, AES, AVX) are actually set. For instance, if I call gcc with -mavx, is the __AVX__ symbol set regardless of whether the system the code is about to be built on actually supports AVX instructions, or does it check before?
I'm asking because I need to build particular code paths depending on CPU capabilities, and I would like to automate it so that the correct path is chosen at compile time based on the build system, instead of enabling the desired features manually. But since the only CPU I have supports basically every feature, I cannot test my assumption above (first-world problems, I know).
There is going to be a lot of code so simply keeping everything and branching at runtime is unacceptable - and it is assumed that my library will be built before being used on a given system anyway.
I mean, at worst I can force this behavior by wrapping the gcc invocation in a cpuid-aware script, but if gcc does it automatically, that would be preferable. So does anyone know whether it does?
I am mostly interested in gcc's take on this but I am also curious to know how other C compilers behave.
If you pass the -mavx flag, __AVX__ will always be set for the resulting compilation (and the resulting code may not run on non-AVX machines).
If you pass the -march=native flag, gcc will enable the instruction sets supported by the build machine, so __AVX__ will only be set if the build machine supports it.
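A minimal sketch for checking at build time which path was selected:

#include <stdio.h>

int main(void)
{
#ifdef __AVX__
    puts("compiled with AVX enabled");   /* -mavx, or -march=native on an AVX host */
#else
    puts("compiled without AVX");
#endif
    return 0;
}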
If I want to get better performance from, let's say, MySQLdb, I can compile it myself and get better performance, because it is not compiled for a generic i386, i486, or whatever target, but for my own CPU. Further, I can choose the compile options, and so on...
Now, I was wondering if this is also true for non-regular software, such as a compiler.
Here comes the first part:
Will compiling a compiler like GCC result in better performance?
and the 2nd part:
Will the code compiled by my own compiled compiler perform better?
(Yes, I know, I can compile my compiler and benchmark it... but maybe ... someone already knows the answer, and will share it with us =)
In answer to your first question, almost certainly yes. Binary versions of gcc will be built for the "lowest common denominator", and if you compile gcc yourself with flags more appropriate to your system, it will most likely be faster.
As to your second question, no.
The output of the compiler will be the same regardless of how you've optimised it (unless it's buggy, of course).
In other words, even if you totally stuffed up your compiler flags when compiling gcc, to the point where your particular compiled version of gcc takes a week and a half to compile "Hello World", the actual "Hello World" executable should be identical to the one produced by the "lowest common denominator" gcc (if you use the same flags).
(1) It is possible. If you introduce a new optimization into your compiler and re-compile that compiler with the new optimization included, it is possible that the re-compiled compiler will perform better.
(2) No!!!! A compiler cannot change the logic of the code! In this case, the logic of the code is the native code produced at the end. So whether compiler A_1 is compiled using compiler A_2 or compiler B has no effect on the native code produced by A_1 (here A_1 and A_2 are the same compiler; the index is just for clarity).
a. Well, you can compile the compiler for your system, and maybe it will run faster, like any program. (I think it's usually not worth it, but do whatever you want.)
b. No. Even if you compile the compiler on your own computer, its behavior should not change, and so the code it generates doesn't change either.
Will compiling a compiler like GCC result in better performance?
A program compiled specifically for the target platform it is used on will usually perform better than a program compiled for a generic platform. Why is this? Knowledge about the hardware can help the compiler align data to be cache friendly and choose an instruction ordering that plays well with a CPU's pipelining.
The most benefit is usually achieved by leveraging specific instruction sets such as SSE (in its various versions).
On the other hand, you should ask yourself whether a program like GCC is really CPU bound (much more likely it is IO bound) and whether tuning its CPU performance provides any measurable benefit.
Will the code compiled by my own compiled compiler perform better?
Hopefully not! Allowing a compiler to optimize a program should never change its behavior. No matter how you compiled your GCC, it should compile code to the same binaries as a generic binary distribution of GCC would.
If code compiled for a specific platform is faster than code compiled for a generic platform, why don't we all ship source code instead of binaries? Guess what, some Linux distros actually follow this philosophy, such as Gentoo. And while you're at it, make sure to build statically linked binaries; disk space is so cheap nowadays, and it gives you at least another 0.001% of performance.
Alright, that was a bit sarcastic. The reason people distribute generic binaries is pretty obvious: it's generic, the lowest common denominator, and it will work everywhere. That's a big bonus in terms of flexibility and user friendliness. I remember once compiling Gnome for my Gentoo box; it took a day or two! (But it must have been so much faster ;-) )
On the other hand, there are occasions where you want to get the best performance possible, and it makes sense to build and optimize for specific architectures.
GCC uses a three-stage bootstrap when building from source. Basically, it compiles the source three times to ensure the build tools and the compiler are built successfully. This bootstrapping is used for validation purposes. However, it is possible to use stage 1 as a benchmark for optimizing the later stages: build GCC with make profiledbootstrap to use this profile-based optimization.
This profile based build process increases the performance of "GCC", but not the software compiled with it, as other answers point out.
How much effect does having multiple files or compiled libraries, vs. throwing everything (>10,000 LOC) into one source file, have on the final binary? For example, instead of linking a Boost library separately, I paste its code, along with my original source, into one giant file for compilation. And along the same lines, instead of feeding several files into gcc, I paste them all together and give it only that one file.
I'm interested in the optimization differences, instead of problems (horror) that would come with maintaining a single source file of gargantuan proportions.
Granted, with separate files there can only be link-time optimization (I may be wrong), but is there a big difference in the optimization possibilities?
If the compiler can see all the source code, it can optimize better, provided it has some kind of interprocedural optimization (IPO) option turned on. IPO differs from other compiler optimizations because it analyzes the entire program; other optimizations look at only a single function, or even a single block of code.
Here are some interprocedural optimizations that can be done (see here for more; a small illustration follows the list):
Inlining
Constant propagation
mod/ref analysis
Alias analysis
Forward substitution
Routine key-attribute propagation
Partial dead call elimination
Symbol table data promotion
Dead function elimination
Whole program analysis
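As a small illustration of two items from this list (inlining plus constant propagation), assuming the compiler sees both definitions at once:

/* With both definitions visible, the compiler can inline square()
   into area(), propagate the constant 5, and fold the body down to
   "return 25;". If square() lived in a separately compiled library,
   this would not be possible without IPO/LTO. */
static int square(int x) { return x * x; }

int area(void) { return square(5); }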
GCC supports this kind of optimization.
This interprocedural optimization can be used to analyze and optimize the functions being called.
If the compiler cannot see the source code of a library function, it cannot do such optimization.
Note that some modern compilers (clang/LLVM, icc and recently even gcc) now support link-time optimization (LTO) to minimize the effect of separate compilation. Thus you gain the benefits of separate compilation (maintenance, faster compilation, etc.) and those of whole-program analysis.
By the way, it seems like gcc has supported -fwhole-program and --combine since version 4.1. You have to pass all source files together, though.
Finally, since Boost is mostly header files (templates) that are #included anyway, you cannot gain anything from pasting them into your source code.