I wonder if there is any benefit to using the -S GCC option in my Makefiles.
I've been compiling C files like the following for quite some time now:
gcc -c a.c -o a.o
gcc -c b.c -o b.o
---
gcc a.o b.o -o a.out
Now, would it be better to do the following instead:
gcc -S a.c -o a.s
gcc -S b.c -o b.s
---
gcc -c a.s -o a.o
gcc -c b.s -o b.o
---
gcc a.o b.o -o a.out
There also appears to be the option of skipping the .o phase and assembling the .s files directly into a binary. Which option do you think is best, and why?
The -S flag asks gcc to produce human-readable assembly code; .o files are fine for a linker but rather cryptic for most human beings...
It is mainly used when you need low level optimization of a (short) piece of code that has been identified by profiling as being a bottleneck. You can compare how the compiler will translate various versions and choose the one that will give the most efficient machine code for that specific implementation.
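For example, assuming two candidate implementations of the same routine in files sum_v1.c and sum_v2.c (hypothetical names), you could emit and compare the assembly for each:
gcc -O2 -S -fverbose-asm sum_v1.c -o sum_v1.s
gcc -O2 -S -fverbose-asm sum_v2.c -o sum_v2.s
diff sum_v1.s sum_v2.s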
It is not intended to be used in standard makefiles.
There also appears to be the option of skipping the .o phase and assembling the .s files directly into a binary.
Plain assembly is never transformed directly into an executable; there is always an intermediate object-file step.
gcc a.s b.s -o ab.exe
will always call the assembler (twice), which produces object code for each unit, and then the objects are linked. Add -v to the command line to see which sub-commands gcc executes. gcc is not actually a compiler; it is just a driver program that invokes sub-programs depending on options and file extensions. The compiler proper is cc1 (for C code), cc1plus (for C++ code), etc.
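For example (reusing a.c from the question), the following makes gcc print the cc1 and as invocations it performs internally:
gcc -v -c a.c -o a.o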
Which option do you think is best, and why?
-S has the advantage of producing assembly code; however, the compiler always generates assembly as an intermediate step anyway. It is simply written to temporary files, with two notable exceptions:
-save-temps: instead of using temporary file names (for example in /tmp), this saves the intermediate code in the same place as the objects (there are actually two flavors, -save-temps=obj and -save-temps=cwd).
-pipe: this uses pipes instead of files to transfer code from one sub-program to the next (except with -save-temps, which nullifies -pipe).
Thus, if you want to see the generated assembly, -save-temps might be the way to go. That option also keeps the pre-processed code, saved as .i for C and .ii for C++ (with the assembly in .s), which is often much appreciated when working with C macros.
If you intend to inspect the compiler-generated assembly, you might enjoy -fverbose-asm, which injects asm comments indicating the C/C++ source associated with the assembly. It can also be a good idea not to clutter the assembly with debug info in that case.
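As a small illustration (a.c is just a placeholder), the two ideas can be combined; this keeps a.i and a commented a.s next to the object, without debug-info clutter:
gcc -O2 -c -save-temps=obj -fverbose-asm -g0 a.c -o a.o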
I have reinstalled MinGW on my system and downloaded the gcc compiler.
I was surprised that after compiling my first file, "subject.c", the executable gcc produced was named "a.exe". I expected it to be "subject.exe" but do not know why this happened.
Can anyone please explain the reason behind this?
expected:
gcc subject.c
ls
subject.c subject.exe
tried:
gcc subject.c
ls
subject.c a.exe
-o can be used to give the name of the output file.
For example,
gcc -Wall -Wextra -pedantic subject.c -o subject.exe
(Do enable your compiler's warnings!)
In the absence of other instructions, gcc names its output file a.out or a.exe (depending on the system environment) because that is its default behavior.
To override this default behavior, you can use the -o flag which tells gcc that the next argument is the desired name for the output file. For instance:
gcc -o subject.exe subject.c
There is no automatic functionality built into gcc to strip a source file of its extension and append .exe, but this can be done with Makefiles or similar scripts. For instance, you can write a Makefile with the following contents:
%.exe: %.c
	gcc -o $@ $<
Then a command like make subject.exe would be translated to gcc -o subject.exe subject.c, which may be what you're looking for.
There is functionality built into gcc to strip source files of their extensions during other parts of the compilation process, which may be what confused you. For instance, a call like gcc -c subject.c can be expected to produce an object file called subject.o; likewise, gcc -S subject.c can be expected to produce an assembly-language file called subject.s. This does not apply to executables, not only for historical reasons but also because a program can be built from multiple source files, so there is not always an obvious way to choose a name for the executable output.
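To make that concrete (other.c is a hypothetical second source file):
gcc -c subject.c        # produces subject.o
gcc -S subject.c        # produces subject.s
gcc subject.c other.c   # produces a.exe (or a.out): no single obvious name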
I'm building an IR-level pass for LLVM which instruments functions with calls to my runtime library.
So far I have used the following lines to compile a C file with my pass, link it with the runtime library, and guarantee that the runtime library's function calls are inlined.
Compiling source to IR...
clang -S -emit-llvm example.c -o example-codeIR.ll -I ../runtime
Running Pass with opt...
opt -load=../build/PSS/libPSSPass.so -PSSPass -overwrite -always-inline -S -o example-codeOpt.ll example-codeIR.ll
Linking IR with runtime library...
llvm-link -o example-linked.bc example-codeOpt.ll ../runtime/obj/PSSutils.ll
Compiling bitcode to binary...
clang -ldl -O3 -o example example-linked.bc ../initializer/so/shim.so
Now I would like to test my pass with the LLVM test suite, and the only thing I can do there is pass flags to the test suite. I can't control the individual compilation steps or generate so many files for each test case.
Is there a way to do the same as above without having to save intermediate files and yet keep the order of the steps?
I have tried the following:
clang -ldl -Xclang -load -Xclang ../build/PSS/libPSSPass.so ../initializer/so/shim.so ../runtime/obj/PSSutils.ll $<
But I ran into the problem that I can't compile IR files and .c files together in one invocation.
If I compile the runtime library to an object file, its functions will no longer be inlined, and inlining them is the main goal of the steps above.
So, to answer my question:
First of all, calls into shared objects are never inlined, so the shared objects mentioned above should be compiled to regular object files instead. The -flto=thin flag should be used when compiling those objects so that a summary of the functions is built and the linker can perform link-time optimization.
In the final step of compiling the target, you also need to pass the -flto=thin flag, and the compiler will do the magic for you.
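A rough sketch of what that could look like, using the paths from the question (the runtime source name PSSutils.c is an assumption, and the shim/linker details are omitted):
clang -O3 -flto=thin -c ../runtime/PSSutils.c -o PSSutils.o
clang -O3 -flto=thin -Xclang -load -Xclang ../build/PSS/libPSSPass.so -c example.c -o example.o
clang -O3 -flto=thin -ldl example.o PSSutils.o -o example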
I have a C code and I want to call the functions in it from R by creating a shared object and dynamically loading that object in R. The code to create a shared object in R is:
R CMD SHLIB myfile.c
And the general way is:
gcc -c -Wall -Werror -fpic myfile.c
gcc -shared -o myfile.so myfile.o
I am wondering whether there is any difference between the two myfile.so files created by those different pieces of code in terms of usage in R. The sizes of the two files are quite different (17 KB and 32 KB), which confused me.
When you do
gcc -c -Wall -Werror -fpic myfile.c
gcc -shared -o myfile.so myfile.o
you are missing several flags that R CMD SHLIB uses, like the optimization flag -O2, the debugging flag -g, etc. Why not have a look at what is printed to the screen when you run:
R CMD SHLIB myfile.c
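On a typical Linux setup that prints something roughly like the following (the exact flags and R include/library paths come from R's Makeconf and vary by platform, so treat this only as an approximation):
gcc -fpic -O2 -g -c myfile.c -o myfile.o
gcc -shared -o myfile.so myfile.o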
The flags I have mentioned influence code size as well as the efficiency of your compiled code; the resulting object code is different. You can use a disassembler:
objdump -d myfile.so
to check (binary) assembly code as well as code size. You can also use
gcc -S -Wall -Werror -fpic myfile.c
to check the (readable) assembly code. You will see a huge difference depending on whether you use -O2 or not.
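For a quick side-by-side comparison you could, for example, generate both variants and diff them:
gcc -S -O0 -fpic myfile.c -o myfile_O0.s
gcc -S -O2 -fpic myfile.c -o myfile_O2.s
diff myfile_O0.s myfile_O2.s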
Godbolt Compiler Explorer is an interactive web tool for inspecting compiler output. You type C code into the left-hand pane, choose a compiler, compilation flags, output display options, etc., and the assembly appears in the right-hand pane. This is super convenient for HPC code writers evaluating and optimizing their code. For you, it is a handy way to compare the difference in generated code.
I am trying to view the assembly for my simple C application. First, I tried to produce assembly from the binary using objdump, which gives a file of about 4.3 MB with 103228 lines of assembly code. Then I tried to do so by providing the -S and -save-temps flags to gcc.
I have used the following three commands:
1. arm-linux-gnueabi-objdump -d hello_simple > hello_simple.dump
2. arm-linux-gnueabi-gcc -save-temps -static hello_simple.c -o hello_simple -lm
3. arm-linux-gnueabi-gcc -S -static hello_simple.c -o hello_simple.asm -lm
In cases 2 and 3, exactly the same result is produced, i.e., 65 lines of assembly code. I understand objdump produces some extra details too.
But, why is there a huge difference?
EDIT1: I have used the following command to build that binary:
arm-linux-gnueabi-gcc -static hello_simple.c -o hello_simple -lm
EDIT2: Though the -static and -lm flags may look unnecessary here, I have to execute this binary on a simulator after compile-time additions of some assembly components, which makes them a must.
So, which assembly code should I consider as the most relevant during my analysis of execution traces? (I know it's another question but it would be handy to cover it in the same answer.)
The last two commands just save the asm for your functions.
The first one also includes the CRT startup code and, since you statically linked, all the library functions you called.
Note that for 3, -static and -lm don't do anything, because you're not linking. gcc foo.c -S -O3 -fverbose-asm -o- | less is often handy.
I notice that none of your command lines included a -O3, or a -march=. You should compile with optimization on, and have gcc optimize your code for the target hardware.
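For example (the -march value below is only a guess; pick the one that matches your target):
arm-linux-gnueabi-gcc -O3 -march=armv7-a -S -fverbose-asm hello_simple.c -o hello_simple.s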
.s is the standard suffix for machine-generated asm. (.S for hand-written asm: gcc foo.S will run it through cpp first). gcc -S produces a .s, the same way -c produces a .o.
For x86, .asm is usually only used for Intel-syntax (NASM/YASM), but IDK what the conventions are for ARM.
So, which assembly code should I consider as the most relevant during my analysis of execution traces?
It depends what you're trying to learn! If you have a good sense of how "expensive" each library function call is (in terms of number of instructions, number of branches polluting the branch-predictors, and data-cache pollution), then you don't need to trace execution through library calls. If you have math library functions that are used from some of your inner loops, then it's worth looking at them if the code is time-critical.
Usually a profiler or single-stepping in a debugger is useful for that, though. Just having disassembly output of a lot of library code is usually just clutter.
I want clang to compile my C/C++ code to LLVM bitcode rather than a binary executable. How can I achieve that?
And if I have the LLVM bitcode, how can I further compile it to a binary executable?
I want to add some of my own code to the LLVM bitcode before compiling to a binary executable.
Given some C/C++ file foo.c:
> clang -S -emit-llvm foo.c
Produces foo.ll which is an LLVM IR file.
The -emit-llvm option can also be passed to the compiler front end directly (rather than the driver) by means of -cc1:
> clang -cc1 foo.c -emit-llvm
Produces foo.ll with the IR. -cc1 adds some cool options like -ast-print. Check out -cc1 --help for more details.
To compile LLVM IR further to assembly, use the llc tool:
> llc foo.ll
Produces foo.s with assembly (defaulting to the machine architecture you run it on). llc is one of the LLVM tools - here is its documentation.
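From there, one simple way (among others) to get an executable is to hand the .s file back to clang, which assembles and links it:
clang foo.s -o foo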
Use
clang -emit-llvm -o foo.bc -c foo.c
clang -o foo foo.bc
If you have multiple source files, you probably actually want to use link-time optimization to output one bitcode file for the entire program. The other answers given will leave you with a bitcode file for every source file.
Instead, you want to compile with link-time optimization:
clang -flto -c program1.c -o program1.o
clang -flto -c program2.c -o program2.o
and for the final linking step, add the argument -Wl,-plugin-opt=also-emit-llvm
clang -flto -Wl,-plugin-opt=also-emit-llvm program1.o program2.o -o program
This gives you both a compiled program and the bitcode corresponding to it (program.bc). You can then modify program.bc in any way you like, and recompile the modified program at any time by doing
clang program.bc -o program
although be aware that you need to include any necessary linker flags (for external libraries, etc) at this step again.
Note that you need to be using the gold linker for this to work. If you want to force clang to use a specific linker, create a symlink to that linker named "ld" in a special directory called "fakebin" somewhere on your computer, and add the option
-B/home/jeremy/fakebin
to any linking steps above.
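For example (the path to the gold linker is an assumption; adjust it, and the fakebin location, to your machine):
mkdir -p /home/jeremy/fakebin
ln -s /usr/bin/ld.gold /home/jeremy/fakebin/ld
clang -flto -Wl,-plugin-opt=also-emit-llvm -B/home/jeremy/fakebin program1.o program2.o -o program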
If you have multiple files and you don't want to have to type each file, I would recommend that you follow these simple steps (I am using clang-3.8 but you can use any other version):
generate all .ll files
clang-3.8 -S -emit-llvm *.c
link them into a single one
llvm-link-3.8 -S -v -o single.ll *.ll
(Optional) Optimise your code (maybe some alias analysis)
opt-3.8 -S -O3 -aa -basicaa -tbaa -licm single.ll -o optimised.ll
Generate assembly (produces an optimised.s file)
llc-3.8 optimised.ll
Create executable (named a.out)
clang-3.8 optimised.s
Did you read the clang documentation? You're probably looking for -emit-llvm.