I compare the following compilers:
GCC (x86) from Debian (6.3.0) produces hugo_x86.so with 7.8K,
GCC (ppc) from Codesourcery (4.6.0) produces hugo_46.so with 6.6K,
GCC (x86) from Buildroot 2017.08 (7.2.0) produces hugo.so with 7.5K,
GCC (ppc) from Buildroot 2017.08 (7.2.0) produces hugo.so with 67K,
GCC (ppc) from Buildroot 2017.08 (6.4.0) produces hugo.so with 67K and
GCC (ppc) from Buildroot 2017.08 (4.9.4) produces hugo.so with 67K
The code was taken from eli.thegreenplace.net:
int myglob = 42;
int ml_func(int a, int b)
{
    myglob += a;
    return b + myglob;
}
I've compiled all sources like this:
powerpc-linux-gcc -c -o hugo.o hugo.c
powerpc-linux-gcc --shared -o hugo.so hugo.o
The difference between the files seems to be padding (hexdump hugo.so | wc -l):
Debian (6.3.0): 381 lines
Codesourcery (4.6.0): 283 lines
Buildroot 2017.08 (7.2.0): 298 lines
Buildroot 2017.08 (6.4.0): 294 lines
(objdump -s shows a similar result)
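A more direct way to see where the extra bytes live is to compare the section and program headers; a minimal sketch, using the file names from above (the host readelf and size also handle the PowerPC files):
size -A hugo_46.so
size -A hugo.so
readelf -S --wide hugo.so    # per-section sizes and file offsets
readelf -l --wide hugo.so    # segment alignment shows up in the Align column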
Questions:
How can I best analyze the differences in the object files? (objdump, readelf, etc.)
How can I best analyze the differences between the compilers (e.g. default options)? (e.g. -dumpspecs)
What could be the reason for the bloated shared object? How can I reduce the file size?
Thanks!
--
Edit:
It is also independent of the GCC specs. I've dumped the specs (-dumpspecs) of the Codesourcery (4.6.0) GCC, which produces a small shared object, used them with the Buildroot GCC (-specs=), and again got a 67K shared object.
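For completeness, a sketch of that comparison (the compiler names are illustrative; use whatever prefixes the two toolchains install):
powerpc-codesourcery-gcc -dumpspecs > cs.specs
powerpc-buildroot-gcc -dumpspecs > br.specs
diff cs.specs br.specs
# built-in defaults can also be compared directly:
powerpc-codesourcery-gcc -Q --help=common > cs.defaults
powerpc-buildroot-gcc -Q --help=common > br.defaults
diff cs.defaults br.defaults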
from How to reduce ELF section padding?:
It looks like this is due to binutils 2.27 increasing the default page size of PowerPC targets to 64k, resulting in bloated binaries on embedded platforms.
There's a discussion on the crosstool-NG github here.
Configuring binutils with --disable-relro should improve things.
You can also add -Wl,-z,max-page-size=0x1000 to gcc when compiling.
When adding BR2_BINUTILS_EXTRA_CONFIG_OPTIONS="--disable-relro" to my Buildroot configuration, the shared object size is reduced.
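If rebuilding binutils is not an option, the page size can also be capped at link time; a minimal sketch, reusing the commands from the question:
powerpc-linux-gcc -c -o hugo.o hugo.c
powerpc-linux-gcc -shared -Wl,-z,max-page-size=0x1000 -o hugo.so hugo.o
readelf -l --wide hugo.so    # the LOAD segments should now report Align 0x1000 instead of 0x10000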
Related
I have a small C program which I need to run on different chips.
The executable should be smaller than 32kb.
For this I have several toolchains with different compilers for arm, mips etc.
The program consists of several files, each of which is compiled to an object file; these are then linked together into an executable.
When I use the system gcc (x86) my executable is 15kb big.
With the arm toolchain the executable is 65kb big.
With another toolchain it is 47kb.
For arm, for example, all the objects included in the executable add up to 14kb.
The objects are compiled with the following options:
-march=armv7-m -mtune=cortex-m3 -mthumb -msoft-float -Os
For linking the following options are used:
-s -specs=nosys.specs -march=armv7-m
The nosys.specs library is 274 bytes big.
Why is my executable still so much bigger (65kb) when my code is only 14kb and the library 274 bytes?
Update:
After suggestions from the answer I removed all malloc and printf calls from my code and removed the unused includes. I also added the compile flags -ffunction-sections -fdata-sections and the linker flag --gc-sections, but the executable is still too big.
For experimenting I created a dummy program:
int main()
{
    return 1;
}
When I compile the program with different compilers I get very different executable sizes:
8.3 KB : gcc -Os
22 KB : r2-gcc -Os
40 KB : arm-gcc --specs=nosys.specs -Os
1.1 KB : avr-gcc -Os
So why is my arm-gcc executable so much bigger?
The avr-gcc executable does static linking as well, I guess.
Your x86 executable is probably being dynamically linked, so any standard library functions you use -- malloc, printf, string and math functions, etc -- are not included in the binary.
The ARM executable is being statically linked, so those functions must be included in your binary. This is why it's larger. To make it smaller, you may want to consider compiling with -ffunction-sections -fdata-sections, then linking with --gc-sections to discard any unused functions or data from your binary.
(The "nosys.specs library" is not a library. It's a configuration file. The real library files are elsewhere.)
Porting embedded software depends on the target hardware and software platform.
Hardware platforms are split into MCUs and CPUs that can run a Linux OS.
The software platform includes the compiler toolchain and the libraries.
It is meaningless to compare program image sizes between an MCU and an x86 hardware platform.
But it is worth comparing image sizes on the same type of CPU across different toolchains.
I'm trying to build a program about blender deformation. My laptop is amd64 and also supports i386.
faye#fish:/usr/bin$ dpkg --print-foreign-architectures
i386
faye#fish:/usr/bin$ dpkg --print-architecture
amd64
I don't have much experience with makefile scripts. Based on what I found via a Google search, I added two lines to makefile.mk.
# -I option to help the compiler finding the headers
CFLAGS += $(addprefix -I, $(INCLUDE_PATH))
CC=gcc -m32
Here is the issue:
When I run any template OpenGL code with:
gcc test.c -lm -lpthread -lglut -lGL -lGLU -o test
It seems the code and the libs work correctly.
However, if I do the same through makefile.mk (CC=gcc), I get many errors of the following form:
/usr/bin/ld: i386 architecture of input file `../external/lib/libxxxxxxxx.a(xxxxxxxx.o)' is incompatible with i386:x86-64 output
If I use CC = gcc -m32, the errors switch to:
/usr/bin/ld: cannot find -lglut
/usr/bin/ld: cannot find -lGL
/usr/bin/ld: cannot find -lGLU
I guess something is going wrong with linking 32-bit applications and libraries on a 64-bit OS?
-m32, when used with an x86-64 compiler, tells it to emit 32-bit (i386) objects and executables instead of the default 64-bit (x86-64) ones.
What you have there are libraries that were compiled for i386, which you then tried to combine with a program compiled for x86-64; that is exactly what the first linker error is complaining about, and the two simply don't fit together. The big question here, of course, is why you want to use those i386 binaries at all. There are some good reasons for sticking with 32-bit pointers (half the size, which can massively reduce memory bandwidth), but in general you want 64-bit binaries: many problems of 32-bit memory management vanish by virtue of having vast amounts of address space.
It looks like external/lib is full of 32-bit precompiled archives. You could track each one down and recompile it (or use shared libraries), but that'll be a massive PITA.
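A quick way to confirm what those archives contain (the path is taken from the error message; the output format may vary slightly between binutils versions):
objdump -f ../external/lib/*.a | grep -E 'file format|architecture'
# i386 members report "elf32-i386", x86-64 members report "elf64-x86-64"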
Just because your OS supports i386 doesn't mean you've got the libraries installed. In the case of this program, it's enough to install the libc6-dev-i386 and freeglut3-dev:i386 packages.
PS: No need to edit anything; just run make CC='gcc -m32'.
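A sketch of the complete fix on a Debian/Ubuntu-style system (package names as in the answer; the output binary name is whatever the makefile produces):
sudo dpkg --add-architecture i386     # only needed if i386 is not already a foreign architecture
sudo apt-get update
sudo apt-get install libc6-dev-i386 freeglut3-dev:i386
make CC='gcc -m32'
file <your-binary>                    # should report "ELF 32-bit LSB executable, Intel 80386"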
Why does gcc generate different executables for different source file names?
To test this, I have the following C program saved as both test.c and test2.c:
int main(){}
"gcc test.c -o test" and "gcc test2.c -o test2" generate different output files. Using a hex-editor I can see that there still is its source-filename hidden in it. Stripping the files still results in different results (the source-filename is gone). Why does gcc operate this way? I tested clang and tcc as well. Clang behaves the like gcc does, whereas tcc generates the same results for different filenames?
gcc version 4.9.1 (Debian 4.9.1-1)
clang 3.4.2-4
tcc version 0.9.25
Doing a diff on the hexdump of both binaries shows a small difference at around offset 0x0280. Looking through the sections (via objdump -x), the differences appear in the .note.gnu.build-id section. My guess is that this provides some sort of UUID for distinguishing different builds of otherwise similar code, as well as validating debug info (referenced here, about a third of the way down).
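To confirm this (and to make the two outputs match), the note can be inspected and, if desired, suppressed at link time; a minimal sketch:
readelf -n test
readelf -n test2                      # each shows a different "Build ID" hash
gcc test.c -o test -Wl,--build-id=none
gcc test2.c -o test2 -Wl,--build-id=none
# after stripping, the two binaries should then be identical (or at least much closer)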
The -o option of gcc specifies the output file. If you give it different -o targets, it will generate different files.
gcc test.c -o foo
And you have a foo executable.
Also, note that without a -o option, gcc will output an a.out executable.
I am trying to learn the C calling conventions in assembly language. To do so, I made a simple program using the puts function from the C standard library.
I assembled and linked the program with the following commands:
nasm -f elf file.asm
gcc -m32 file.asm -o file
nasm produces the right object file, but when running gcc to link the object files, I get an error.
Looking at the error, I figured out that I don't have the 32-bit version of glibc on my system. How can I install it? I already have this package installed.
I have 64 bit ubuntu 12.04 as my OS.
EDIT: I have installed the following packages, but the problem is still not solved:
1) ia32-libs
2) libc6-i386
This command will install the 32bit glibc libraries on 64 bit Ubuntu:
sudo apt-get install gcc-multilib
This is the proper syntax for linking assembly object code into an executable using gcc:
gcc -m32 objectfile.o -o executablefile
(nasm -felf32 already creates objectfile.o; the .asm file should not appear on GCC's command line. GCC can assemble+link a .S file in one step using GAS syntax, but NASM is a separate package.)
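For the calling-convention part of the question, a minimal 32-bit NASM sketch that calls puts (the file name and string are illustrative; on newer distros where PIE is the default you may also need -no-pie when linking):
; file.asm: call puts via the i386 cdecl convention
extern puts
global main

section .rodata
msg: db "hello, world", 0

section .text
main:
    sub  esp, 8        ; keep the stack 16-byte aligned at the call (SysV i386 ABI)
    push msg           ; cdecl: arguments are pushed on the stack, right to left
    call puts
    add  esp, 12       ; the caller removes its own arguments (plus the padding)
    xor  eax, eax      ; return 0 from main
    ret
Assembled and linked as described above:
nasm -f elf32 file.asm
gcc -m32 file.o -o file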
I assembled and linked the program with the following commands:
nasm -f elf file.asm
gcc -m32 file.asm -o file
This is wrong. Your first nasm command is probably creating a file.o file (and you should check that, e.g. with ls -l file.o). The second gcc command does not do what you wish.
gcc does not know about the *.asm file extension (it knows .S for preprocessable GNU assembler syntax and .s for plain assembler code, and probably treats unknown extensions like .asm as ELF object files by default; however, file.asm is not an ELF object file). You should try linking with
gcc -Wall -v -m32 file.o -o file
Notice that you are giving GCC an ELF object file (for the linker invoked by gcc) which you previously produced with nasm.
(you might later remove the -v option to gcc)
Alternatively, use the GNU as assembler syntax (not the nasm one), name your file file.S (if you want it to be preprocessed) or file.s (without preprocessing) and use gcc -v -Wall -m32 file.s -o myprog to compile it.
BTW, to understand more about calling conventions, read the x86-64 ABI spec (and the similar one for 32-bit x86), make a small C example file some-example.c, then run gcc -S -fverbose-asm -O some-example.c and look into the produced some-example.s with an editor or pager.
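For instance, a tiny some-example.c (the functions are illustrative) whose generated some-example.s shows the argument passing and stack handling:
/* some-example.c: a minimal file for studying the calling sequence */
int callee(int a, int b)
{
    return a + b;
}

int caller(void)
{
    return callee(1, 2);   /* inspect how these arguments are passed in some-example.s */
}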
Also learn more about ELF, then use readelf (and objdump) appropriately.
You want to install a package called 'ia32-libs'
I'm trying to compile some code (not mine) that consists of mixed Fortran and C source files, which are compiled into a library. This library can either be linked against directly, or (more usefully) driven from a Python class. I have previously successfully built the code as 32-bit with g77 and gcc, but I've encountered a situation in which the code uses big chunks of memory and needs to be 64-bit.
I've attempted to build it both as 64-bit only and as a universal binary, with gfortran 4.2.3 (binary dist from the AT&T R project) and the system gcc (4.2). The source files build correctly, but when I attempt to link against the library, I get many "Undefined Symbols" errors for a number of the Fortran functions. Running nm on the library shows that the symbols appear to exist, but obviously the linker isn't finding them.
Here are two (of many) of the compile commands (which produce no errors):
/usr/local/bin/gfortran -arch ppc -arch i386 -arch x86_64 -fPIC -fno-strength-reduce -fno-common -ff2c -Wall -c lsame.f
gcc -c -I/Users/keriksen/Research/atomic_data/fac -I/Users/keriksen/Research/atomic_data/fac/faclib -O2 -fPIC -fno-strength-reduce -fno-common pmalloc.c
And the link step, which bombs:
gcc -o sfac sfac.c stoken.c -I/Users/keriksen/Research/atomic_data/fac -I/Users/keriksen/Research/atomic_data/fac/faclib -O2 -fPIC -fno-strength-reduce -fno-common -L/Users/keriksen/Research/atomic_data/fac -lfac -lm -lgfortran -lgcc
A sample Undefined Symbol:
"_acofz1", referenced from:
_HydrogenicDipole in libfac.a(coulomb.o)
_HydrogenicDipole in libfac.a(coulomb.o)
and the corresponding nm output showing that the symbol exists:
niobe:atomic_data/fac[14] nm libfac.a | grep acof
0000000000000000 T _acofz1_
0000000000002548 S _acofz1_.eh
U _acofz1
Am I doing something stupid, like not including a necessary switch to the linker, or is something more subtle going on here?
Per Yuji's suggestion:
The cfortran.h header file (available on the Web, and apparently fairly widely used), which handles the C/Fortran compatibility issues, does not support gfortran out of the box. I hacked it (incorrectly) to ignore this fact, and paid for my insolence.
The specific issue is that gfortran emits object code whose symbols carry a trailing underscore, while gcc (on the C side) does not add one.
The correct thing to do was to set the environment variable CPPFLAGS to -Df2cFortran. Once this macro is set, cfortran.h adds the necessary underscore to the symbols in the calling C functions' object code, and the linker is happy.
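A minimal sketch of what that mismatch looks like on the C side (the symbol name comes from the nm output above; the argument list is hypothetical, since the question does not show it):
/* what gfortran actually exports for the Fortran routine acofz1 */
extern void acofz1_(double *z, int *n);     /* hypothetical signature */

void call_acofz1(double *z, int *n)
{
    /* referring to acofz1 (no trailing underscore) is what produced the
       undefined _acofz1 symbol; with -Df2cFortran, cfortran.h generates
       the underscored name automatically. */
    acofz1_(z, n);
}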
The other surprising thing in this particular instance was that the configure script looks at the F77 environment variable for the name of the Fortran compiler. Once I set F77 to "/usr/local/bin/gfortran -arch x86_64", the generated Makefile was correct and produced 64-bit-only object code. I'm not sure if this is standard configure behavior or a peculiarity of this particular script.
The upshot is that I now have a 64-bit shared library which plays nicely with a 64-bit python I downloaded. I'm half-considering going back and trying to produce a universal library that will work with the system python, but I'm not sure I want to tempt fate.