I'm trying to compile an executable (ELF file) that does not use a dynamic loader. I built a cross compiler that targets MIPS from Linux, to be used on a simulator I made. I passed the flag -static-libgcc when compiling my hello.cpp file (a hello world program). Apparently this is not enough, because there is still a segment in my executable which contains the name/path of the dynamic loader. What flags do I use to generate an executable which contains EVERYTHING needed to run? Do I need to rebuild my cross compiler?
Use the following flags for linking
-static -static-libgcc -static-libstdc++
These three flags make the link use the static versions of all dependencies (assuming gcc). Note that in certain situations you don't necessarily need all three, but they don't hurt either, so just turn all of them on.
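For example, with a hypothetical cross-compiler prefix of mips-linux-gnu- (substitute whatever your toolchain is actually called), the link step might look like:
mips-linux-gnu-g++ -o hello hello.cpp -static -static-libgcc -static-libstdc++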
Check if it actually worked
Make sure that there is really no dynamic linkage
ldd yourexecutable
should return "not a dynamic executable" or something equivalent.
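You can also inspect the program headers directly to confirm that the PT_INTERP segment (the one holding the dynamic loader path mentioned in the question) is gone:
readelf -l yourexecutable | grep -i interp
A fully static executable should produce no output here.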
Make sure that there are no unresolved symbols left
nm yourexecutable | grep " U "
The list should be empty or should contain only some special kernel-space symbols like
U __tls_get_addr
Finally, check if you can actually execute your executable
Try using the -static flag?
Related
I have a bunch of object files that have been compiled without the -fPIC option, so calls to the functions do not go through the PLT (the source code is C and is compiled with clang).
I want to link these object files into a shared library that I can load at runtime using dlopen. I need to do this because I have to do a lot of setup before the actual .so is loaded.
But every time I try to link with the -shared option, I get the error -
relocation R_X86_64_PC32 against symbol splay_tree_lookup can not be used when making a shared object; recompile with -fPIC
I have no issues recompiling from source. But I don't want to use -fPIC. This is part of a research project where we are working on a custom compiler. PIC wouldn't work for the type of guarantees we are trying to provide in the compiler.
Is there some flag I can use with ld so that it generates load-time-relocated libraries? In fact, I am okay with no relocations at all: I can provide a base address for the library, and dlopen can fail if that virtual address is not available.
The command I am using for compiling my C files is equivalent to:
clang -m64 -c foo.c
and for linking I am using
clang -m64 -shared *.o -o foo.so
I say equivalent because it is a custom compiler (forked off clang) and has some extra steps. But it is equivalent.
It is not possible to dynamically load your existing non-PIC objects and expect them to work without problems.
If you cannot recompile the original code to create a proper shared library that supports PIC, then I suggest you create a service executable that links to a static library composed of those objects. The service executable can then provide IPC/RPC/REST API/shared memory/whatever to allow your object code to be used by your program.
Then, you can author a shared library which is compiled with PIC that provides wrapper APIs that launches and communicates with the service executable to perform the actual work.
On further thought, this wrapper API library may as well be static. The dynamic aspect of it is performed by launching the service executable.
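A minimal sketch of what the wrapper side might look like, assuming a hypothetical service executable called splay_service (built from the non-PIC objects as an ordinary static executable) and a trivial one-line-request, one-line-reply protocol over a pipe. Every name here is made up for illustration:

/* wrapper.c - PIC-safe wrapper that forwards a request to a separate
 * service executable instead of calling the non-PIC code directly. */
#include <stdio.h>
#include <string.h>

/* Ask the service executable to perform a lookup and copy its one-line
 * reply into 'reply'. Returns 0 on success, -1 on failure. */
int service_lookup(const char *key, char *reply, size_t reply_len)
{
    char cmd[256];
    snprintf(cmd, sizeof cmd, "./splay_service %s", key);  /* hypothetical program */

    FILE *p = popen(cmd, "r");               /* launch the service */
    if (!p)
        return -1;
    if (!fgets(reply, (int)reply_len, p)) {  /* read its single-line reply */
        pclose(p);
        return -1;
    }
    reply[strcspn(reply, "\n")] = '\0';      /* strip trailing newline */
    return pclose(p) == 0 ? 0 : -1;
}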
Recompiling the library's object files with the -fpic -shared options would be the best option, if this is possible!
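Mirroring the commands from the question, that rebuild would just add the PIC flag at compile time:
clang -m64 -fPIC -c foo.c
clang -m64 -shared *.o -o foo.so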
man ld says:
-i Perform an incremental link (same as option -r).
-r
--relocatable
Generate relocatable output---i.e., generate an output file that can in turn serve as input to ld. This is often called partial linking. As a side effect, in environments that support standard Unix magic numbers, this option also sets the output file’s magic number to "OMAGIC". If this option is not specified, an absolute file is produced. When linking C++ programs, this option will not resolve references to constructors; to do that, use -Ur.
When an input file does not have the same format as the output file, partial linking is only supported if that input file does not contain any relocations. Different output formats can have further restrictions; for example some "a.out"-based formats do not support partial linking with input files in other formats at all.
I believe you can partially link your library object files into a relocatable (PIC) library, then link that library with your source code object file to make a shared library.
ld -r -o libfoo.so *.o
cp libfoo.so /foodir/libfoo.so
cd foodir
clang -m32 -fpic -c foo.c
clang -m32 -fpic -shared *.o -o foo.so
Regarding library base address:
(Again from man ld)
--section-start=sectionname=org
Locate a section in the output file at the absolute address given by org. You may use this option as many times as necessary to locate multiple sections in the command line. org must be a single hexadecimal integer; for compatibility with other linkers, you may omit the leading 0x usually associated with hexadecimal values. Note: there should be no white space between sectionname, the equals sign ("="), and org.
You could perhaps move your library's .text section?
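For example, something along these lines might work with the GNU linker, passing the option through the compiler driver with -Wl (the address is purely illustrative):
clang -m64 -shared *.o -o foo.so -Wl,--section-start=.text=0x70000000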
--image-base value
Use value as the base address of your program or dll. This is the lowest memory location that will be used when your program or dll is loaded. To reduce the need to relocate and improve performance of your dlls, each should have a unique base address and not overlap any other dlls. The default is 0x400000 for executables, and 0x10000000 for dlls. [This option is specific to the i386 PE targeted port of the linker]
I am learning C and I just read the term "multiple compilation". Until now I had a single file.c and I used the command gcc file.c to compile it and then ./a.out to execute it. But I got a little confused. When should I use multiple compilation instead of single compilation, and what possible reasons would lead me to prefer it? I searched and found some articles (one of them for C++), but they didn't fully cover my questions. If I understood correctly, if I have several .c files in my project, e.g. file1.c and file2.c, and I want to link them, I execute
gcc file1.c
gcc file2.c
gcc file1.o file2.o // somehow I have to create the .o files first...
Thank you..
Compilation takes time. There's no point in re-compiling C code that hasn't changed. So, for large projects, it makes sense to split the code into multiple files (typically not randomly of course, but into modules of different functionality) and compile them only when needed.
Linking is the process of taking a bunch of object code (what the .o files are called) and turning them into a single program.
There are many steps to compiling.
When you invoke gcc, it will by default create an executable file, i.e. perform all steps in one go:
.c -> .i preprocessor
.i -> .s compiler
.s -> .o assembler
*.o -> a.out linker
Generally the first two take up the most time. If you have a large project then recompiling the entire project may take a lot of time when you are developing. So the compiler allows you to stop at a certain point and reuse previous results of files that have not changed:
gcc -E for preprocess only (rarely used)
gcc -S compile, but don't assemble. Useful for debugging or optimising assembly
gcc -c compile, assemble, but don't link. This is the most commonly used one and produces object files. Those contain your assembled functions (object code), but they are not capable of running on their own because not all functions may be present yet, library functions are missing, and the executable header has not been linked in.
The final step, gcc -o executable *.o, will then take all those object files and link them together to create an executable, optionally linking libraries into it.
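For example, a two-file project (file names purely illustrative) could be built like this, recompiling only the file that actually changed:
gcc -c file1.c                   # produces file1.o
gcc -c file2.c                   # produces file2.o
gcc -o program file1.o file2.o   # link step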
Generally, having all functions in one source file allows the compiler to do more optimisations (e.g. inlining), but at the cost of compile time.
Have a look at https://cs.senecac.on.ca/~btp200/pages/images/compile_link.png
My build environment is CentOS 5. I have a third party library called libcunit. I installed it with autotools and it generates both libcunit.a and libcunit.so. I have my own application that links with a bunch of shared libraries. libcunit.a is in the current directory and libcunit.so and other shared libraries are in /usr/local/lib/. When I compile like:
gcc -o test test.c -L. libcunit.a -L/usr/local/lib -labc -lyz
I get a linkage error:
libcunit.a(Util.o): In function `CU_trim_left':
Util.c:(.text+0x346): undefined reference to `__ctype_b'
libcunit.a(Util.o): In function `CU_trim_right':
Util.c:(.text+0x3fd): undefined reference to `__ctype_b'
But when I compile with .so like:
gcc -o test test.c -L/usr/local/lib -lcunit -labc -lyz
it compiles fine and runs fine too.
Why is it giving error when linked statically with libcunit.a?
Why is it giving error when linked statically with libcunit.a
The problem is that your libcunit.a was built on an ancient Linux system, and depends on symbols which have been removed from libc (these symbols were used in glibc-2.2, and were removed from glibc-2.3 over 10 years ago). More exactly, these symbols have been hidden. They are made available for dynamic linking to old binaries (such as libcunit.so) but no new code can statically link to them (you can't create a new executable or shared library that references them).
You can observe this like so:
readelf -Ws /lib/x86_64-linux-gnu/libc.so.6 | egrep '\W__ctype_b\W'
769: 00000000003b9130 8 OBJECT GLOBAL DEFAULT 31 __ctype_b@GLIBC_2.2.5
readelf -Ws /usr/lib/x86_64-linux-gnu/libc.a | egrep '\W__ctype_b\W'
# no output
I didn't notice that libcunit.a is actually found in your case and that the linkage problem is rather in the CUnit library itself. Employed Russian is absolutely right, and he's not talking about a precompiled binary here; we understand that you've built it yourself. However, CUnit itself seems to be relying on a symbol from glibc which is not available for static linking anymore. As a result you have only 2 options:
File a report to the developers of CUnit about this and ask them to fix it;
Use dynamic linking.
Nevertheless, my recommendation about your style of static linkage still applies. -L. is in general bad practice: CUnit is a 3rd-party library and should not be placed into the directory containing the source files of your project. It should rather be installed in the same way as the dynamic version, i.e. like you have libcunit.so in /usr/local/lib. If you supply a prefix to Autotools at the configure stage of CUnit, Autotools will install everything properly. So if you want to link statically with CUnit, consider doing it in the following form:
gcc -o test test.c -L/usr/local/lib -Wl,-Bstatic -lcunit -Wl,-Bdynamic -labc -lyz
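As for installing CUnit itself, a typical Autotools sequence with an explicit prefix looks something like this (exact configure options may vary between CUnit versions):
./configure --prefix=/usr/local
make
make install   # run the install step with sufficient privileges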
I have a static library, liborc-0.4.a with no shared library. I have another library, libschroedinger-1.0.a (no shared) that depends on symbols in liborc-0.4.a. If I run nm on liborc-0.4.a, symbols such as orc_init show up as T (meaning they are defined). I built libschroedinger-1.0.a with the command line flag -lorc-0.4 so it saw the symbols and was ok.
However, now I have a small executable that depends on libschroedinger-1.0.a. It compiles fine, but when I run the linker
gcc -lschroedinger-1.0 -lorc-0.4 -o output input.o
It gives errors such as:
/usr/local/lib/libschroedinger-1.0.a(libschroedinger_1.0_la-schro.o):schro.c:(.text+0x21):
undefined reference to `orc_init'
gcc (more precisely, the linker) is sensitive to the order of libraries: an archive is searched only for symbols that are still undefined at the point where it appears on the command line. When liborc-0.4 is processed, nothing that needs orc_init has been seen yet, so the relevant members are not pulled in. The solution is to put the libraries at the end of the command, after the object files:
gcc -o output input.o -lschroedinger-1.0 -lorc-0.4
You most probably compiled libschroedinger against a shared liborc. A static library is just a bunch of object files in an archive, so building it doesn't need to see more than the headers. Write it like the following to be sure (the same applies to liborc):
gcc /path/to/libschroedinger-1.0.a /path/to/liborc-0.4.a -o output input.o
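To see that an archive really is just a bundle of object files, you can list its members and their symbols (paths are illustrative):
ar t /path/to/liborc-0.4.a                 # lists the member .o files
nm /path/to/liborc-0.4.a | grep orc_init   # shows where orc_init is defined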
I am attempting to cross-compile on AIX with the xlc/xlC compilers.
The code compiles successfully when it uses the default settings on another machine. The code actually successfully compiles with the cross-compilation, but the problem comes from the linker. This is the command which links the objects together:
$(CHILD_OS)/usr/vacpp/bin/xlC -q32 -qnolib -brtl -o $(EXECUTABLE) $(OBJECT_FILES)
-L$(CHILD_OS)/usr/lib
-L$(CHILD_OS)/usr/vacpp/lib/profiled
-L$(CHILD_OS)/usr/vacpp/lib
-L$(CHILD_OS)/usr/vac/lib
-L$(CHILD_OS)/usr/lib
-lc -lC -lnsl -lpthread
-F$(CHILD_OS)$(CUSTOM_CONFIG_FILE_LOCATION)
When I attempt to link the code, I get several Undefined symbols:
.setsockopt(int,int,int,const void*,unsigned long), .socket(int,int,int), .connect(int,const sockaddr*,unsigned long), etc.
I have discovered that the symbols missing are from the standard c library, libc.a. When I looked up the symbols with nm for the libc.a that is being picked up, the symbols do indeed exist. I am guessing that there might be a problem with the C++ being unable to read the C objects, but I am truly shooting in the dark.
Sounds like it might be a C++ name mangling problem.
Run nm on the object files to find out the symbols that they are looking for. Then compare the exact names against the libraries.
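For instance (the object file name is illustrative):
nm -u myobject.o | grep socket            # undefined symbols the object needs
nm $(CHILD_OS)/usr/lib/libc.a | grep socket   # what the library actually defines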
Then check the compilation commands, to ensure that the right version of the header files is being included - maybe it's including the parent OS's copy by mistake?
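If mismatched headers turn out to be the cause, the usual pattern is to make sure C declarations are wrapped in extern "C" guards when compiled as C++, along these lines (a generic sketch, not the actual AIX headers):

/* example_c_api.h - generic illustration of guarding C declarations so a
 * C++ compiler such as xlC does not apply C++ name mangling to them. */
#ifdef __cplusplus
extern "C" {
#endif

int example_c_function(int fd, const void *buf, unsigned long len);  /* hypothetical */

#ifdef __cplusplus
}
#endif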
I was eventually able to get around this. It looks like I was using the C++ compiler for .c files. Using the xlc compiler instead of the xlC compiler for C files fixed this problem.