Can I make gcc ignore static libraries when linking shared libraries?

I've encountered a few cases building projects that use shared libraries or dynamically loaded modules where the module/library depends on another library, but doesn't check that a shared copy is available before trying to link. This causes object files from a static archive (.a file) to get pulled into the resulting .so, and since these object files are non-PIC, the resulting .so file either has TEXTRELs (very bad load performance and memory usage) or fails altogether (on archs like x86_64 that don't support non-PIC shared libraries).
Is there any way I can make the gcc compiler driver refuse to link static library code into shared library output? It seems difficult and complicated by the possible need to link minimal amounts from libgcc.a and the like...

As you know, you can use -static to only link against static libraries, but there doesn't appear to be a good equivalent to only linking against dynamic libraries.
The following answer may be useful...
How to link using GCC without -l nor hardcoding path for a library that does not follow the libNAME.so naming convention?
You can use -l:[libraryname].so to name, explicitly, the dynamic libraries in your library search path that you want to link against. Specifying the .so ending will probably help with your dynamic-library-only case. You will probably have to specify the whole name with the 'lib' prefix instead of just the shortened version.
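For illustration, a rough sketch of what that looks like on the link line (libbar, libfoo, and the directory are made-up names; the -l: form needs GNU ld):

    # bar.o must itself be PIC for the shared output
    gcc -fPIC -c bar.c -o bar.o
    # Name the shared object explicitly so ld never falls back to libfoo.a
    gcc -shared -o libbar.so bar.o -L/path/to/libs -l:libfoo.so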

Related

Does everything that may end up in a shared library always need to be compiled with -fPIC?

I'm building a shared library. I need only one function in it to be public.
The shared library is built from a few object files and several static libraries. The linker complains that everything should be built with -fPIC. All the object files and most static libraries were built without this option.
This makes me ask a number of questions:
Do I have to rebuild every object file and every static library I need for this dynamic lib with -fPIC? Is it the only way?
The linker must be able to relocate object files statically, during linking. Correct? Otherwise if object files used hardcoded constant addresses they could overlap with each other. Shouldn't this mean that the linker has all the information necessary to create the global offset table for each object file and everything else needed to create a shared library?
Should I always use -fPIC for everything in the future as a default option, just in case something may be needed by a dynamic library some day?
I'm working on Linux on x86_64 currently, but I'm interested in answers about any platform.
You did not say which platform you use, but on Linux it is a requirement that the object files that go into your shared library be compiled as position-independent code (PIC). In practice this includes the objects pulled in from static libraries as well.
Yes. See "Load-time relocation of shared libraries" and "Position Independent Code (PIC) in shared libraries".
I only use -fPIC when compiling object files that go into libraries, to avoid unnecessary overhead.
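As a concrete illustration of the first point, a minimal sketch of building everything destined for the shared library with -fPIC (file names are hypothetical):

    gcc -fPIC -c foo.c -o foo.o
    gcc -fPIC -c bar.c -o bar.o
    gcc -shared -o libexample.so foo.o bar.o

    # Objects compiled without -fPIC (including those inside .a archives) are
    # what trigger the "recompile with -fPIC" error from the linker on x86_64.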

Can I force a dynamic library to link to a specific dynamic library dependency?

I'm building a dynamic library, libfoo.so, which depends on libcrypto.so.
Within my autotools Makefile.am file, I have a line like this:
libfoo_la_LIBADD += -L${OPENSSL_DIR}/lib -lcrypto
where $OPENSSL_DIR defaults to /usr but can be overridden by passing --with-openssl-dir=/whatever.
How can I ensure that an executable using libfoo.so uses ${OPENSSL_DIR}/lib/libcrypto.so (only) without the person building or running the executable having to use rpath or fiddle with LD_LIBRARY_PATH?
As things stand, I can build libfoo and pass --with-openssl-dir=/usr/local/openssl-special and it builds fine. But when I run ldd libfoo.so, it just points to the libcrypto.so in /usr/lib.
The only solution I can think of is statically linking libcrypto.a into libfoo.so. Is there any other approach possible?
Details of runtime dynamic linking vary from platform to platform. The Autotools can insulate you from that to an extent, but if you care about the details, which apparently you do, then it probably is not adequate to allow the Autotools to choose for you.
With that said, however, you seem to be ruling out just about all possibilities:
The most reliable way to ensure that at runtime you get the specific implementation you linked against at build time is to link statically. But you say you don't want that.
If you instead use dynamic libraries then you rely on the dynamic linker to associate a library implementation with your executable at run time. In that case, there are two general choices for how you can direct the DL to a specific library implementation:
Via information stored in the program / library binary. You are using terminology that suggests an ELF-based system, and for ELF shared objects, it is the RPATH and / or RUNPATH that convey information about where to look for required libraries. There is no path information associated with individual library requirements; they are identified by SONAME only. But you say you don't want to use RPATH*, and so I suppose not RUNPATH either.
Via static or dynamic configuration of the dynamic linker. This is where LD_LIBRARY_PATH comes in, but you say you don't want to use that. The dynamic linker typically also has a configuration file or files, such as /etc/ld.so.conf. There you can specify library directories to search, and, with a bit of care, the order to search them.
Possibly, then, you can cause your desired library implementation to be linked to your application by updating the dynamic linker's configuration files to cause it to search the wanted path first. This will affect the whole system, however, and it's brittle.
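For example, on distributions where /etc/ld.so.conf pulls in /etc/ld.so.conf.d/*.conf, that route might look roughly like this (using the /usr/local/openssl-special prefix from the question; whether it actually wins over /usr/lib depends on the resulting cache order):

    echo /usr/local/openssl-special/lib | sudo tee /etc/ld.so.conf.d/openssl-special.conf
    sudo ldconfig                    # rebuild the runtime linker's cache
    ldconfig -p | grep libcrypto     # check which libcrypto entries exist, and in what order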
Alternatively, depending on details of the nature of the dependency, you could give your wanted version of libcrypto a distinct SONAME. Effectively, that would make it a different object (e.g. libdjcrypto) as far as the static and dynamic linkers are concerned. But that is risky, because if your library has both direct and indirect dependencies on libcrypto, or if a program using your library depends on libcrypto via another path, then you'll end up at run time (dynamically) linking both libraries, and possibly even using functions from both, depending on the origin of each call.
Note well that the above issue should be a concern for you if you link your library statically, too. If that leaves any indirect dynamic dependencies on libcrypto in your library, or any dynamic dependencies from other sources in programs using your library, then you will end up with multiple versions of libcrypto in use at the same time.
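If you wanted to experiment with the distinct-SONAME idea despite those caveats, one possible way to do the rename without rebuilding OpenSSL is the patchelf tool (not part of the answer above, just one sketch; libdjcrypto is the hypothetical name already used):

    cp ${OPENSSL_DIR}/lib/libcrypto.so libdjcrypto.so.1
    patchelf --set-soname libdjcrypto.so.1 libdjcrypto.so.1
    ln -s libdjcrypto.so.1 libdjcrypto.so
    # libfoo.so is then linked with -L. -ldjcrypto, so its DT_NEEDED entry
    # records libdjcrypto.so.1 instead of the system libcrypto's SONAME.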
Bottom line
For an executable, the best options are either (1) all-static linkage or (2) (for ELF) RPATH / LD_LIBRARY_PATH / RUNPATH, ensuring that all components require the target library via the same SONAME. I tend to like providing a wrapper script that sets LD_LIBRARY_PATH, so that its effect is narrowly scoped (a rough sketch follows after the footnote below).
For a reusable library, "don't do that" is probably the best alternative. The high potential for ending up with programs simultaneously using two different versions of the other library (libcrypto in your case) makes all available options unattractive. Unless, of course, you're ok with multiple library versions being used by the same program, in which case static linkage and RPATH / RUNPATH (but not LD_LIBRARY_PATH) are your best available alternatives.
*Note that at least some versions of libtool have a habit of adding RPATH entries whether you ask for them or not -- something to watch out for. You may need to patch the libtool scripts installed in your project to avoid that.
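The wrapper-script approach mentioned under "Bottom line" might look roughly like this (the binary path is made up; the library directory is the one from the question):

    #!/bin/sh
    # Scope LD_LIBRARY_PATH to this one invocation instead of the whole session
    LD_LIBRARY_PATH=/usr/local/openssl-special/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH} \
        exec /opt/foo/bin/foo-app "$@"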

Does the linker prefer .so files over .a files?

I'm building Julia using a local LLVM build which contains both libLLVM*.so files and corresponding libLLVM*.a files. This was built first with BUILD_SHARED_LIBS=ON, which is responsible for the presence of the libLLVM*.so files.
libjulia.so, the library used by the julia executable, always linked to the libLLVM*.so files, even when I rebuilt LLVM with BUILD_SHARED_LIBS=OFF (the default config). The output of llvm-config --libs $LIB with and without BUILD_SHARED_LIBS=ON didn't vary much, and nothing seemed to hint at llvm-config issuing linking options that would direct the linker to link either the *.so files or the *.a files.
Why is this the case? Is it the default behaviour of the linker to use .so files even when .a files of the same name exist? Or is there a build configuration cache that Julia reuses?
Yes, to fulfil the option -lfoo, ld will by default link libfoo.so in preference to libfoo.a if both are found in the same search directory, and when it finds either one it will look no further.
You can enforce linkage of static libraries only by passing -static to the linkage, but in that case static versions must be found for all libraries - including default system libraries - not just those you explicitly mention.
To selectively link a static library libfoo.a, without specifying -static, you can use the explicit form of the -l option: -l:libfoo.a rather than -lfoo.
llvm-config will emit library options in the -lfoo form whether you build static or shared libraries, since those options will work correctly for either, but you need to understand when using them how the linker behaves. If you don't tell it otherwise, it will link the shared rather than the static library when it faces the choice.
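To make the three behaviours concrete (libfoo and the directory are illustrative names; only the -l option differs between the first two commands):

    gcc -o prog prog.o -L/some/libdir -lfoo          # ld picks libfoo.so if both exist
    gcc -o prog prog.o -L/some/libdir -l:libfoo.a    # forces the static archive
    gcc -static -o prog prog.o -L/some/libdir -lfoo  # all-static, including libc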
Later
Why does ld prefer to link shared libraries over static ones?
AFAIK, it is not on record why the developers of ld made this decision long ago, but the reason is obvious: if dynamic linkage is the default, then executables, by default, will not physically include additional copies of code that can be provided to all executables by a single shared copy from a shared library. Thus executables, by default, will economize their code size, and the aggregate of executables that constitutes your system or mine will be vastly smaller than it would have to be without sharing. Shared libraries and dynamic linkage were invented so that systems need not be bloated with duplicated code.
Dynamic linkage brings with it the complication that an executable linked with shared libraries, when distributed to a system other than the one on which it was built, does not carry its dynamic dependencies with it. It is for that reason that all the approved mechanisms for installing new binaries on systems - package managers - ensure that all of their dynamic dependencies are installed as well.

Why does gcc not support linking a dynamic library into a static binary?

The background is the following: a 3rd-party provider supplies us with libveryfancylib.so, as a 32-bit binary. The software that uses the library also has quite a load of other Linux library dependencies (like Qt), but those are open source, so static linking is no problem for them. The target platform is 64-bit and runs Debian 7.
We can ship the program as a binary plus dynamic libraries, no problem, but I would rather see a single static binary with no dependencies.
So my question is: why can I not link the dynamic library into a static binary? I mean, what bit of information is missing, or is it just a feature that is rarely needed -> not implemented?
We can ship the program as a binary plus dynamic libraries, no problem, but I would rather see a single static binary with no dependencies.
What is the problem you are trying to solve?
You can follow the model most commercial applications on Linux do: put your executable, shared libraries and other resources in one directory (possibly with subdirectories). When linking your executable against those shared libraries, pass -Wl,-rpath,'$ORIGIN' (in make, use -Wl,-rpath,'$$ORIGIN') to the linker, so that when starting your application the runtime linker looks for the required shared libraries in the same directory as the executable.
Then archive that directory and give it to your users.
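A rough sketch of that layout and link command, with made-up names for the application and the bundled library:

    # Bundle layout shipped to users:
    #   myapp/bin/myapp
    #   myapp/bin/libdep.so
    gcc -o myapp/bin/myapp main.o -Lmyapp/bin -ldep -Wl,-rpath,'$ORIGIN'
    # At startup the runtime linker expands $ORIGIN to the directory holding the
    # executable, so libdep.so is found there without LD_LIBRARY_PATH or ldconfig.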
There are programs for MS Windows that can do so, e.g. DLL to Lib and DLL to Static Lib.
In the open source world, there isn't really much of an incentive to develop such a tool as you can always recompile from source (but of course it's possible that someone somewhere did it anyway).
It's because dynamic libraries and static libraries are two different things. A static library is just an archive of object files (much like a zip archive). A dynamic library is more like an executable program.
So you can't really link anything into a static library, you can only add more object files.
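A quick illustration of that last point (file names are made up):

    ar rcs libhello.a hello.o goodbye.o   # a static library is just an ar archive
    ar t libhello.a                       # lists the members: hello.o goodbye.o
    # The only way to grow it is to add more object files; you cannot meaningfully
    # "convert" an already-linked .so back into archive members.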

Difference between "dynamically loading a library file" and "specifying .so path in Makefile"?

I recently came across some code which loads a .so file with dlopen() and works with dlsym() etc. I understand that dlopen() would load the dynamic library file. What is the difference between dynamically loading a library file and specifying the .so path in the Makefile?
Another question is that if I want to dynamically load a library file, do I need to compile it with -rdynamic option?
Aren't both these compiled with -fPIC flag?
Dynamically loading a library file is frequently used in implementing software plugins.
Unlike specifying the .so path in the Makefile or static linking, dynamic loading allows a program to start up in the absence of these libraries, to discover available libraries, and to potentially gain additional functionality.
If you link against the .so in your Makefile (i.e., at build time), then you won't be able to build the application unless it is present. That has the advantage of no nasty surprises at run time.
When creating a shared object, assuming you are using gcc, -fpic only means the code can be relocated at run time; you need -shared as well. I don't know the -rdynamic option, but compilers differ.
Loading the module at run time allows the module load to be optional. For example, say you have a huge application with 300 modules, each representing different functionality. Does it make sense to map all 300 when a user might only use 10% of them? (code is loaded on demand anyway) It can also be used to load different versions of a library at runtime, giving flexibility. The downside is that you can end up loading incompatible versions.
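For reference, a minimal sketch of the dlopen()/dlsym() pattern the question describes (the plugin name and symbol are hypothetical; build with gcc load_plugin.c -ldl):

    /* load_plugin.c -- minimal dlopen/dlsym sketch */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the module at run time; fail gracefully if it is absent. */
        void *handle = dlopen("./libplugin.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Look up a symbol by name and cast it to the expected function type. */
        int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
        if (!plugin_init) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        printf("plugin_init returned %d\n", plugin_init());
        dlclose(handle);
        return 0;
    }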
