Must link to dependent libraries' dependencies; why?

I have an application - let's call it "P" - that uses custom shared objects: A.so, B.so, and C.so
The A.so library uses another custom library - D.so - as well as SSL and the math library.
The "P" application makes no calls whatsoever to SSL or math routines.
On macOS (with clang), I could build P with -lA -lB -lC and all was good.
I'm migrating everything to Linux (Debian) with GCC.
Now, when I build P, I have to link with -lA -lB -lC -lD -lm -lssl
What am I doing wrong?
I'm using simple makefiles - no autoconfig, no cmake, etc.
Could this be an "install_name_tool" vs. "chrpath" issue when I build the libraries?

This is because of a combination of the following two factors:
On macOS, the standard C library (libSystem.dylib), which the compiler links automatically, includes a lot more functionality than its Linux counterpart, and may provide some (or all) of the functions your program needs. For example, as you can read in the libSystem documentation, it already includes the math library (which is a separate file on Linux and needs to be linked with -lm).
The shared libraries you are linking were compiled for macOS with additional functions already embedded in them, so they don't need to link against other shared libraries. It is not uncommon for a library to depend on different libraries on different operating systems.
On Linux, on the other hand, the standard C library is split into different files: -lc is linked automatically by the compiler, but the math library (-lm) and other parts (-ldl, -lpthread, etc.) are not, and therefore need to be linked manually.
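For example, a program that calls a math routine typically fails to link on Linux without -lm (a minimal sketch, assuming a hypothetical p.c that calls sqrt()):

$ gcc -o p p.c        # on Linux: undefined reference to `sqrt'
$ gcc -o p p.c -lm    # links libm explicitly and succeeds

On macOS both commands would work, because libSystem provides the math functions.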

Solved it. It turns out that clang - regardless of development platform - is a lot more forgiving about the order of parameters and options when linking. With clang you can put your -I, -L, -l, other options, and object files in pretty much whatever order you want. Not so with gcc; it's much more particular. clang made me lazy.
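For the record, GNU ld resolves symbols from left to right, so each library must appear after the object files and libraries that reference it. A sketch of the difference, reusing the link line from the question (P.o is assumed):

$ gcc -o P P.o -lA -lB -lC -lD -lm -lssl    # works: -lD, -lm, -lssl come after -lA, which needs them
$ gcc -lA -lB -lC -lD -lm -lssl -o P P.o    # typically fails with GNU ld: the libraries are scanned before P.o is seen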

Related

What is the difference between shared and dynamic libraries in C?

I don't understand the difference between the two types of libraries, and many websites say they are the same thing, but at school we use two different commands to create them:
dynamic library
$ gcc -shared -o libsample.so lib.c
$ gcc -o main main.c -ldl
to execute:
$ ./main ./libsample.so
shared library
$ gcc -shared -o libsample.so lib.c
$ gcc -o main main.c -L. -lsample
to execute:
$ LD_LIBRARY_PATH=. ./main
Can someone help me in understanding the difference between the two "codes"?
Dynamic-link library (.dll) is the terminology used by Microsoft Windows. Shared object (.so) is the terminology used by Unix and Linux.
Other than that, conceptually they're the same.
Regarding your snippets of commands, I guess the difference (and I'm only guessing here, because you didn't show us the relevant parts) is how the library is loaded. There is "link-time loading", where the library is tied to the executable by the linker¹. And there is "runtime loading", where the program sort of "ingests" the dynamic/shared library.
Runtime loading is done on Windows with the LoadLibrary function (there's an …A and a …W variant), and on Unix/Linux with dlopen (which is made available by libdl, linked in by that -ldl statement).
1: The linker is the program that creates the actual executable file from the intermediary objects created by the various compiler stages.
Dynamic and shared libraries are usually the same thing. But in your case, it looks as if you are doing something special.
In the shared library case, you specify the shared library at compile time. When the app is started, the operating system loads the shared library before the application starts.
In the dynamic library case, the library is not specified at compile time, so it's not loaded by the operating system. Instead, your application contains some code to load the library itself.
The first case is the normal one. The second is a special use, mainly relevant when your application supports extensions such as plug-ins. Dynamic loading is required there because there can be many plug-ins and they are built after your application, so their names are not available at compile time.
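To make the runtime-loading case concrete, here is a minimal, hypothetical main.c matching the question's ./main ./libsample.so invocation; it assumes lib.c defines a function void hello(void):

#include <stdio.h>
#include <dlfcn.h>

int main(int argc, char **argv)
{
    void *handle;
    void (*hello)(void);

    if (argc < 2) {
        fprintf(stderr, "usage: %s <library path>\n", argv[0]);
        return 1;
    }

    /* Runtime loading: open the shared object named on the command line. */
    handle = dlopen(argv[1], RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up the assumed entry point "hello" and call it if present. */
    hello = (void (*)(void))dlsym(handle, "hello");
    if (hello)
        hello();

    dlclose(handle);
    return 0;
}

It is built with gcc -o main main.c -ldl, exactly as in the question; note that no -L. or -lsample is needed, because the library's name only becomes known at runtime.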

How can I compile a library archive with a source code file using gcc?

TL;DR - I need to compile archive.a with test.o to make an executable.
Background - I am trying to call a function in a separate library from a software package I am modifying, but the function (a string parser) is causing a segmentation violation. The failure is definitely happening in the library, and the developer has asked for a test case where the error occurs. Rather than have him try to compile the rather large software package I'm working on, I'd prefer to send him a simple program that calls the appropriate function (hopefully dying at the same place). His library makes use of several system libraries as well (lapack, cblas, etc.), so the linking needs to pull in everything, I think.
I can link to the .o files that are created when I make his library but of course they don't link to the appropriate system libraries.
This seems like it should be straightforward, but it's got me all flummoxed.
The .a extension indicates that it is a static library. In order to link against it, you can use the usual switches at the linking stage:
gcc -o myprog -L<path to your library> main.o ... -larchive
Generally you use -L to add the path where libraries are stored (unless it is the current directory), and you use -l<libname> to specify a library. The library name is given without the lib prefix and without the extension: if the library is named libarchive.a, you would still write -larchive.
If you want to specify the full file name of the library, you can use e.g. -l:libname.a
update
If the library path is /usr/lib/libmylibrary.a you would use
-L/usr/lib -lmylibrary
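Applied to the case in the question, a hedged sketch (the archive name and the exact system libraries are assumptions based on the description): compile the small test program, then link it against the developer's archive followed by the system libraries it depends on:

$ gcc -c test.c
$ gcc -o testcase test.o -L. -larchive -llapack -lcblas -lm

Listing -llapack, -lcblas, etc. after -larchive lets the linker resolve the symbols the archive pulls in from them.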

How to create a shared library (dylib) using automake that JNI/JNA can use?

How do I convince libtool to generate a library identical to what gcc generates automatically?
This works if I do things explicitly:
gcc -o libclique.dylib -shared disc.c phylip.c Slist.c clique.c
cp libclique.dylib [JavaTestDir]/libclique.dylib
But if I do:
make libclique.la (which is the target automake generates)
cp .libs/libclique.1.dylib [JavaTestDir]/libclique.dylib
Java finds the library but can't find the entry point.
I read the "How to create a shared library (.so) in an automake script?" thread and it helped a lot. I got the dylib created with a -shared flag (according to the generated Makefile). But when I try to use it from Java Native Access I get a "symbol not found" error.
Looking at the libclique.la generated by the Makefile, it doesn't seem to contain any critical information; it just looks like link overrides and file shuffling for the convenience of subsequent C/C++ compilation steps (which I don't have), so I would expect libclique.1.dylib to be a functioning dynamic library.
I'm guessing that is where I'm going wrong, but, given that JNA links directly to a dylib and is not compiled with it (per the example in the discussion cited above), it seems all the subsequent compilation steps described in the libtool manual are moot.
Note: I'm testing on a Mac, but I'm going to have to do this on Windows and Linux machines also, which is why I'm trying to put this into Automake.
Note2: I'm using Eclipse for my Java development and, yes, I did import the dylib.
Thanks
You should be building a plugin, and in particular pass
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic
This way you tell libtool you want a dynamically loadable module rather than a shared library (the two are the same thing for ELF, but not for Mach-O).
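In context, a minimal Makefile.am sketch (assuming the four source files from the explicit gcc command above):

lib_LTLIBRARIES = libclique.la
libclique_la_SOURCES = disc.c phylip.c Slist.c clique.c
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic

With -avoid-version, libtool drops the .1. from the module's file name, so the file in .libs can be copied to [JavaTestDir] as in the question.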

How to create a C library (for Ubuntu) out of header and source files?

I am looking for a program to create a C library (i.e. to compile and link the files into one file) based on .h files and .c files that have the following structure (it is a FEC library: www.openfec.org). I am using Ubuntu. I want it to do this without manually specifying each file. I tried WAF, but got the error 'ERROR:root: error: No module named cflags'.
Here is (part of the) the structure:
fec
  lib_advanced
    ldpc_from_file
      of_code_profile.h
      of_ldpc_ff.h
      ...
  lib_common
    linear_binary_codes_utils
      binary_matrix
      it_decoding
      ml_decoding
      statistics
    of_cb.h
    of_debug.h
    of_mem.c
    of_mem.h
    of_openfec_api.c
    of_openfec_api.h
    of_openfec_profile.h
    of_types.h
Thanks!!
You have to use gcc to compile the C files into object files.
Then you have to use ar r to pack the objects into one .a library file, and ranlib to index it.
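A hedged sketch using two of the source files listed in the question (the library name libopenfec.a is an assumption):

$ gcc -c of_mem.c of_openfec_api.c
$ ar r libopenfec.a of_mem.o of_openfec_api.o
$ ranlib libopenfec.a

The resulting libopenfec.a can then be linked into a program with -L. -lopenfec.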
C libraries on *nix systems (including all Linux distros) are created with standard tools: a) a C compiler, b) a linker, c) the ar utility, and d) the ranlib utility.
The C compiler is, 99.9% of the time, the GNU C compiler (gcc), while the linker is ld; ar and ranlib are part of the binutils package on GNU systems (99.9% of Linux systems).
ar and ranlib are used to create static libraries, by putting already-compiled object files (*.o files) into an archive file libsomething.a with ar and indexing the archive with ranlib.
The linker can be invoked through the gcc driver to create dynamic libraries with position-independent code; again, the already-compiled files are gathered into a special file, this time with the .so extension, for "shared object".
Static libraries are used for speed and self-containment; they produce big executables which carry all their dependencies inside the final executable. If a single library out of many changes, you have to rebuild everything to pick up the update.
Dynamic libraries are compiled and linked separately from the executables; they can be used simultaneously by multiple executables, and if one library is updated, you just need to recompile that single library, not every executable that depends on it.
The use of these tools is universal, standard procedure; the details can vary from one *nix system to another, but on Linux you essentially always use the GCC and binutils packages. Extra build utilities in the form of make, cmake, autotools, etc. exist to help with the process, but the basic tools are always used underneath.
Generally, at the most basic level, you write a Makefile, which is interpreted by the make utility. Depending on your targets, it can build one or both kinds of libraries, install the library and executables, uninstall them, clean up, etc.
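Tying that together, a minimal Makefile sketch (object and library names are assumptions based on the listing above; recipe lines must be indented with tabs) that builds both kinds of library:

CC = gcc
CFLAGS = -fPIC -Wall
OBJS = of_mem.o of_openfec_api.o

all: libopenfec.a libopenfec.so

# Static library: archive the objects; ar's 's' flag indexes it like ranlib.
libopenfec.a: $(OBJS)
	ar rcs $@ $(OBJS)

# Shared library: the gcc driver invokes the linker with -shared.
libopenfec.so: $(OBJS)
	$(CC) -shared -o $@ $(OBJS)

clean:
	rm -f $(OBJS) libopenfec.a libopenfec.so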
For more information :
http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
http://www.dwheeler.com/program-library/Program-Library-HOWTO/

Two methods for linking an object using GCC?

I know that I should use the -l option for linking objects using GCC,
that is: gcc -o test test.c -L./ -lmy
But I found that "gcc -o test2 test.c libmy.so" works, too.
When I use readelf on those two executables, I can't find any difference.
Then why do people use the -l option for linking? Does it have any advantage?
Because you may have either a static or a shared version of the library in your library directory, e.g. libmy.a and libmy.so, or both of them. This is more relevant for system libraries: if you link to libraries in your local build tree, you know which version you built, static or shared, but you may not know another system's configuration and mix of libraries.
In addition to that, some platforms may use different suffixes. So it's better to specify the library in a canonical way.
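A hedged sketch of that situation (my.c, libmy.a, and libmy.so are assumed names): when both versions sit in the same directory, -lmy lets the linker choose, the shared one by default, while linker flags pin the choice:

$ gcc -c -fPIC my.c
$ gcc -shared -o libmy.so my.o
$ ar rcs libmy.a my.o
$ gcc -o test test.c -L. -lmy                               # the linker picks libmy.so by default
$ gcc -o test test.c -L. -Wl,-Bstatic -lmy -Wl,-Bdynamic    # forces the static libmy.a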
The main reason is that -lname will search for libname.so (or libname.a, etc.) along the library search path, preferring the shared version by default. You can add directories to the library search path with the -L option. It's a convenience built into the compiler driver program, making it easier to find libraries that have been installed in standard places on the system.
