autoconf: best practice for including `libftdi` in `configure.ac`

Hello everybody out there using GNU autoconf,
What is the best practice for finding libftdi and including it with autoconf when compiling a C program that uses it?
The following snippet from a configure.ac file works, but I'm not sure whether it is best practice:
PKG_CHECK_MODULES([LIBFTDI], [libftdi])
#AC_CHECK_LIB([ftdi],[ftdi],[ftdi]) # Why doesn't this work?
#AC_SEARCH_LIBS([ftdi],[ftdi],[ftdi]) # Why doesn't this work?
#AC_CHECK_HEADERS([ftdi.h],[],[echo "error: missing libftdi header files" && exit 1])
LIBS="-lftdi $LIBS $LDFLAGS" # works, but is this the best way?
I'm building the program with autoconf (GNU Autoconf) 2.69 and compiling it with gcc version 7.5.0 on Ubuntu 18.04.

Why your other attempts failed
Library tests
Your commented-out AC_CHECK_LIB and AC_SEARCH_LIBS examples do not demonstrate correct usage. Usage details are presented in the manual, but to sum up:
the arguments to AC_CHECK_LIB are
The simple name of the library, i.e. ftdi
The name of a characteristic function provided by the library
(optional) Code for configure to execute in the event that the library is found. Default is to prepend a link option to $LIBS and define a HAVE_LIB* preprocessor macro.
(optional) Code for configure to execute in the event that the library is not found
(optional) Additional library link options (not already in $LIBS) that are needed to link a program that uses the library being checked
the arguments to AC_SEARCH_LIBS are
The name of the function to search for
A list of one or more library names to search
(optional) Code for configure to execute in the event that the library is found, in addition to prepending a link option to $LIBS (but not defining any preprocessor macro)
(optional) Code for configure to execute in the event that the library is not found
(optional) Additional library link options (not already in $LIBS) that are needed to link a program that uses the library being checked
Neither your AC_CHECK_LIB example nor your AC_SEARCH_LIBS example properly designates an existing libftdi function to check for. Moreover, the third argument in each case is unlikely to be valid shell / Autoconf code, so in the event that the library were found, configure would probably crash. Better might be:
AC_CHECK_LIB([ftdi], [ftdi_init])
or
AC_SEARCH_LIBS([ftdi_init], [ftdi])
Depending on what exactly you want to do, on details of libftdi, and on the configure.ac context, you might need to provide appropriate values for some or all of the optional arguments.
The main reasons for a library check to fail despite the library in fact being installed are
the library being installed in a location that is not in the default search path
the library having link dependencies on other libraries, and those have not (yet) been accounted for at the time of the check
The former is analogous to header installation location considerations discussed in the next section. The latter can be addressed by adding explicit extra link flags via the fifth argument to AC_CHECK_LIB or AC_SEARCH_LIBS, but is more often addressed semi-automatically by performing AC_CHECK_LIB or AC_SEARCH_LIBS tests in reverse prerequisite order, so that the value of LIBS is built up with an appropriately-ordered list of link flags, ready at each point to support the next check, and ultimately appropriate for supporting the overall compilation.
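For illustration, here is a minimal sketch of that ordering, assuming the classic libusb-0.1-based libftdi (so libusb is the prerequisite; the function names are the usual C entry points of those libraries):
# Check the prerequisite first, so that -lusb is already in $LIBS
# when the libftdi test program is linked.
AC_SEARCH_LIBS([usb_init], [usb],
               [], [AC_MSG_ERROR([libusb is required but was not found])])
AC_SEARCH_LIBS([ftdi_init], [ftdi],
               [], [AC_MSG_ERROR([libftdi was not found])])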
Note also that libftdi provides both C and C++ interfaces. In ftdi_init, I have been careful to choose a function that has C linkage, so as to avoid C++ name-mangling issues (see How to test a C++ library usability in configure.in?). You may also need to ensure that the tests are run with the C compiler (see Language Choice in the Autoconf manual).
Header test
Your AC_CHECK_HEADERS usage, on the other hand, does not appear to be inherently wrong. If the resulting configure script does not detect ftdi.h, then that implies that the header isn't in the compiler's default header search path. That might happen, for example, if it is installed in a subdirectory, such as /usr/include/ftdi. This would be a matter of both ftdi and system installation convention.
If it is ftdi convention for the headers to be installed in a subdirectory, then your source files should specify that in their #include directives:
#include <ftdi/ftdi.h>
If your source files in fact do that, then that should also be what you tell Autoconf to look for:
AC_CHECK_HEADERS([ftdi/ftdi.h])
Regardless of whether a subdirectory prefix is expected or used, it is good practice to accommodate the possibility of headers and / or libraries being installed in a non-standard location. Although one can always do that by specifying appropriate flags in the CPPFLAGS variable in configure's environment, I prefer and recommend using AC_ARG_WITH to designate a --with argument or AC_ARG_VAR to designate an environment variable that configure will consult for the purpose. For example,
AC_ARG_WITH([ftdi-includedir],
    [AS_HELP_STRING([--with-ftdi-includedir=dir],
        [specifies a custom directory for the libftdi header files])],
    [CPPFLAGS="$CPPFLAGS -I$withval"]
)
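A builder could then run, for example (the path here is hypothetical):
./configure --with-ftdi-includedir=/opt/libftdi/include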
Exposing an argument or environment variable for the specific purpose highlights (in the output of ./configure --help) the fact that this is a knob that the user might need to adjust. Additionally, receiving the include directory via a for-purpose vector is sometimes useful for limiting in which compilations the designated include directory is made available.
On PKG_CHECK_MODULES
The Autotools objective and philosophy is to support the widest possible array of build machines and environments by minimizing external dependencies and writing the most portable configuration and build code possible. To this end, the Autotools are designed so that they themselves are not required to build projects on supported systems. Rather, Autoconf produces configure as a stand-alone, highly portable shell script, and Automake produces configurable templates for highly portable makefiles. These are intended to be included in source packages, to be used as-is on each build system. Making your configure script dependent on pkg-config being installed on every system where your project is to be built, as using PKG_CHECK_MODULES does, conflicts with those objectives.
How significant an issue that may be is a subject of some dispute. Where it is available, pkg-config can be very useful, especially for components that require complex build flags. PKG_CHECK_MODULES is thus very convenient for both package maintainer and package builder on those systems where it is present or readily available, for those components that provide pkg-config metadata.
But pkg-config is not necessarily available for every system targeted by your software. It cannot reasonably be assumed present or obtainable even on systems for which it is nominally available. And even on systems that have it, pkg-config metadata for the libraries of interest are not necessarily installed with the libraries.
As such, I urge you to avoid using PKG_CHECK_MODULES in your Autoconf projects. You need to know how to do without it in any case, because it is not an option for some libraries. Where appropriate, provide hooks by which the builder can supply appropriate flags, and let them choose whether to use pkg-config in conjunction with those. Decoupling configure from pkg-config in this way makes a bit more work for you, and in some cases for builders, but it is more flexible.
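As a concrete illustration, here is a minimal sketch of such a hook using AC_ARG_VAR (the variable names are my own, not anything libftdi defines):
AC_ARG_VAR([FTDI_CFLAGS], [C compiler flags for libftdi])
AC_ARG_VAR([FTDI_LIBS], [linker options for libftdi])
# Builders without pkg-config can set the variables directly; builders
# who have it can still use it themselves:
#   ./configure FTDI_CFLAGS="$(pkg-config --cflags libftdi)" \
#               FTDI_LIBS="$(pkg-config --libs libftdi)"
AS_IF([test -z "$FTDI_LIBS"], [FTDI_LIBS=-lftdi])
CFLAGS="$CFLAGS $FTDI_CFLAGS"
LIBS="$FTDI_LIBS $LIBS"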
Your PKG_CHECK_MODULES example
Your example invocation appears ok in itself, supposing that "libftdi" is the appropriate pkg-config module name (you have to know the appropriate name):
PKG_CHECK_MODULES([LIBFTDI], [libftdi])
But although that may yield a configure script that runs successfully, it does not, in itself, do much for you. In particular, it verifies that pkg-config metadata for the named module is present, but
it does not verify the presence or test the use of the library or header
although it does set some output variables containing compile and link flags, you do not appear to be using those
specifically, if you're going to rely on pkg-config, then you should use the link flags it reports rather than hardcoding -lftdi alone.
Furthermore, it is more typical to use the output variables created by PKG_CHECK_MODULES in your makefile than to use them to update $LIBS or other general variables inside configure. If you do use them in configure, however, then it is essential to understand that LIBS and LDFLAGS have different roles with little overlap. It is generally inappropriate, not to mention unnecessary, to include the LDFLAGS in LIBS. If you want to update LIBS inside configure, then this would be the way to do it:
LIBS="$LIBFTDI_LIBS $LIBS"
And if you're going to do that, then you probably should do the same with the compiler flags reported by pkg-config, if any:
CFLAGS="$CFLAGS $LIBFTDI_CFLAGS"

You can check it like this:
AC_CHECK_LIB([ftdi], [ftdi_init], [],
             [echo "error: missing libftdi library" && exit 1])
The second argument to AC_CHECK_LIB is a function exported by the library, and in this case the init call works well. With the default (empty) action-if-found, configure prepends -lftdi to LIBS for you; note that library options such as -lftdi belong in LIBS, not LDFLAGS.

If libftdi uses pkg-config (and it appears to, given that you said that snippet works), then PKG_CHECK_MODULES is what you want. The default action-if-not-found is to error out, so if this is a required dependency, it does exactly what you want.
But you shouldn't use LIBS that way: first, because LDFLAGS does not have the same semantics as LIBS; second, because the pkg-config file might have provided further search paths that are required.
Instead you should add to your Makefile.am the flags as you need them:
mytarget_CFLAGS = $(LIBFTDI_CFLAGS)
mytarget_LDADD = $(LIBFTDI_LIBS)
You can refer to my Autotools Mythbuster — Dependency Discovery for further details on how to use pkg-config for dependencies. It also shows how you would generally use AC_CHECK_LIB or AC_SEARCH_LIBS, but seriously, if pkg-config works, stick to that, as it's more reliable and consistent.

Related

Purpose of default startfiles and linker scripts in arm-none-eabi toolchain?

The arm-none-eabi toolchain is always used with the -nostartfiles flag to exclude the default CRTs, and with a custom linker script for each microcontroller. So what is the purpose of the default ones (under /usr/lib/arm-none-eabi/)? When are they used?
I am not aware of any circumstances in which you would use the default linker script unedited, although it is a useful template for setting up your own version.
The default start files contain initialization routines that you may or may not need.
If you are compiling C only and you do not use any C library functions which are impure, then you can get away with -nostartfiles; however, if you are using C++ or library functions which keep some state, the start files may perform necessary initialization. You can read the source of your C library for details.
If you write your linker script by hand, then there is really no need to specify -nostartfiles, because the linker script gives you more fine-grained control over what is included and excluded. Having the start files available as linker input is harmless if you use the --gc-sections linker option to remove everything that is not referenced.
For example, many embedded targets are intended to boot and run forever and never shut down, finalize, or exit. By excluding .fini and related sections in your linker script, rather than using the blanket -nostartfiles, you do not risk dropping something that you need.
The best way to look at this is to compile a real project that you are working on with and without the option and see if there is any difference in the map file or disassembly.
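A minimal sketch of that comparison (the file names are hypothetical):
# Link the same objects twice, once with the default start files and
# once without, and compare the resulting map files.
arm-none-eabi-gcc -T custom.ld -Wl,-Map=with_crt.map -o with_crt.elf main.o
arm-none-eabi-gcc -nostartfiles -T custom.ld -Wl,-Map=without_crt.map -o without_crt.elf main.o
diff with_crt.map without_crt.map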

Can I force a dynamic library to link to a specific dynamic library dependency?

I'm building a dynamic library, libfoo.so, which depends on libcrypto.so.
Within my autotools Makefile.am file, I have a line like this:
libfoo_la_LIBADD += -L${OPENSSL_DIR}/lib -lcrypto
where $OPENSSL_DIR defaults to /usr but can be overridden by passing --with-openssl-dir=/whatever.
How can I ensure that an executable using libfoo.so uses ${OPENSSL_DIR}/lib/libcrypto.so (only) without the person building or running the executable having to use rpath or fiddle with LD_LIBRARY_PATH?
As things stand, I can build libfoo and pass --with-openssl-dir=/usr/local/openssl-special and it builds fine. But when I run ldd libfoo.so, it just points to the libcrypto.so in /usr/lib.
The only solution I can think of is statically linking libcrypto.a into libfoo.so. Is there any other approach possible?
Details of runtime dynamic linking vary from platform to platform. The Autotools can insulate you from that to an extent, but if you care about the details, which apparently you do, then it probably is not adequate to allow the Autotools to choose for you.
With that said, however, you seem to be ruling out just about all possibilities:
The most reliable way to ensure that at runtime you get the specific implementation you linked against at build time is to link statically. But you say you don't want that.
If you instead use dynamic libraries then you rely on the dynamic linker to associate a library implementation with your executable at run time. In that case, there are two general choices for how you can direct the DL to a specific library implementation:
Via information stored in the program / library binary. You are using terminology that suggests an ELF-based system, and for ELF shared objects, it is the RPATH and / or RUNPATH that convey information about where to look for required libraries. There is no path information associated with individual library requirements; they are identified by SONAME only. But you say you don't want to use RPATH*, and so I suppose not RUNPATH either.
Via static or dynamic configuration of the dynamic linker. This is where LD_LIBRARY_PATH comes in, but you say you don't want to use that. The dynamic linker typically also has a configuration file or files, such as /etc/ld.so.conf. There you can specify library directories to search, and, with a bit of care, the order to search them.
Possibly, then, you can cause your desired library implementation to be linked to your application by updating the dynamic linker's configuration files to cause it to search the wanted path first. This will affect the whole system, however, and it's brittle.
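On a typical Linux system, that configuration might look like this (the drop-in file name is hypothetical):
# /etc/ld.so.conf.d/openssl-special.conf
/usr/local/openssl-special/lib
After adding the file, run ldconfig (as root) to rebuild the dynamic linker's cache so the new directory takes effect.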
Alternatively, depending on details of the nature of the dependency, you could give your wanted version of libcrypto a distinct SONAME. Effectively, that would make it a different object (e.g. libdjcrypto) as far as the static and dynamic linkers are concerned. But that is risky, because if your library has both direct and indirect dependencies on libcrypto, or if a program using your library depends on libcrypto via another path, then you'll end up at run time (dynamically) linking both libraries, and possibly even using functions from both, depending on the origin of each call.
Note well that the above issue should be a concern for you if you link your library statically, too. If that leaves any indirect dynamic dependencies on libcrypto in your library, or any dynamic dependencies from other sources in programs using your library, then you will end up with multiple versions of libcrypto in use at the same time.
Bottom line
For an executable, the best options are either (1) all-static linkage or (2) (for ELF) RPATH / LD_LIBRARY_PATH / RUNPATH, ensuring that all components require the target library via the same SONAME. I tend to like providing a wrapper script that sets LD_LIBRARY_PATH, so that its effect is narrowly scoped.
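A minimal sketch of such a wrapper (all paths here are hypothetical):
#!/bin/sh
# Prepend the custom OpenSSL lib directory so the dynamic linker
# resolves libcrypto.so from there first, then run the real binary.
LD_LIBRARY_PATH="/usr/local/openssl-special/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec /usr/local/libexec/myapp/myapp.real "$@"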
For a reusable library, "don't do that" is probably the best alternative. The high potential for ending up with programs simultaneously using two different versions of the other library (libcrypto in your case) makes all available options unattractive. Unless, of course, you're ok with multiple library versions being used by the same program, in which case static linkage and RPATH / RUNPATH (but not LD_LIBRARY_PATH) are your best available alternatives.
*Note that at least some versions of libtool have a habit of adding RPATH entries whether you ask for them or not -- something to watch out for. You may need to patch the libtool scripts installed in your project to avoid that.

How to Require an autotools project / get the cflags for an autotools package?

I want to require a C library which was built with the autotools.
To be honest, I have little to no idea how they work :/
(The library which I want to require is "https://github.com/p4lang/PI")
I have executed the ./configure etc. scripts and successfully installed it.
When I search my /usr, I find the library under /usr/local/lib/libpi.a, and analogously the header files under /usr/local/include/PI.
I build my project with cmake and would like to have a cross platform solution with it.
However I would be satisfied to use the pkg-config command.
Does anybody know the "correct" / "recommended" way to get the cflags, or at least a variant in which I do not have to hard-code the paths?
The involvement of the Autotools ends at the point where the built artifacts are installed on the system. Using those does not go through the Autotools.* This applies just as much when the installed artifacts are libraries and headers as when they are executables. There's nothing special or different about using Autotools-built programs or libraries.
I build my project with cmake and would like to have a cross platform
solution with it. However I would be satisfied to use the pkg-config
command.
Just like projects served by any other build system, Autotools projects can build and install pkg-config configuration files, or CMake macros, or whatever other bits and pieces they might think appropriate to assist users, but this is project-specific. The Autotools do not create such additional pieces of their own accord, but some Autotools-based projects add them. And some don't, just like some CMake projects don't, and some projects with hand-rolled build systems don't, etc.
Does anybody know the "correct" / "recommended" way to get the cflags, or at least a variant in which I do not have to hard-code the paths?
Note that typically, for a library whose name you know, the only flags you might need are those specifying the location of the library headers and / or one specifying the location of the libraries themselves. Even these are unnecessary if the relevant pieces are installed in places that the compiler looks by default. Also these are generally not considered CFLAGS, per se. Terminology varies a bit, but the former is a preprocessor flag, and the latter is a link flag.
Since you're using CMake, you could consider writing CMake code to search likely directories for the wanted libraries and headers, and to set the results in suitable variables for other code to use. That's more of an Autotools-style approach, though. Alternatively, you could define a user-set variable by which the wanted location(s) can be specified to CMake. This assumes that the third-party project is not already providing something useful for the purpose. Or, licensing permitting, you could package the third-party library together with your own, so that you are in control of where it gets installed.
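A minimal CMake sketch of the user-set-variable approach (the cache variable, target name, and header file name are assumptions; adjust them to what PI actually installs):
# Let the builder override the install prefix of the PI library,
# e.g.  cmake -DPI_ROOT=/opt/pi ..
set(PI_ROOT "/usr/local" CACHE PATH "Install prefix of the PI library")
find_library(PI_LIBRARY NAMES pi HINTS "${PI_ROOT}/lib")
find_path(PI_INCLUDE_DIR NAMES PI/pi.h HINTS "${PI_ROOT}/include")
target_include_directories(myapp PRIVATE "${PI_INCLUDE_DIR}")
target_link_libraries(myapp PRIVATE "${PI_LIBRARY}")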
In the general case, however, this is simply something that people have to deal with themselves when they build software. Make life easier for them by providing good documentation of what your project's dependencies are, and of how to inform the build system of their locations, and make useful provisions for feeding that information into the build system.
*An exception could be asserted for use of libtool archives, which an Autotools project might install alongside regular libraries -- if one wanted to use those, they would directly or indirectly go through libtool. But in practice, that's only going to happen in another Autotools project.

why can I use stdio.h without a corresponding stdio.c [duplicate]

This may seem a little stupid :) But it's been bothering me for a while. When I include header files written by others in my C++/C program, how does the compiler know where the implementations of the class member functions declared in those header files are?
Say I want to write some program which takes advantage of the OpenCV library. Normally I would want to use:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
However, these are just header files which, as far as I can tell, only declare functions without their implementations. Then how does the compiler know where to find the implementation? Especially when I want to build a .so file.
There is a similar post. Basically it said that third-party libraries, especially commercial products, don't release source code, so they ship the lib file with the header. However, it didn't make clear how the compiler knows where to find the lib file. In addition, the answer in that post mentioned that if I want to compile the code on my own, I would need the source code of the implementation of those header files. Does that mean I cannot build a .so file without the source of the implementation?
In general, the implementation is distributed in the form of pre-compiled libraries. You need to tell the compiler where they are located.
For example, for gcc, quoting the online manual
-llibrary
-l library
Search the library named library when linking. [...]
and,
-Ldir
Add directory dir to the list of directories to be searched for -l.
Note: you don't need to explicitly specify the standard libraries; they are automatically linked. Rather, if you don't want them to be linked with your binary, you need to inform the compiler by passing the -nostdlib option.
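For example, for a hypothetical libfoo installed under /opt/foo:
gcc -I/opt/foo/include -c main.c          # compile: find foo.h
gcc main.o -L/opt/foo/lib -lfoo -o main   # link: find libfoo.so / libfoo.a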
The exact answer is platform specific, but in general I'd say that some libraries are in fact header-only, and others include the implementation of the library's methods in binary object files. I believe OpenCV belongs to the second kind, i.e. provides the implementation in object files, for either dynamic or static linking, and your program links against them. If your build works, then it is already configured to link against those libraries. At this point the details become very much platform and build-system specific.
Note that for common platforms like Windows, Mac and Linux you seldom need to build popular libraries like OpenCV yourself. You mentioned .so files, which implies dynamic linking on Linux. This library is open-source so in theory you could build it yourself, but in practice I'd much rather use my distribution's package installation tool (e.g. apt-get or yum) to install opencv-dev (or something similar) from my distribution's repository.
As the others already explained, you need to tell your compiler where to look for the files.
This implies that you should know which path to specify for your compiler.
Some components provide a mechanism where you don't need to know the exact path but can automatically retrieve it from your system.
For example if you want to compile using GTK+3 you need to specify these flags for your compiler:
CFLAGS:= -I./ `pkg-config --cflags gtk+-3.0`
LIBS:= -lm `pkg-config --libs gtk+-3.0`
This will automatically result in the required include and library path flags for GCC.
The compiler toolchain contains at least two major tools: the compiler and the link editor (it is very common to call the whole chain "the compiler", but strictly speaking that is wrong).
The compiler is in charge of producing object code from the available source code. In that phase the compiler knows where to locate the standard headers, and can be told to use non-standard dirs to locate others. For example, gcc uses -I to let you specify additional dirs that may contain headers.
The link editor is in charge of producing executable files (its basic common usage) from object code. To produce an executable it needs to find an implementation of everything declared at compile time for which you didn't provide source code. These can be other object files, object code in libraries, etc. The link editor knows where the standard libraries are located and lets you specify non-standard dirs as well. For example, you can tell the gcc toolchain to use alternate dirs that may contain libraries with -L. You may be aware that link editing is now usually a two-phase process: location-and-name resolution at link time and real link editing at run time (dynamic libraries are very common).
Basically a library is just a collection of object code. Consult the internet to see how you can easily build libraries either from source code or from object code.

CMake: how to produce binaries "as static as possible"

I would like to have control over the type of the libraries that get found/linked with my binaries in CMake. The final goal is to generate binaries "as static as possible", that is, to link statically against every library that has a static version available. This is important, as it would enable portability of the binaries across different systems during testing.
ATM this seems to be quite difficult to achieve, as the FindXXX.cmake packages, or more precisely the find_library command, always pick up the dynamic libraries whenever both static and dynamic versions are available.
Tips on how to implement this functionality - preferably in an elegant way - would be very welcome!
I did some investigation and although I could not find a satisfying solution to the problem, I did find a half-solution.
The problem of static builds boils down to 3 things:
Building and linking the project's internal libraries.
Pretty simple, one just has to flip the BUILD_SHARED_LIBS switch OFF.
Finding static versions of external libraries.
The only way seems to be setting CMAKE_FIND_LIBRARY_SUFFIXES to contain the desired file suffix(es) (it's a priority list).
This solution is quite a "dirty" one and very much against CMake's cross-platform aspirations. IMHO this should be handled behind the scenes by CMake, but as far as I understood, because of the ".lib" confusion on Windows, it seems that the CMake developers prefer the current implementation.
Linking statically against system libraries.
CMake provides an option LINK_SEARCH_END_STATIC which based on the documentation: "End a link line such that static system libraries are used."
One would think this is it, the problem is solved. However, it seems that the current implementation is not up to the task. If the option is turned on, CMake generates an implicit linker call with an argument list that ends with the options passed to the linker, including -Wl,-Bstatic. However, this is not enough: only instructing the linker to link statically results in an error, in my case /usr/bin/ld: cannot find -lgcc_s. What is missing is telling gcc as well that we need static linking, through the -static argument, which CMake does not add to the linker call. I think this is a bug, but I haven't managed to get a confirmation from the developers yet.
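One possible workaround is a sketch like the following (not a general solution: it asks the gcc driver for fully static linking, and the link will fail if any required library has no static version):
# Pass -static to the compiler driver on executable link lines.
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static")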
Finally, I think all this could and should be done by CMake behind the scenes; after all, it's not so complicated, except that it's impossible on Windows - if that counts as complicated...
A well-made FindXXX.cmake file will include something for this. If you look at FindBoost.cmake, you can set the Boost_USE_STATIC_LIBS variable to control whether it finds static or shared libraries. Unfortunately, a majority of packages do not implement this.
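Usage is a one-liner before the find_package call, e.g.:
set(Boost_USE_STATIC_LIBS ON)
find_package(Boost REQUIRED COMPONENTS filesystem)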
If a module uses the find_library command (most do), then you can change CMake's behavior through CMAKE_FIND_LIBRARY_SUFFIXES variable. Here's the relevant CMake code from FindBoost.cmake to use this:
IF(WIN32)
SET(CMAKE_FIND_LIBRARY_SUFFIXES .lib .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ELSE(WIN32)
SET(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ENDIF(WIN32)
You can either put this before calling find_package, or, better, you can modify the .cmake files themselves and contribute back to the community.
For the .cmake files I use in my project, I keep all of them in their own folder within source control. I did this because I found that having the correct .cmake file for some libraries was inconsistent and keeping my own copy allowed me to make modifications and ensure that everyone who checked out the code would have the same build system files.
