‘Feature detecting’ required linker flags in Make (libiconv on Mac)

Ubuntu has iconv built into its standard C library and does not require the flag in LDFLAGS. OS X does not have it built in and requires -liconv to be set.
My current approach is to use ifeq in my Makefile to conditionally set LDFLAGS += -liconv when on OS X (sketched below).
I am wondering if there is a better approach? I am heavily influenced by the feature-detection mindset of web development and hope I can use a similar approach to detect whether the flag is required on the current system or not.
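For reference, a minimal sketch of that kind of OS switch (keying the ifeq on uname output is one common way to detect OS X; the exact detection used is an assumption):

UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Darwin)
  LDFLAGS += -liconv
endif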

For manually written makefiles, individually setting the contents of LDFLAGS and friends is an acceptable way of doing it. This may be simpler if only a couple of features are needed.
If you want to generically detect the feature one route to go would be to use a makefile generator like CMake or Automake.
The final option would be to perform manually the runtime checks that Automake does: have a sample file that you compile and link with one of the candidate options, and see if it works; if it doesn't, try the next, and so on. This way you are testing which flags are required for each feature. Automake does this once, when the project is ./configure'd, but you could do it every time make is run, or whenever suits your build setup.
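For the -liconv case, a minimal sketch of that compile-and-try approach in GNU Make might look like this (probe file and variable names are illustrative; assumes a POSIX shell):

# Write a tiny probe that calls iconv_open, try linking it plain and then
# with -liconv, and keep whichever flag (if any) made the link succeed.
# (The \# escape is needed because # would otherwise start a make comment.)
ICONV_LDFLAG := $(shell \
  printf '\#include <iconv.h>\nint main(void){iconv_open("a","b");return 0;}\n' > .iconv-test.c; \
  if $(CC) .iconv-test.c -o .iconv-test 2>/dev/null; then :; \
  elif $(CC) .iconv-test.c -liconv -o .iconv-test 2>/dev/null; then echo -liconv; fi; \
  rm -f .iconv-test.c .iconv-test)
LDFLAGS += $(ICONV_LDFLAG)

Because the probe runs inside $(shell ...), it executes on every make invocation; caching the result in a file is a natural refinement.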

Related

autoconf: best practice for including `libftdi` in `configure.ac`

Hello everybody out there using GNU autoconf,
What is the best practice for finding libftdi and including it with autoconf when compiling a C program that uses it?
The following snippet from a configure.ac file works, but I'm not sure whether it is best practice:
PKG_CHECK_MODULES([LIBFTDI], [libftdi])
#AC_CHECK_LIB([ftdi],[ftdi],[ftdi]) # Why doesn't this work?
#AC_SEARCH_LIBS([ftdi],[ftdi],[ftdi]) # Why doesn't this work?
#AC_CHECK_HEADERS([ftdi.h],[],[echo "error: missing libftdi header files" && exit 1])
LIBS="-lftdi $LIBS $LDFLAGS" # works, but is this the best way?
I'm building the program with autoconf (GNU Autoconf) 2.69 and compiling it with gcc version 7.5.0 on Ubuntu 18.04.
Why your other attempts failed
Library tests
Your commented-out AC_CHECK_LIB and AC_SEARCH_LIBS examples do not demonstrate correct usage. Usage details are presented in the manual, but to sum up:
the arguments to AC_CHECK_LIB are
The simple name of the library, i.e. ftdi
The name of a characteristic function provided by the library
(optional) Code for configure to execute in the event that the library is found. Default is to prepend a link option to $LIBS and define a HAVE_LIB* preprocessor macro.
(optional) Code for configure to execute in the event that the library is not found
(optional) Additional library link options (not already in $LIBS) that are needed to link a program that uses the library being checked
the arguments to AC_SEARCH_LIBS are
The name of the function to search for
A list of one or more library names to search
(optional) Code for configure to execute in the event that the library is found, in addition to prepending a link option to $LIBS (but not defining any preprocessor macro)
(optional) Code for configure to execute in the event that the library is not found
(optional) Additional library link options (not already in $LIBS) that are needed to link a program that uses the library being checked
Neither your AC_CHECK_LIB example nor your AC_SEARCH_LIBS example properly designates an existing libftdi function to check for. Moreover, the third argument in each case is unlikely to be valid shell / Autoconf code, so in the event that the library were found, configure would probably crash. Better might be:
AC_CHECK_LIB([ftdi], [ftdi_init])
or
AC_SEARCH_LIBS([ftdi_init], [ftdi])
Depending on what exactly you want to do, on details of libftdi, and on the configure.ac context, you might need to provide appropriate values for some or all of the optional arguments.
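For illustration, a fuller invocation using the optional arguments might look like this (the -lusb fifth argument is an assumption: the classic libftdi links against libusb-0.1, so it may be needed on some systems):

AC_CHECK_LIB([ftdi], [ftdi_init],
    [AC_DEFINE([HAVE_LIBFTDI], [1], [Define to 1 if libftdi is available.])
     LIBS="-lftdi $LIBS"],
    [AC_MSG_ERROR([libftdi is required but was not found])],
    [-lusb])

Note that supplying an explicit action-if-found suppresses the default behavior, which is why this sketch prepends -lftdi to LIBS itself.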
The main reasons for a library check to fail despite the library in fact being installed are
the library being installed in a location that is not in the default search path
the library having link dependencies on other libraries that have not (yet) been accounted for at the time of the check
The former is analogous to header installation location considerations discussed in the next section. The latter can be addressed by adding explicit extra link flags via the fifth argument to AC_CHECK_LIB or AC_SEARCH_LIBS, but is more often addressed semi-automatically by performing AC_CHECK_LIB or AC_SEARCH_LIBS tests in reverse prerequisite order, so that the value of LIBS is built up with an appropriately-ordered list of link flags, ready at each point to support the next check, and ultimately appropriate for supporting the overall compilation.
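Concretely, assuming libftdi's only link dependency is libusb (true of the classic libftdi, but an assumption about your setup), reverse prerequisite order looks like:

AC_SEARCH_LIBS([usb_init], [usb])
AC_SEARCH_LIBS([ftdi_init], [ftdi])

The first check may add -lusb to $LIBS, which the second check then uses when trying to link against libftdi.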
Note also that libftdi provides both C and C++ interfaces. In ftdi_init, I have been careful to choose a function that has C linkage, so as to avoid C++ name-mangling issues (see How to test a C++ library usability in configure.in?). You may also need to ensure that the tests are run with the C compiler (see Language Choice in the Autoconf manual).
Header test
Your AC_CHECK_HEADERS usage, on the other hand, does not appear to be inherently wrong. If the resulting configure script does not detect ftdi.h, then that implies that the header isn't in the compiler's default header search path. That might happen, for example, if it is installed in a subdirectory, such as /usr/include/ftdi. This would be a matter of both ftdi and system installation convention.
If it is ftdi convention for the headers to be installed in a subdirectory, then your source files should specify that in their #include directives:
#include <ftdi/ftdi.h>
If your source files in fact do that, then that should also be what you tell Autoconf to look for:
AC_CHECK_HEADERS([ftdi/ftdi.h])
Regardless of whether a subdirectory prefix is expected or used, it is good practice to accommodate the possibility of headers and / or libraries being installed in a non-standard location. Although one can always do that by specifying appropriate flags in the CPPFLAGS variable in configure's environment, I prefer and recommend using AC_ARG_WITH to designate a --with argument or AC_ARG_VAR to designate an environment variable that configure will consult for the purpose. For example,
AC_ARG_WITH([ftdi-includedir],
    [AS_HELP_STRING([--with-ftdi-includedir=dir],
        [specifies a custom directory for the libftdi header files])],
    [CPPFLAGS="$CPPFLAGS -I$withval"]
)
Exposing an argument or environment variable for the specific purpose highlights (in the output of ./configure --help) the fact that this is a knob that the user might need to adjust. Additionally, receiving the include directory via a for-purpose vector is sometimes useful for limiting in which compilations the designated include directory is made available.
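For completeness, the AC_ARG_VAR route mentioned above can be as short as this (the variable name FTDI_CPPFLAGS is illustrative):

AC_ARG_VAR([FTDI_CPPFLAGS], [preprocessor flags (-I...) needed to find the libftdi headers])
CPPFLAGS="$CPPFLAGS $FTDI_CPPFLAGS"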
On PKG_CHECK_MODULES
The Autotools objective and philosophy is to support the widest possible array of build machines and environments by minimizing external dependencies and writing the most portable configuration and build code possible. To this end, the Autotools are designed so that they themselves are not required to build projects on supported systems. Rather, Autoconf produces configure as a stand-alone, highly portable shell script, and Automake produces configurable templates for highly portable makefiles. These are intended to be included in source packages, to be used as-is on each build system. Making your configure script dependent on pkg-config being installed on every system where your project is to be built, as using PKG_CHECK_MODULES does, conflicts with those objectives.
How significant an issue that may be is a subject of some dispute. Where it is available, pkg-config can be very useful, especially for components that require complex build flags. PKG_CHECK_MODULES is thus very convenient for both package maintainer and package builder on those systems where it is present or readily available, for those components that provide pkg-config metadata.
But pkg-config is not necessarily available for every system targeted by your software. It cannot reasonably be assumed present or obtainable even on systems for which it is nominally available. And even on systems that have it, pkg-config metadata for the libraries of interest are not necessarily installed with the libraries.
As such, I urge you to avoid using PKG_CHECK_MODULES in your Autoconf projects. You need to know how to do without it in any case, because it is not an option for some libraries. Where appropriate, provide hooks by which the builder can supply appropriate flags, and let them choose whether to use pkg-config in conjunction with those. Decoupling configure from pkg-config in this way makes a bit more work for you, and in some cases for builders, but it is more flexible.
Your PKG_CHECK_MODULES example
Your example invocation appears ok in itself, supposing that "libftdi" is the appropriate pkg-config module name (you have to know the appropriate name):
PKG_CHECK_MODULES([LIBFTDI], [libftdi])
But although that may yield a configure script that runs successfully, it does not, in itself, do much for you. In particular, it verifies that pkg-config metadata for the named module is present, but
it does not verify the presence or test the use of the library or header
although it does set some output variables containing compile and link flags, you do not appear to be using those
specifically, if you're going to rely on pkg-config, then you should use the link flags it reports to you (here, $LIBFTDI_LIBS) instead of hardcoding -lftdi alone.
Furthermore, it is more typical to use the output variables created by PKG_CHECK_MODULES in your makefile than to use them to update $LIBS or other general variables inside configure. If you do use them in configure, however, then it is essential to understand that LIBS and LDFLAGS have different roles with little overlap. It is generally inappropriate, not to mention unnecessary, to include the LDFLAGS in LIBS. If you want to update LIBS inside configure, then this would be the way to do it:
LIBS="$LIBFTDI_LIBS $LIBS"
And if you're going to do that, then you probably should do the same with the compiler flags reported by pkg-config, if any:
CFLAGS="$CFLAGS $LIBFTDI_CFLAGS"
You can check it like this:
AC_CHECK_LIB([ftdi],[ftdi_init],[],[echo "error: missing libftdi library" && exit 1],[])
LDFLAGS="-lftdi $LDFLAGS"
The second argument for AC_CHECK_LIB is a function exported by the library, and in this case the init call works well.
If libftdi uses pkg-config (and it appears to, given that you said that snippet works), then PKG_CHECK_MODULES is what you want. The default action-if-not-found is to error out, so if this is a required dependency, it's exactly what you want.
But you shouldn't use LIBS that way. First because LDFLAGS does not have the same semantics as LIBS, second because the pkg-config file might have provided you with further search paths that are required.
Instead you should add to your Makefile.am the flags as you need them:
mytarget_CFLAGS = $(LIBFTDI_CFLAGS)
mytarget_LDADD = $(LIBFTDI_LIBS)
You can refer to my Autotools Mythbuster — Dependency Discovery for further details on how to use pkg-config for dependencies. You can see there how you would generally use AC_CHECK_LIB or AC_SEARCH_LIBS, but seriously, if pkg-config works, stick with that, as it's more reliable and consistent.

CMake add_subdirectory use different compiler [duplicate]

It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties which is shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true for e.g. building 32bit and 64bit binaries from the same compiler toolchain.
The quick-and-dirty way around this is using custom commands. But then you end up with what are basically glorified shell-scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake-makefiles.
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
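A minimal sketch of that route, assuming the host-compiled unit tests live in a tests/ subdirectory with their own CMakeLists.txt and that HOST_CXX_COMPILER is a cache variable you define yourself:

include(ExternalProject)
# Configure and build tests/ in its own CMake run, with its own compiler.
ExternalProject_Add(host_tests
    SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/tests
    CMAKE_ARGS      -DCMAKE_CXX_COMPILER=${HOST_CXX_COMPILER}
    INSTALL_COMMAND ""
)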
It's not impossible, despite what the top answer suggests. I have the same problem as the OP: I have some sources that are cross-compiled for a Raspberry Pi Pico, and some unit tests that I run on my host system.
To make this work, I'm using the very shameful "set" to override the compiler in the CMakeLists.txt for my test folder. Works great.
if(DEFINED ENV{HOST_CXX_COMPILER})
    set(CMAKE_CXX_COMPILER $ENV{HOST_CXX_COMPILER})
else()
    set(CMAKE_CXX_COMPILER "g++")
endif()
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem very much against using set to change the compiler, for some reason. They assume that you need to use one compiler for the entire project, which is an incorrect assumption for embedded systems projects.
My solution above works, and fits the philosophy I think. Users can still change their chosen compiler via environment variables, if it's not set then I do assume g++. set only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend @Bill Hoffman's answer:
Build your project as a super-build, using a template like the one at https://github.com/Sarcasm/cmake-superbuild, which will configure both the dependencies and your project as ExternalProjects (standalone CMake configure/build/install environments).

How to determine if platform library is static or dynamic from autotools?

Configuration
I use autotools (autoreconf -iv and ./configure) to generate correct makefiles. On my development machine (Fedora) everything works correctly. For make check I use the library libcheck, and from autotools I use Libtool. On Fedora the libcheck library is dynamic: libcheck.so.0.0.0 or some such. It works.
The Issue
When I push the commits to my repo on github and do a pull request, the result is tested on Travis CI which uses Ubuntu as a platform. Now on Ubuntu the libcheck is a static library: libcheck.a and a libcheck_pic.a.
When Travis does a make check I get the following error message:
/usr/bin/ld: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a(check.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a: could not read symbols: Bad value
Which means I have to somehow let configure determine what library I need. I suspect I need libcheck_pic.a for Ubuntu and the regular libcheck.so for Fedora.
The question
Does anyone know how to integrate this into configure.ac and test/Makefile.am using libtool? I would prefer to stay in line with autotools way of life.
I couldn't find usable information using Google; there are a lot of questions out there about the difference between static and dynamic libraries, but that is not what I need.
I would much appreciate if anyone could point me in the right direction, or maybe even solved this already?
I suspect you're right that the library you want to use on the CI system is libcheck_pic.a, for its name suggests that the routines within are compiled as position-independent code, which is exactly what the error message you receive tells you to use.
One way to approach the problem, then, would be to use libcheck_pic if it is available, and to fall back to plain libcheck otherwise. That's not too hard to configure your Autotools-based build system to do. You then record the appropriate library name in an output variable, and use that in your (auto)make files.
Autoconf's AC_SEARCH_LIBS macro specifically serves this kind of prioritized library search requirement, but it has the side effect, likely unwanted in this case, of modifying the LIBS variable. You can nevertheless make it work. Something like this might do it, for example:
LIBS_save=$LIBS
AC_SEARCH_LIBS([ck_assert], [check_pic check], [
    # Optional: add a test to verify that the chosen lib really provides PIC code.
    # Set LIBCHECK to the initial substring of $LIBS up to but excluding the first space.
    LIBCHECK=${LIBS%% *}
], [
    # ... or make this a warning, and disable the test suite when libcheck
    # is not available.
    AC_MSG_ERROR([A PIC version of libcheck is required])
])
AC_SUBST([LIBCHECK])
LIBS=$LIBS_save
I presume you know what to do with $(LIBCHECK) on the Make side.
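For completeness, a sketch of that Make side, e.g. in tests/Makefile.am (the program name is illustrative):

check_PROGRAMS     = test_suite
test_suite_SOURCES = test_suite.c
test_suite_LDADD   = $(LIBCHECK)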
As written, that has the limitation that if there is no PIC version of libcheck available then you won't find out until make or maybe make check. That's undesirable, and you could add Autoconf code to detect that situation if it is undesirable enough.
As an altogether different approach, you could consider building your tests statically (add -static to the appropriate *_LDFLAGS variable). Of course, this has the opposite problem: if a static version of the library is not available, then the build or tests fail. Also, it requires building a static version of your own code if you're not doing so already, and furthermore, it is the static version that will be tested.
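With Automake, that could be as simple as this, reusing the illustrative target name from above:

test_suite_LDFLAGS = -static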
For greatest flexibility, you could consider combining those two approaches. You might set up a fallback from one to the other, or you might set up separate targets for testing static code and PIC code, and exercise whichever of them (possibly both) is supported by the libraries available on the build system.

Switch between different GCC versions

I recently built an older version of GCC and installed it in my home directory (spec. ~/local/gcc-5.3.0). However, I need this compiler only for CUDA projects, and will be working with the system compiler (GCC 6.2.1) rest of the time. So, I guess I need to find a way to switch between these as and when needed, and in a way that also changes the library and include paths appropriately.
I understand that update-alternatives is one way to do so, but it seems to require root permissions to be set up, which I don't have.
The next best thing might be to write a shell function in .bashrc that ensures the following:
Each call switches between system and local gcc
Whenever a switch is made, it adjusts paths so that when the local gcc is chosen, it first looks for header files and libraries that were installed by itself before looking in system paths like /usr/local/include or /usr/local/lib. A previous answer suggests that modifying LD_LIBRARY_PATH should be sufficient, because a GCC installation "knows" where its own header files and static libraries are (I am not sure that's correct; I was thinking I might need to modify CPATH, etc.).
Is the above the best way to achieve this? If so, what paths should I set while implementing such a function?
Is the above the best way to achieve this? If so, what paths should I set while implementing such a function?
As others pointed out, PATH and LD_LIBRARY_PATH are mandatory. You may also update MANPATH for completeness.
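If you do roll it by hand, a minimal .bashrc sketch might look like the following (prefix taken from the question; whether the libraries land in lib or lib64 depends on how GCC was configured):

use_local_gcc() {
    # Prepend the home-built toolchain so its gcc, libraries, and manpages win.
    local prefix="$HOME/local/gcc-5.3.0"
    export PATH="$prefix/bin:$PATH"
    export LD_LIBRARY_PATH="$prefix/lib64:$prefix/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export MANPATH="$prefix/share/man:$MANPATH"
}

Undoing such changes cleanly on each switch is the fiddly part, and is exactly what the approach below automates.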
Rather than reinventing the wheel in .bashrc, I suggest employing the little-known but extremely handy and modular Environment Modules, which were designed for this specific purpose. You could use them like this (once you have set up the config for gcc/3.1.1):
$ module load gcc/3.1.1
$ which gcc
/usr/local/gcc/3.1.1/linux/bin/gcc
$ module unload gcc
$ which gcc
gcc not found
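Setting up the config amounts to writing a small modulefile and placing it on your $MODULEPATH; a sketch for the home-built compiler from the question (e.g. saved as ~/modulefiles/gcc/5.3.0):

#%Module1.0
## Home-built GCC 5.3.0
set          prefix  $env(HOME)/local/gcc-5.3.0
prepend-path PATH            $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib64
prepend-path MANPATH         $prefix/share/man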

Best practices for CFLAGS handling in configure scripts?

I'm not a fan of autoconf, but in the interest of principle of least surprise, I'm trying to make my (non-autoconf) configure scripts behave as closely as possible to what users expect from an autoconf-based build system. The GNU coding standards are actually rather reasonable on this topic, and explicitly mention the possibility of not using autoconf/automake but providing a compatible interface in another way:
https://www.gnu.org/prep/standards/standards.html#Configuration
One issue I cannot find any good description of, however, is how to best handle CFLAGS. It's clear to me that any essential flags (like -I$(srcdir)/inc) that are none of the user's business don't belong in CFLAGS in either the configure script or the makefile, as overriding them would completely break the build. So these I've either hard-coded in the makefile, or (if they require some detection) put the detection logic in configure, but passed them through a separate make variable instead of CFLAGS so they won't get overridden.
What I'm still unclear on how to handle best, though, is optional things like optimization levels/options, warning flags, debugging, etc. Should flags enabled by things like --enable-debug or --enable-warnings get added to CFLAGS or passed in some other make variable? What if they conflict with flags already in user-provided CFLAGS?
I can see where for a lot of projects the answer might just be "don't mess with that at all and let the user choose their CFLAGS", but some of my use cases are projects whose user base is generally interested in having out-of-the-box optimized and debugging configurations.
Edit: Another issue I forgot: when it comes to detecting and automatically using certain CFLAGS that aren't essential but preferred, should this be done only if the user left CFLAGS blank in the environment, or always?
If you want to handle argument like --enable-warnings, you have a lot of work to do. If you want to keep it simple, you can have that argument add something like
CFLAGS="$CFLAGS -Wall -Wextra"; CPPFLAGS="$CPPFLAGS -DWARNINGS"
in your configure script, but this actually opens a big can of worms. Are those options recognized by all compilers, or are you assuming that your user is using a compiler which recognizes those options and for which they do what you want? You probably need to wrap that assignment in a check for a set of compilers you know about, or at least check that the compiler being used doesn't error out when passed those flags (perhaps the user has already set -Wsome in CFLAGS, and the compiler will error out at the conflicting argument -Wall). The same problems apply to --enable-debug. It seems perfectly reasonable to respond to --enable-debug by appending a define to CPPFLAGS, so that the user does not need to examine your source to determine whether debugging is enabled via -DEBUG rather than -DDEBUG, but you cannot predict all situations, and it is generally best to err on the side of caution and not do such things.
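That said, the "at least doesn't error out" check is easy to sketch in a hand-written configure script like yours (helper and file names are illustrative):

# Echo a flag, with a leading space, only if the user's compiler accepts it.
tryflag () {
    echo 'int main(void){return 0;}' > conftest.c
    if $CC $CFLAGS "$1" -c -o conftest.o conftest.c 2>/dev/null; then
        printf ' %s' "$1"
    fi
    rm -f conftest.c conftest.o
}
# --enable-warnings: add only the warning flags this compiler understands.
CFLAGS="$CFLAGS$(tryflag -Wall)$(tryflag -Wextra)"

Keep in mind the caveat above: some compilers merely warn about unknown flags rather than rejecting them, so a probe like this is a heuristic, not a guarantee.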
If you want your users to have "out-of-the-box optimized and debugging configurations", the configure script for the project is the wrong place to do it. Keep the configury simple, and have your users use a package management system like pkgsrc for such things. If you want your distribution tarball to have some of that functionality, provide a set of scripts (perhaps invoked automagically by the configure script after detecting a platform, but with the ability to override that feature) for common platforms that make these assignments for the user and thus provide the desired functionality. But keep the configure script itself bare-bones.
It is sane enough to expect users to have to wrestle with things a bit when providing custom flags. User CFLAGS can be appended and user CPPFLAGS prepended: header search path options take the first match, while compiler optimizations and most directives take the last option given (or rather, later options override prior ones).
--enable-debug and perhaps other command-line directives provided to the configure script do not only have to change compiler options, but have the possibility of changing things within the source as well (maybe, for example, redefining inline macros to be actual functions), so their uses are somewhat different.
In conclusion, have user-specified CPPFLAGS prepended and user-specified CFLAGS appended; any flag added by a configure script option may be either appended or prepended, depending on context.
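In Make terms, that conclusion comes out roughly as follows (variable names are illustrative; -I$(srcdir)/inc stands in for the project's essential flags):

ALL_CPPFLAGS = $(CPPFLAGS) -I$(srcdir)/inc
ALL_CFLAGS   = -O2 -g $(CFLAGS)
%.o: %.c
	$(CC) $(ALL_CPPFLAGS) $(ALL_CFLAGS) -c -o $@ $<

User CPPFLAGS come first so their include paths win the first-match search; user CFLAGS come last so their options override the defaults.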
