Switch between different GCC versions

I recently built an older version of GCC and installed it in my home directory (specifically ~/local/gcc-5.3.0). However, I need this compiler only for CUDA projects and will be working with the system compiler (GCC 6.2.1) the rest of the time. So I guess I need a way to switch between the two as needed, and in a way that also changes the library and include paths appropriately.
I understand that update-alternatives is one way to do so, but it seems to require root permissions to be set up, which I don't have.
The next best thing might be to write a shell function in .bashrc that ensures the following:
Each call switches between system and local gcc
Whenever a switch is made, it adjusts paths so that when the local gcc is chosen, it first looks for header files and libraries installed by that compiler before looking in system paths like /usr/local/include or /usr/local/lib. A previous answer suggests that modifying LD_LIBRARY_PATH should be sufficient, because a GCC installation "knows" where its own header files and static libraries are (I am not sure that's correct; I was thinking I might need to modify CPATH, etc.).
Is the above the best way to achieve this? If so, what paths should I set while implementing such a function?

As others pointed out, PATH and LD_LIBRARY_PATH are mandatory. You may also update MANPATH for completeness.
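A minimal sketch of such a function, assuming the ~/local/gcc-5.3.0 prefix from the question (the lib64 and share/man subdirectories are assumptions; check what your build actually installed):

# Sketch only: prepend the locally built GCC to the relevant search paths.
use_local_gcc() {
    local prefix="$HOME/local/gcc-5.3.0"
    export PATH="$prefix/bin:$PATH"
    export LD_LIBRARY_PATH="$prefix/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export MANPATH="$prefix/share/man:$MANPATH"
}

Switching back cleanly is the fiddly part, since you would have to strip those entries out again; that bookkeeping is one reason to prefer the approach below.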
Rather than reinventing the wheel in .bashrc, I suggest employing the little-known but extremely handy and modular Environment Modules, which were designed for this specific purpose. Once you have set up a modulefile for gcc/3.1.1, you could use them like this:
$ module load gcc/3.1.1
$ which gcc
/usr/local/gcc/3.1.1/linux/bin/gcc
$ module unload gcc
$ which gcc
gcc not found
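For reference, the modulefile behind such a setup could look roughly like this (a sketch using the ~/local/gcc-5.3.0 prefix from the question; save it as, say, ~/modulefiles/gcc/5.3.0 and add ~/modulefiles to MODULEPATH):

#%Module1.0
## Sketch of a modulefile for a home-directory GCC install.
set prefix $env(HOME)/local/gcc-5.3.0
prepend-path PATH            $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib64
prepend-path MANPATH         $prefix/share/man

module unload then undoes exactly what module load did, which is what makes this approach cleaner than hand-rolled shell functions.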

Related

autoconf: best practice for including `libftdi` in `configure.ac`

Hello everybody out there using GNU autoconf,
What is the best practice for finding libftdi and including it with autoconf when compiling a C program that uses it?
The following snippet from a configure.ac file works, but I'm not sure whether it is best practice:
PKG_CHECK_MODULES([LIBFTDI], [libftdi])
#AC_CHECK_LIB([ftdi],[ftdi],[ftdi]) # Why doesn't this work?
#AC_SEARCH_LIBS([ftdi],[ftdi],[ftdi]) # Why doesn't this work?
#AC_CHECK_HEADERS([ftdi.h],[],[echo "error: missing libftdi header files" && exit 1])
LIBS="-lftdi $LIBS $LDFLAGS" # works, but is this the best way?
I'm building the program with autoconf (GNU Autoconf) 2.69 and compiling it with gcc version 7.5.0 on Ubuntu 18.04.
Why your other attempts failed
Library tests
Your commented-out AC_CHECK_LIB and AC_SEARCH_LIBS examples do not demonstrate correct usage. Usage details are presented in the manual, but to sum up:
the arguments to AC_CHECK_LIB are
The simple name of the library, i.e. ftdi
The name of a characteristic function provided by the library
(optional) Code for configure to execute in the event that the library is found. Default is to prepend a link option to $LIBS and define a HAVE_LIB* preprocessor macro.
(optional) Code for configure to execute in the event that the library is not found
(optional) Additional library link options (not already in $LIBS) that are needed to link a program that uses the library being checked
the arguments to AC_SEARCH_LIBS are
The name of the function to search for
A list of one or more library names to search
(optional) Code for configure to execute in the event that the library is found, in addition to prepending a link option to $LIBS (but not defining any preprocessor macro)
(optional) Code for configure to execute in the event that the library is not found
(optional) Additional library link options (not already in $LIBS) that are needed to link a program that uses the library being checked
Neither your AC_CHECK_LIB example nor your AC_SEARCH_LIBS example properly designates an existing libftdi function to check for. Moreover, the third argument in each case is unlikely to be valid shell / Autoconf code, so in the event that the library were found, configure would probably crash. Better might be:
AC_CHECK_LIB([ftdi], [ftdi_init])
or
AC_SEARCH_LIBS([ftdi_init], [ftdi])
Depending on what exactly you want to do, on details of libftdi, and on the configure.ac context, you might need to provide appropriate values for some or all of the optional arguments.
The main reasons for a library check to fail despite the library in fact being installed are
the library being installed in a location that is not in the default search path
the library having link dependencies on other libraries, and those have not (yet) been accounted for at the time of the check
The former is analogous to header installation location considerations discussed in the next section. The latter can be addressed by adding explicit extra link flags via the fifth argument to AC_CHECK_LIB or AC_SEARCH_LIBS, but is more often addressed semi-automatically by performing AC_CHECK_LIB or AC_SEARCH_LIBS tests in reverse prerequisite order, so that the value of LIBS is built up with an appropriately-ordered list of link flags, ready at each point to support the next check, and ultimately appropriate for supporting the overall compilation.
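For instance, older libftdi versions link against libusb-0.1, so a check sequence in reverse prerequisite order would test for libusb first (a sketch; verify the actual dependency of the libftdi version you target):

AC_SEARCH_LIBS([usb_init], [usb])
AC_SEARCH_LIBS([ftdi_init], [ftdi])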
Note also that libftdi provides both C and C++ interfaces. In ftdi_init, I have been careful to choose a function that has C linkage, so as to avoid C++ name-mangling issues (see How to test a C++ library usability in configure.in?). You may also need to ensure that the tests are run with the C compiler (see Language Choice in the Autoconf manual).
Header test
Your AC_CHECK_HEADERS usage, on the other hand, does not appear to be inherently wrong. If the resulting configure script does not detect ftdi.h, then that implies that the header isn't in the compiler's default header search path. That might happen, for example, if it is installed in a subdirectory, such as /usr/include/ftdi. This would be a matter of both ftdi and system installation convention.
If it is ftdi convention for the headers to be installed in a subdirectory, then your source files should specify that in their #include directives:
#include <ftdi/ftdi.h>
If your source files in fact do that, then that should also be what you tell Autoconf to look for:
AC_CHECK_HEADERS([ftdi/ftdi.h])
Regardless of whether a subdirectory prefix is expected or used, it is good practice to accommodate the possibility of headers and / or libraries being installed in a non-standard location. Although one can always do that by specifying appropriate flags in the CPPFLAGS variable in configure's environment, I prefer and recommend using AC_ARG_WITH to designate a --with argument or AC_ARG_VAR to designate an environment variable that configure will consult for the purpose. For example,
AC_ARG_WITH([ftdi-includedir],
    [AS_HELP_STRING([--with-ftdi-includedir=dir],
        [specifies a custom directory for the libftdi header files])],
    [CPPFLAGS="$CPPFLAGS -I$withval"]
)
Exposing an argument or environment variable for the specific purpose highlights (in the output of ./configure --help) the fact that this is a knob that the user might need to adjust. Additionally, receiving the include directory via a for-purpose vector is sometimes useful for limiting in which compilations the designated include directory is made available.
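The AC_ARG_VAR route is even shorter; the variable name FTDI_CPPFLAGS below is hypothetical:

AC_ARG_VAR([FTDI_CPPFLAGS], [preprocessor flags needed to locate the libftdi headers])
CPPFLAGS="$CPPFLAGS $FTDI_CPPFLAGS"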
On PKG_CHECK_MODULES
The Autotools objective and philosophy is to support the widest possible array of build machines and environments by minimizing external dependencies and writing the most portable configuration and build code possible. To this end, the Autotools are designed so that they themselves are not required to build projects on supported systems. Rather, Autoconf produces configure as a stand-alone, highly portable shell script, and Automake produces configurable templates for highly portable makefiles. These are intended to be included in source packages, to be used as-is on each build system. Making your configure script dependent on pkg-config being installed on every system where your project is to be built, as using PKG_CHECK_MODULES does, conflicts with those objectives.
How significant an issue that may be is a subject of some dispute. Where it is available, pkg-config can be very useful, especially for components that require complex build flags. PKG_CHECK_MODULES is thus very convenient for both package maintainer and package builder on those systems where it is present or readily available, for those components that provide pkg-config metadata.
But pkg-config is not necessarily available for every system targeted by your software. It cannot reasonably be assumed present or obtainable even on systems for which it is nominally available. And even on systems that have it, pkg-config metadata for the libraries of interest are not necessarily installed with the libraries.
As such, I urge you to avoid using PKG_CHECK_MODULES in your Autoconf projects. You need to know how to do without it in any case, because it is not an option for some libraries. Where appropriate, provide hooks by which the builder can supply appropriate flags, and let them choose whether to use pkg-config in conjunction with those. Decoupling configure from pkg-config in this way makes a bit more work for you, and in some cases for builders, but it is more flexible.
Your PKG_CHECK_MODULES example
Your example invocation appears ok in itself, supposing that "libftdi" is the appropriate pkg-config module name (you have to know the appropriate name):
PKG_CHECK_MODULES([LIBFTDI], [libftdi])
But although that may yield a configure script that runs successfully, it does not, in itself, do much for you. In particular, it verifies that pkg-config metadata for the named module is present, but
it does not verify the presence or test the use of the library or header
although it does set some output variables containing compile and link flags, you do not appear to be using those
specifically, if you're going to rely on pkg-config, then you should use the link flags it reports to you instead of hardcoding -lftdi.
Furthermore, it is more typical to use the output variables created by PKG_CHECK_MODULES in your makefile than to use them to update $LIBS or other general variables inside configure. If you do use them in configure, however, then it is essential to understand that LIBS and LDFLAGS have different roles with little overlap. It is generally inappropriate, not to mention unnecessary, to include the LDFLAGS in LIBS. If you want to update LIBS inside configure, then this would be the way to do it:
LIBS="$LIBFTDI_LIBS $LIBS"
And if you're going to do that, then you probably should do the same with the compiler flags reported by pkg-config, if any:
CFLAGS="$CFLAGS $LIBFTDI_CFLAGS"
You can check it like this:
AC_CHECK_LIB([ftdi], [ftdi_init], [], [echo "error: missing libftdi library" && exit 1])
The second argument to AC_CHECK_LIB is a function exported by the library, and in this case the init call works well. Note that no manual follow-up assignment is needed: when the check succeeds, AC_CHECK_LIB's default behavior prepends -lftdi to LIBS (and a library never belongs in LDFLAGS anyway; see below).
If libftdi uses pkg-config (and it appears to, given that you said the snippet works), then PKG_CHECK_MODULES is what you want. The default action-if-not-found is to error out, so if this is a required dependency, it's exactly what you want.
But you shouldn't use LIBS that way. First because LDFLAGS does not have the same semantics as LIBS, second because the pkg-config file might have provided you with further search paths that are required.
Instead you should add to your Makefile.am the flags as you need them:
mytarget_CFLAGS = $(LIBFTDI_CFLAGS)
mytarget_LDADD = $(LIBFTDI_LIBS)
You can refer to my Autotools Mythbuster — Dependency Discovery for further details on how to use pkg-config for dependencies. You can see there how you would generally use AC_CHECK_LIB or AC_SEARCH_LIBS, but seriously, if pkg-config works, stick to that, as it's more reliable and consistent.
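If you want the best of both worlds, one possible pattern (a sketch, not taken from either answer) is to consume the pkg-config metadata when they are present and fall back to a plain library search otherwise:

PKG_CHECK_MODULES([LIBFTDI], [libftdi],
  [CFLAGS="$CFLAGS $LIBFTDI_CFLAGS"
   LIBS="$LIBFTDI_LIBS $LIBS"],
  [AC_SEARCH_LIBS([ftdi_init], [ftdi], [],
    [AC_MSG_ERROR([libftdi not found])])])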

CMake add_subdirectory use different compiler [duplicate]

It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties which is shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true for e.g. building 32bit and 64bit binaries from the same compiler toolchain.
The quick-and-dirty way around this is using custom commands. But then you end up with what are basically glorified shell-scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake-makefiles.
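Such a wrapper can be a few lines of shell; the host/ and cross/ source directories and the toolchain file below are hypothetical, and the -S/-B options need CMake 3.13 or later:

#!/bin/sh
# Sketch: configure and build the host tools and the cross-compiled
# target from two independent build trees with one command.
set -e
cmake -S host -B build-host
cmake --build build-host
cmake -S cross -B build-cross -DCMAKE_TOOLCHAIN_FILE="$PWD/cmake/arm-toolchain.cmake"
cmake --build build-cross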
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
Not impossible, as the top answer suggests. I have the same problem as the OP: I have some sources that are cross-compiled for a Raspberry Pi Pico, and some unit tests that I run on my host system.
To make this work, I'm using the very shameful "set" to override the compiler in the CMakeLists.txt for my test folder. Works great.
if(DEFINED ENV{HOST_CXX_COMPILER})
    set(CMAKE_CXX_COMPILER "$ENV{HOST_CXX_COMPILER}")
else()
    set(CMAKE_CXX_COMPILER "g++")
endif()
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem very much against using set to change the compiler, for reasons that are unclear to me. They assume that you need one compiler for the entire project, which is an incorrect assumption for embedded systems projects.
My solution above works and, I think, fits the philosophy. Users can still change their chosen compiler via an environment variable; if it's not set, then I assume g++. set only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend @Bill Hoffman's answer:
Build your project as a super-build, using a template like the one at https://github.com/Sarcasm/cmake-superbuild, which will configure both the dependencies and your project as an ExternalProject (a standalone CMake configure/build/install environment).
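A minimal sketch of that idea (the firmware/ subdirectory and its toolchain file are placeholders):

include(ExternalProject)
# The subproject is configured in its own CMake run, so it can use its
# own (cross-)compiler via the toolchain file.
ExternalProject_Add(firmware
  SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/firmware
  CMAKE_ARGS      -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/pico-toolchain.cmake
  INSTALL_COMMAND "")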

‘Feature detecting’ required compile flags in Make (libiconv on Mac)

Ubuntu has iconv built into its standard C library and does not require a flag in LDFLAGS. OS X does not have it built in and requires the flag to be set.
My current approach is to use ifeq in my Makefile to conditionally set LDFLAGS += -liconv when on OS X.
I am wondering if there is a better approach? I am heavily influenced by the feature-detection mindset of web development and hope I can use a similar approach to detect whether the flag is required on the current system or not.
For manually written makefiles, individually setting the contents of LDFLAGS and friends is an acceptable way of doing it. This may be simpler if only a couple of features are needed.
If you want to generically detect the feature one route to go would be to use a makefile generator like CMake or Automake.
The final option would be to perform manually, at make time, the checks that Automake does: have a sample file that you compile and link with one of the candidate options and see if it works; if it doesn't, try the next, and so on. This way you are testing which flags are required for each feature. Automake does this once, when the project is ./configured, but you could do it every time make is run, or whenever suits your build setup.
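If you would rather stay with plain Make, a hedged sketch of that probe for -liconv might look like this (GNU Make; the conftest scratch file names are arbitrary):

# Probe at make time: try linking a trivial iconv program first without
# -liconv, then with it, and keep whichever flag worked.
ICONV_LIBS := $(shell \
  printf '\#include <iconv.h>\nint main(void) { iconv_open("UTF-8", "UTF-8"); return 0; }\n' > conftest.c; \
  if $(CC) conftest.c -o conftest.out 2>/dev/null; then :; \
  elif $(CC) conftest.c -liconv -o conftest.out 2>/dev/null; then echo '-liconv'; fi; \
  rm -f conftest.c conftest.out)
LDFLAGS += $(ICONV_LIBS)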

What is the point of using `-L` when there is `LD_LIBRARY_PATH`?

After reading this question, my first reaction was that the user is not seeing the error because he specifies the location of the library with -L.
However, apparently, the -L option only influences where the linker looks, and has no influence over where the loader looks when you try to run the compiled application.
My question then is what's the point of -L? Since you won't be able to run your binary unless you have the proper directories in LD_LIBRARY_PATH anyway, why not just put them there in the first place, and drop the -L, since the linker looks in LD_LIBRARY_PATH automatically?
It might be the case that you are cross-compiling and the linker is targeting a system other than your own. For instance, MinGW can be used to compile Windows binaries on Linux. Here -L will point to the DLLs needed for linking, and LD_LIBRARY_PATH will point to any libraries the linker itself needs in order to run. This allows compiling and linking for different architectures, OS ABIs, or processor types.
It's also helpful when trying to build special targets. It might be the case that one links a static version of a program against a different static library. This is the first step in Linux From Scratch, where one creates a separate mini-environment on the main system to become a chroot jail.
Setting LD_LIBRARY_PATH will affect all the commands you run to build your code (including the compiler itself).
That's not desirable in general (e.g. you might not want your compiler to run debug/instrumented libraries while it compiles - it might even go as far as breaking your compiles).
Use -L to tell the compiler where to look, LD_LIBRARY_PATH to influence runtime linking.
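A concrete illustration of that split (libfoo and the install path are hypothetical):

# -L lets the link succeed at build time ...
gcc main.c -L"$HOME/local/lib" -lfoo -o main
# ... and LD_LIBRARY_PATH lets the dynamic loader find libfoo at run time.
LD_LIBRARY_PATH="$HOME/local/lib" ./main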
Building the binary and running the binary are two completely independent and unrelated processes. You seem to suggest that the running environment should affect the building environment, i.e. you seem to be making an assumption that the code build in some setup (account, machine) will be later run in the same setup. I find this assumption rather strange. I'd even say that in most cases the building and the running are done in different environments. I would actually prefer my compilers not to derive any assumptions about future running environment from the environment these compilers are invoked in. Looking onto the LD_LIBRARY_PATH of the building environment would be a major no-no.
The other answers are all good, but one thing nobody has mentioned yet is static libraries. Most of the time when you use -L, it's with a static library built locally in your build tree that you don't intend to install, and it has nothing to do with LD_LIBRARY_PATH.
Compilers on Solaris support -R /runtime/path/to/some/libs, which adds to the path where libraries are searched by the run-time linker. On Linux the same can be achieved with -Wl,-rpath,/runtime/path/to/some/libs, which passes the -rpath /runtime/path/to/some/libs option to ld. GNU ld also supports -R /path/to/libs for compatibility with other ELF linkers, but this should be avoided, as -R is normally used to specify symbol files to GNU ld.
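For example (the same hypothetical libfoo as above), embedding an rpath at link time removes the need for LD_LIBRARY_PATH when the program runs:

gcc main.c -L"$HOME/local/lib" -lfoo -Wl,-rpath,"$HOME/local/lib" -o main
./main   # the run-time linker finds libfoo via the embedded rpath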

CMake: how to produce binaries "as static as possible"

I would like to have control over the type of libraries that get found and linked with my binaries in CMake. The final goal is to generate binaries "as static as possible", that is, to link statically against every library that has a static version available. This is important, as it would enable portability of the binaries across different systems during testing.
ATM this seems to be quite difficult to achieve, as the FindXXX.cmake packages, or more precisely the find_library command, always pick up the dynamic libraries whenever both static and dynamic versions are available.
Tips on how to implement this functionality - preferably in an elegant way - would be very welcome!
I did some investigation and although I could not find a satisfying solution to the problem, I did find a half-solution.
The problem of static builds boils down to 3 things:
Building and linking the project's internal libraries.
Pretty simple, one just has to flip the BUILD_SHARED_LIBS switch OFF.
Finding static versions of external libraries.
The only way seems to be setting CMAKE_FIND_LIBRARY_SUFFIXES to contain the desired file suffix(es) (it's a priority list).
This solution is quite a "dirty" one and very much against CMake's cross-platform aspirations. IMHO this should be handled behind the scenes by CMake, but as far as I understood, because of the ".lib" confusion on Windows, it seems that the CMake developers prefer the current implementation.
Linking statically against system libraries.
CMake provides an option LINK_SEARCH_END_STATIC which based on the documentation: "End a link line such that static system libraries are used."
One would think this is it, the problem is solved. However, it seems that the current implementation is not up to the task. If the option is turned on, CMake generates an implicit linker call with an argument list that ends with the options passed to the linker, including -Wl,-Bstatic. However, this is not enough: only instructing the linker to link statically results in an error, in my case /usr/bin/ld: cannot find -lgcc_s. What is missing is also telling gcc that we need static linking, via the -static argument, which CMake does not add to the linker call. I think this is a bug, but I haven't managed to get a confirmation from the developers yet.
Finally, I think all this could and should be done by CMake behind the scenes; after all it's not so complicated, except that it's impossible on Windows - if that counts as complicated...
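For reference, the three points above translate roughly into the following CMakeLists.txt fragment (a sketch, placed after project(); as discussed in point 3, the -static flag may or may not link cleanly with your toolchain):

set(BUILD_SHARED_LIBS OFF)              # 1. build internal libraries statically
set(CMAKE_FIND_LIBRARY_SUFFIXES ".a")   # 2. prefer static archives in find_library
set(CMAKE_EXE_LINKER_FLAGS "-static")   # 3. ask the gcc driver for a static link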
A well made FindXXX.cmake file will include something for this. If you look in FindBoost.cmake, you can set the Boost_USE_STATIC_LIBS variable to control whether or not it finds static or shared libraries. Unfortunately, a majority of packages do not implement this.
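For example (a sketch; the filesystem component is just an illustration):

set(Boost_USE_STATIC_LIBS ON)
find_package(Boost REQUIRED COMPONENTS filesystem)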
If a module uses the find_library command (most do), then you can change CMake's behavior through CMAKE_FIND_LIBRARY_SUFFIXES variable. Here's the relevant CMake code from FindBoost.cmake to use this:
IF(WIN32)
  SET(CMAKE_FIND_LIBRARY_SUFFIXES .lib .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ELSE(WIN32)
  SET(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ENDIF(WIN32)
You can either put this before calling find_package, or, better, you can modify the .cmake files themselves and contribute back to the community.
For the .cmake files I use in my project, I keep all of them in their own folder within source control. I did this because I found that having the correct .cmake file for some libraries was inconsistent and keeping my own copy allowed me to make modifications and ensure that everyone who checked out the code would have the same build system files.
