I would like to have control over the type of the libraries that get found/linked with my binaries in CMake. The final goal is to generate binaries "as static as possible", that is, to link statically against every library that has a static version available. This is important because it would enable portability of the binaries across different systems during testing.
At the moment this seems to be quite difficult to achieve, as the FindXXX.cmake packages, or more precisely the find_library command, always pick up the dynamic libraries whenever both static and dynamic versions are available.
Tips on how to implement this functionality - preferably in an elegant way - would be very welcome!
I did some investigation and although I could not find a satisfying solution to the problem, I did find a half-solution.
The problem of static builds boils down to 3 things:
1. Building and linking the project's internal libraries.
Pretty simple: one just has to flip the BUILD_SHARED_LIBS switch to OFF.
2. Finding static versions of external libraries.
The only way seems to be setting CMAKE_FIND_LIBRARY_SUFFIXES to contain the desired file suffix(es) (it's a priority list).
This solution is quite a "dirty" one and very much against CMake's cross-platform aspirations. IMHO this should be handled behind the scenes by CMake, but, as far as I understand, because of the ".lib" confusion on Windows the CMake developers prefer the current implementation.
3. Linking statically against system libraries.
CMake provides the option LINK_SEARCH_END_STATIC which, according to the documentation, will "End a link line such that static system libraries are used."
One would think that's it, the problem is solved. However, it seems that the current implementation is not up to the task. If the option is turned on, CMake generates an implicit linker call with an argument list that ends with the options passed to the linker, including -Wl,-Bstatic. However, this is not enough: only instructing the linker to link statically results in an error, in my case /usr/bin/ld: cannot find -lgcc_s. What is missing is telling gcc as well that we need static linking, through the -static argument, which CMake does not add to the linker call. I think this is a bug, but I haven't managed to get a confirmation from the developers yet. (A sketch combining the three points follows below.)
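Here is a minimal sketch combining the three points (the target name is made up, and, as described above, point 3 may still need the manual -static workaround):

# 1. Build the project's own libraries statically.
set(BUILD_SHARED_LIBS OFF)

# 2. Make find_library prefer static archives.
if(WIN32)
  set(CMAKE_FIND_LIBRARY_SUFFIXES .lib .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
else()
  set(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
endif()

add_executable(myapp main.c)

# 3. Ask for static system libraries at the end of the link line,
#    plus the -static workaround for the flag CMake doesn't generate.
set_target_properties(myapp PROPERTIES
  LINK_SEARCH_END_STATIC ON
  LINK_FLAGS "-static")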
Finally, I think all this could and should be done by CMake behind the scenes; after all, it's not so complicated, except that it's impossible on Windows - if that counts as complicated...
A well-made FindXXX.cmake file will include something for this. If you look in FindBoost.cmake, you can set the Boost_USE_STATIC_LIBS variable to control whether it finds static or shared libraries. Unfortunately, the majority of packages do not implement this.
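For example, before the corresponding find_package call (the component list is just an illustration):

set(Boost_USE_STATIC_LIBS ON)
find_package(Boost REQUIRED COMPONENTS filesystem system)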
If a module uses the find_library command (most do), then you can change CMake's behavior through the CMAKE_FIND_LIBRARY_SUFFIXES variable. Here's the relevant CMake code from FindBoost.cmake that uses this:
IF(WIN32)
SET(CMAKE_FIND_LIBRARY_SUFFIXES .lib .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ELSE(WIN32)
SET(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ENDIF(WIN32)
You can either put this before calling find_package, or, better, you can modify the .cmake files themselves and contribute back to the community.
For the .cmake files I use in my project, I keep all of them in their own folder within source control. I did this because I found that having the correct .cmake file for some libraries was inconsistent and keeping my own copy allowed me to make modifications and ensure that everyone who checked out the code would have the same build system files.
Related
Coming from programming environments that support package managers, I experience a lot of discomfort installing and using libraries not included in the default project.
For example, #include <threads.h> triggers the error "threads.h file not found". I found that the compiler looks for header files in /Library/Developer/CommandLineTools/usr/include/c++/v1 by issuing gcc -print-prog-name=cpp -v. I am not sure if this is a complete folder list. How do I find the ones that it doesn't find by default? I am on OSX, but a Windows solution is also desired.
The question doesn't really say whether you are building your own project, or someone else's, and whether you use an IDE or some build system. I'll try to give a generic answer suitable for most scenarios.
But first, it's header files, not libraries (which are a different kind of pain, by the way). You need to explicitly make them available to the compiler, unless they reside on a standard search path. Alas, it's a lot of manual work sometimes, especially when you need to build a third-party project with a ton of dependencies.
I am not sure if this a complete folder list?
Figuring out the standard include paths of your compiler can be tricky. Here's one question that has some hints: What are the GCC default include directories?
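For GCC and Clang (which is what gcc actually is on macOS), one well-known approach from the answers to that question is to ask the preprocessor to print its search list:

echo | gcc -x c++ -E -Wp,-v - >/dev/null

The directories are printed to stderr between "#include <...> search starts here:" and "End of search list.".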
How do I find the ones that it doesn't find by default?
They may or may not be present on your machine. If they are, you'll have to find out where they are located. Otherwise you have to figure out what library they belong to, then download and unpack (and probably build) it. Either way, you will have to specify the path to that library's header files in your IDE (or Makefile, or whatever you use). Oh, and you need to make sure that the library version matches the version required by the project. Fun!
On macOS you can use third-party package managers (e.g. brew) to handle library installation for you.
pkg-config is not available on macOS, unless you install it from a third-party source.
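For instance, with Homebrew (libpng serves purely as an illustration here):

brew install pkg-config libpng
cc myprogram.c $(pkg-config --cflags --libs libpng)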
If you are building your own project, a somewhat better solution is to use CMake and its find_package command. However, only libraries supported by CMake can be discovered this way. Fortunately, their collection of supported libraries is quite extensive, and you can make your own find_package scripts. Moreover, CMake is cross-platform, and it can handle versioning for you.
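A minimal sketch, sticking with the libpng illustration (the target name is hypothetical):

cmake_minimum_required(VERSION 3.5)
project(myprogram C)
find_package(PNG REQUIRED)
add_executable(myprogram myprogram.c)
target_link_libraries(myprogram PRIVATE PNG::PNG)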
It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties, shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true for e.g. building 32-bit and 64-bit binaries from the same compiler toolchain.
The quick-and-dirty way around this is using custom commands. But then you end up with what are basically glorified shell-scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build everything with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake builds.
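Such a wrapper can be as small as this (directory names and the toolchain file are hypothetical; the -S/-B options need CMake 3.13+):

cmake -S host -B build-host && cmake --build build-host
cmake -S target -B build-target -DCMAKE_TOOLCHAIN_FILE=cmake/target.cmake && cmake --build build-target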
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
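As a rough sketch of that idea, the top-level build can pull in, say, host-side tools as an external project with its own compiler (directory and variable names are made up):

include(ExternalProject)
ExternalProject_Add(host_tools
  SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/host-tools
  CMAKE_ARGS      -DCMAKE_CXX_COMPILER=${HOST_CXX_COMPILER}
  INSTALL_COMMAND "")

Because host_tools gets its own configure step, it also gets its own compiler.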
It's not impossible, as the top answer suggests. I have the same problem as the OP: I have some sources for cross-compiling for a Raspberry Pi Pico, and then some unit tests that I am running on my host system.
To make this work, I'm using the very shameful set() to override the compiler in the CMakeLists.txt for my test folder. Works great.
# Let the user choose the host compiler through an environment
# variable; fall back to g++ otherwise.
if(DEFINED ENV{HOST_CXX_COMPILER})
    set(CMAKE_CXX_COMPILER $ENV{HOST_CXX_COMPILER})
else()
    set(CMAKE_CXX_COMPILER "g++")
endif()
# Clear the cross-compilation flags inherited from the parent scope.
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem very much against using set to change the compiler, for some reason. They assume that you need to use one compiler for the entire project, which is an incorrect assumption for embedded systems projects.
My solution above works, and I think it fits the philosophy. Users can still change their chosen compiler via an environment variable; if it's not set, then I assume g++. set only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend @Bill Hoffman's answer:
Build your project as a super-build, using some kind of template like the one at https://github.com/Sarcasm/cmake-superbuild, which will configure both the dependencies and your project as ExternalProjects (standalone CMake configure/build/install environments).
Configuration
I use autotools (autoreconf -iv and ./configure) to generate correct makefiles. On my development machine (Fedora) everything works correctly. For make check I use the check library, and from autotools I use Libtool. On Fedora the check library is dynamic: libcheck.so.0.0.0 or some such. It works.
The Issue
When I push the commits to my repo on GitHub and open a pull request, the result is tested on Travis CI, which uses Ubuntu as its platform. Now, on Ubuntu, check is a static library: libcheck.a, plus a libcheck_pic.a.
When Travis does a make check I get the following error message:
/usr/bin/ld: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a(check.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a: could not read symbols: Bad value
This means I have to somehow let configure determine which library I need. I suspect I need libcheck_pic.a on Ubuntu and the regular libcheck.so on Fedora.
The question
Does anyone know how to integrate this into configure.ac and test/Makefile.am using libtool? I would prefer to stay in line with autotools way of life.
I couldn't find usable information using Google; there are a lot of questions out there about the difference between static and dynamic libraries, but that is not what I need.
I would much appreciate if anyone could point me in the right direction, or maybe even solved this already?
I suspect you're right that the library you want to use on the CI system is libcheck_pic.a, for its name suggests that the routines within are compiled as position-independent code, which is exactly what the error message you receive says you need.
One way to approach the problem, then, would be to use libcheck_pic if it is available, and to fall back to plain libcheck otherwise. That's not too hard to configure your Autotools-based build system to do. You then record the appropriate library name in an output variable, and use that in your (auto)make files.
Autoconf's AC_SEARCH_LIBS macro specifically serves this kind of prioritized library search requirement, but it has the side effect, likely unwanted in this case, of modifying the LIBS variable. You can nevertheless make it work. Something like this might do it, for example:
LIBS_save=$LIBS
AC_SEARCH_LIBS([ck_assert], [check_pic check], [
  # Optional: add a test to verify that the chosen lib really provides PIC code.
  # Set LIBCHECK to the initial substring of $LIBS up to but excluding the
  # first space, i.e. the -l option that AC_SEARCH_LIBS just prepended.
  LIBCHECK=${LIBS%% *}
], [
  # ... or maybe make it a warning, and disable your test suite when libcheck
  # is not available.
  AC_MSG_ERROR([A PIC version of libcheck is required])
])
AC_SUBST([LIBCHECK])
LIBS=$LIBS_save
I presume you know what to do with $(LIBCHECK) on the Make side.
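For instance, in test/Makefile.am, with a hypothetical test program name:

check_PROGRAMS = mytest
mytest_SOURCES = mytest.c
mytest_LDADD = $(LIBCHECK)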
As written, that has the limitation that if there is no PIC version of libcheck available then you won't find out until make or maybe make check. That's undesirable, and you could add Autoconf code to detect that situation if it is undesirable enough.
As an altogether different approach, you could consider building your tests statically (add -static to the appropriate *_LDFLAGS variable). Of course, this has the opposite problem: if a static version of the library is not available, then the build or tests fail. Also, it requires building a static version of your own code if you're not doing so already, and furthermore, it is the static version that will be tested.
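In Automake terms that would be something like (again with a hypothetical test name):

mytest_LDFLAGS = -static

Note that when Libtool drives the link, -static only affects libtool libraries; -all-static is the Libtool option that forces a fully static link.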
For greatest flexibility, you could consider combining those two approaches. You might set up a fallback from one to the other, or you might set up separate targets for testing static code and PIC code, and exercise whichever of them (possibly both) is supported by the libraries available on the build system.
When a custom library is used in the code, it requires the -l linker parameter:
gcc myprogram.c -lmylibrary
Is there a way to convince the MinGW linker to check header files and automatically find and link a library in the /lib folder? Or is there a reason why that would be a bad idea?
No.
The problem of looking at C source code and figuring out which libraries it uses is very hard. It feels kind of "AI complete" to me, which is why it's typically solved manually by the programmer pointing out the exact right libraries to satisfy the dependencies with.
Take mylibrary: it's easy to imagine a system with both mylibrary 1.x and 2.x installed, where some calls are named exactly the same. Now try to imagine a computer program capable of deducing which one you meant to link with. It's not possible, since only the programmer knows.
The pkg-config tool helps with the mechanics of what each library requires in order to be used, but it's still up to you to tell it (via the module name argument(s)) which exact libraries to use.
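For example (mylibrary stands in for the hypothetical module name from the question; real module names vary per library):

gcc myprogram.c $(pkg-config --cflags --libs mylibrary)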
On Windows, it's more or less common to create "proxy DLLs" which take the place of the original DLL and forward calls to it (after any additional actions as needed). You can read about it here and here, for example.
However, the shlib munging culture under Linux is quite different. It starts with the fact that LD_PRELOAD is a built-in feature of ld.so under Linux, which simply injects a separate shlib into the process and uses any symbols it defines as overrides. And that "injection" technique seems to define the whole direction of thought - here's a typical ELF hacking tool, or this question, where a gentleman seems to have the same use case as me, but starts by asking how he can patch existing binaries.
No, thanks. I don't want to inject into or modify something which is not mine. All I want is to make a standalone proxy shlib which will call out to the original. Ideally, there would be a tool which could be fed the original .so and would create C source code that just redirects to the original's functions, while letting me easily override anything I want. So, where's such a tool? ;-) Thanks.
Using LD_PRELOAD doesn't really involve modifying something which isn't yours, and the injection isn't all that different from normal dynamic library loading. The “typical ELF hacking tool” from the ERESI project is unrelated to LD_PRELOAD. You should not be afraid of it. A good introduction to writing LD_PRELOAD-able “proxies” is here.
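A minimal sketch of such an LD_PRELOAD-able proxy in C, where foo_draw and its signature are entirely hypothetical:

/* proxy.c - build with: gcc -shared -fPIC -o libwrapper-foo.so proxy.c -ldl */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int foo_draw(int x)
{
    /* Look up the "real" foo_draw in the next object in the search order. */
    int (*real_foo_draw)(int) = (int (*)(int))dlsym(RTLD_NEXT, "foo_draw");
    fprintf(stderr, "foo_draw(%d) intercepted\n", x);
    return real_foo_draw(x);
}

Test it with LD_PRELOAD=./libwrapper-foo.so program-that-uses-libfoo before moving on to the patchelf approach below.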
That being said, if you want to create a system-wide proxy for some library, you might argue that globally setting LD_PRELOAD (and thus loading your proxy into every binary that ever runs on your system) is undesirable. It is commonly used to override functions from glibc by tools such as libeatmydata or socksify, but if you're overriding a function in a library that is bigger and/or less widespread than glibc, it makes sense to try to find another approach, to really create a proxy for just that one library.
One such approach is to use patchelf --replace-needed or --add-needed to hardcode the full pathname of the original library, and then make sure the proxy library is found first by setting LD_LIBRARY_PATH¹. So the complete procedure is (a consolidated shell sketch follows the list):
create an LD_PRELOAD-able library that overrides some functions of the original one (test that it works using only LD_PRELOAD before proceeding further!)
compile and link this library with the original library so that ldd libwrapper-foo.so includes something like:
libfoo.so.0 => /usr/lib/x86_64-linux-gnu/libfoo.so.0 (0x0000deadbeef0000)
hardcode the full path using patchelf:
patchelf --replace-needed libfoo.so.0 /usr/lib/x86_64-linux-gnu/libfoo.so.0 libwrapper-foo.so
symlink libwrapper-foo.so to libfoo.so.0
now LD_LIBRARY_PATH=. ldd $(which program-that-uses-libfoo) should include these lines:
libfoo.so.0 => ./libfoo.so.0 (0x0000dead56780000)
/usr/lib/x86_64-linux-gnu/libfoo.so.0 (0x0000dead1234000000)
set LD_LIBRARY_PATH to full path to the wrapper library in your .bashrc or somewhere
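Consolidated as a shell sketch (library and program names follow the example above):

gcc -shared -fPIC -o libwrapper-foo.so proxy.c -ldl -lfoo
patchelf --replace-needed libfoo.so.0 /usr/lib/x86_64-linux-gnu/libfoo.so.0 libwrapper-foo.so
ln -s libwrapper-foo.so libfoo.so.0
LD_LIBRARY_PATH=$PWD program-that-uses-libfoo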
A real-life example of such a proxy library is my wrapper for libpango that enables subpixel positioning for all applications.
¹) It might also be possible to put this proxy library into /usr/local/lib, but ldconfig (the tool that updates shared libraries cache) refuses to use libraries with hardcoded absolute paths.
apitrace is a tool which does detailed tracing of graphics library (OpenGL, DirectX) calls on a number of platforms. It's probably too detailed and complex for a generic solution, but it at least provides some reference and affinity.