CMake and Make build reproducibility - C

I'm evaluating the use of CMake to generate Makefiles for embedded firmware. The CMakeLists.txt will be shared within the team.
Can you confirm that the Makefile cannot be shared between different computers?
Is this still true if the project path is identical on both computers?
Using CMake's Makefile generation and the same compiler version, will the generated binary be the same on all computers?
Is this the same behavior as a Makefile shared in the project?

Why would you share a generated Makefile anyway? You usually share the CMake files.
Can you confirm that the Makefile cannot be shared between different computers?
You should not share the Makefile. It is generated only for your machine and includes local information as well as the CMake (cached) options and state. There is no good reason to do this.
Is this still true if the project path is identical on both computers?
Yes, because CMake maintains a cache of settings, options, etc., so the Makefile may differ depending on paths, options, and state. You would also have to guarantee identical paths for every dependency.
Using CMake's Makefile generation and the same compiler version, will the generated binary be the same on all computers?
If the environment (compiler, libraries, …) and options (build type, project options, …) are the same, CMake will reliably produce the same binaries on all systems.
Is this the same behavior as a Makefile shared in the project?
No, CMake is much better: it's cross-platform. It doesn't depend on make; you can use any other build system (like Ninja or an IDE project) too, without touching your source or CMake code.
CMake does much more than just creating a Makefile. You can even compile a CMake-based project with several different compilers / cross-compilers without a single change.
TL;DR
Don't share generated Makefiles; share the CMake source files instead – that's what CMake is for.

In theory, if your project path, toolchain path, toolchain version, and the path to every external library used by the project are the same on different computers, then you can move the generated Makefile between them without regenerating. When you run make it might detect that things have changed and try to rerun CMake, though. I'm not sure why you'd want to do this, however; it seems like bad practice.
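For illustration, the file you actually share is something like this minimal CMakeLists.txt (project and file names here are hypothetical); each developer generates their own build tree from it rather than receiving a Makefile:

```cmake
# Hypothetical minimal CMakeLists.txt for an embedded firmware project.
cmake_minimum_required(VERSION 3.16)
project(firmware C)

add_executable(firmware
    src/main.c
    src/uart.c
)
target_include_directories(firmware PRIVATE include)
```

Each team member then runs cmake -S . -B build && cmake --build build locally, and the generated Makefile stays machine-specific inside their own build directory.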
If the compilers, libraries, and code are the same, the same code will be generated (unless your compiler has some sort of bug, or embeds non-deterministic data such as timestamps or build paths).

Related

Creating a standalone, relocatable build of postgres

For a small project I'm working on, I would like to create a “relocatable build” of PostgreSQL, similar to the binaries here. The idea is that you have PostgreSQL and all required libraries packaged so that you can just unpack it in any directory on any machine and it will run. I want the resulting build of Postgres to work on virtually any Linux machine it finds itself on.
I've made it as far as determining which libraries I need to build:
My understanding is that I should be getting the source code for these libraries (and their dependencies) and compiling them statically.
As things stand currently, my build script is quite barebones and obviously produces an install that is linked against whatever distribution it was run on:
./configure \
--prefix="${outputDir}" \
--with-uuid="ossp"
I'm wondering if anyone could outline what steps I must take to get the relocatable build that I'm after. My hunch right now is that I'm looking for guidance on what environment variables I would need to set and/or parameters I'd need to provide to my build in order to end up with a fully relocatable build of Postgres.
Please note: I don't normally work with C/C++, although I have several years of ./configure, make, and doing builds for other, much higher-level ecosystems under my belt. I'm well aware that distribution-specific releases of Postgres are widely available, to say nothing of the official Docker container. Please take the approach that I'm pursuing a concept in the spirit of research or exploration. I'm looking for a precise solution, not a fast one.
This answer is for Linux; this will work differently on different operating systems.
You can create a “relocatable build” of PostgreSQL if you build it with the appropriate “run path”. The documentation gives you these hints:
The method to set the shared library search path varies between platforms, but the most widely-used method is to set the environment variable LD_LIBRARY_PATH [...]
On some systems it might be preferable to set the environment variable LD_RUN_PATH before building.
The manual for ld tells you:
If -rpath is not used when linking an ELF executable, the contents of the environment variable LD_RUN_PATH will be used if it is defined.
It also tells you this about the run path:
The tokens $ORIGIN and $LIB can appear in these search directories. They will be replaced by the full path to the directory containing the program or shared object in the case of $ORIGIN, and either lib (for 32-bit binaries) or lib64 (for 64-bit binaries) in the case of $LIB.
See also this useful answer.
So the sequence of steps would be:
./configure --disable-rpath [other options]
export LD_RUN_PATH='$ORIGIN/../lib'
make
make install
Then you package the PostgreSQL binaries in the bin subdirectory and the shared libraries plus all required libraries (you can find them with ldd) in the lib subdirectory. The libraries will then be looked up relative to the binaries.
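A sketch of that packaging step, using /bin/ls as a stand-in for a PostgreSQL binary (in the real case you would copy the binaries installed under your --prefix); directory names are examples:

```shell
# Collect a binary plus the shared libraries it needs into bin/ and lib/.
mkdir -p pkg/bin pkg/lib
cp "$(command -v ls)" pkg/bin/    # stand-in for a Postgres binary
for bin in pkg/bin/*; do
    # ldd prints lines like "libc.so.6 => /lib/.../libc.so.6 (0x...)";
    # field 3 is the resolved path of each required library.
    ldd "$bin" | awk '/=> \//{print $3}' | while read -r lib; do
        cp -n "$lib" pkg/lib/ || true    # skip libraries already copied
    done
done
ls pkg/lib
```

With the run path set to $ORIGIN/../lib as above, the binaries in pkg/bin will find these copies before any system-wide libraries.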

Compile an entire C project instead of a few files

I have an entire library made in C. It has almost 10 folders with a lot of files.
I have created a filename.c file in the root folder and am trying to compile it on a Mac using gcc test.c -o test; however, it's not pulling in the other files. Generally I have to list all of them: gcc test.c libaudio.c -o test
How can I compile the entire project instead of just one file?
Makefiles will solve your problem. You can create your own rules to clean the project (remove the generated files), build the project (telling it where your compiler is, which source files to compile by path or extension, etc.), set the output path, and so on, without typing a long compilation command.
https://www.gnu.org/software/make/manual/make.html
Edit: There you will also be able to find how to add shared, static, or raw libraries to your project through Makefiles.
Use a Makefile. make, the utility that reads the configuration within the Makefile, will automate the running of the individual commands, so that you only need to name the item you wish to have rebuilt.
make myprogram
And make will use the dependency information stored in the Makefile's rules to determine what other elements are "out of date", rebuilding those and assembling them into myprogram.
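As an illustration only (target and file names are hypothetical, borrowed from the question), a minimal Makefile with such rules might look like:

```make
CC     = gcc
CFLAGS = -Wall -Iinclude

OBJS = test.o libaudio.o

# Link the objects into the final program.
myprogram: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# Generic rule: any .o is built from the .c of the same name.
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f myprogram $(OBJS)
```

Running make myprogram then rebuilds only the objects whose sources have changed.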
This is a decent "first time" tutorial for "make".
Here is the full blown documentation for "make"
Once you master the concepts within make, you can then use other tools that make maintaining Makefiles either easier, more portable, or both.
Some tools that improve upon "make" include "cmake", "automake", "the autotools collection", "scons", "waf", "rake", "doit", "ninja", "tup", "redo", and "sake". There are more, and some are programming-language specific, or limited to a particular environment.
The reason I recommend "make" over the others is because "make" is a baseline that will always be present, and the features in the other tools are often not understood or recognized to be needed until you get enough experience with "make".
In C, the concept of a project is not part of the language; it generally depends on the tools / platform / library you have to build.
On Linux-based platforms, you may have a Makefile describing the project, or the library may have a CMake script.
You should be able to find the build instructions in your library's documentation.
I definitely recommend the make approach as it is scalable.
If you really only have a couple of files, gcc will accept multiple .c files on the command line and link them all to generate one executable.

How to use CMake on a machine on which CMake is not installed

I am using CMake to build my project. However, I need to build this project on a machine on which I do not have permission to install any software. I thought I could use the generated Makefile, but it has dependencies on CMake and fails with cmake: command not found. Is there any solution that forces the generated Makefile not to contain any CMake-related commands, such as checking the system version? Thanks
Is there any solution that forces the generated Makefile not to contain any CMake-related commands, such as checking the system version?
No. There is no incentive for cmake to provide such an option, because the whole point of the cmake system is that the cmake program examines the build machine and uses what it finds to generate a Makefile (if you're using that generator) appropriate to the machine. The generated Makefiles are tailored to the machine, and it is not assumed that they would be suitable for any other machine, so there is no reason to suppose that one would need to use one on a machine that does not have cmake. In fact, if you look at the generated Makefiles you'll find all sorts of dependencies on cmake.
Depending on the breadth of your target machine types, you might consider the Autotools instead. Some people dislike them, and they're not a good choice if you want to support Microsoft's toolchain on Windows, but they do have the advantage that an Autotools-based build system can be used to build software on machines that do not themselves have the Autotools installed.
One easy solution is to use static libraries and the -static parameter on the command line.
Then you should be able to drop the executable on the target machine and run it.

How to work with external libraries when cross compiling?

I am writing some code for a Raspberry Pi ARM target on an x86 Ubuntu machine. I am using the gcc-linaro-armhf toolchain. I am able to cross compile and run some independent programs on the Pi. Now, I want to link my code with an external library such as ncurses. How can I achieve this?
Should I just link my program with the existing ncurses lib on host machine and then run on ARM? (I don't think this will work)
Do I need to get source or prebuilt version of lib for arm, put it in my lib path and then compile?
What is the best practice in this kind of situation?
I also want to know how it works for the C stdlib. In my program I used the stdio functions, and they worked after cross compiling without doing anything special. I just provided the path to my ARM gcc in the Makefile. So, I want to know: how did it get the correct standard headers and libs?
Regarding your general questions:
Why the C library works:
The C library is part of your cross toolchain. That's why the headers are found and the program correctly links and runs. This is also true for some other very basic system libraries like libm and libstdc++ (not in every case, depends on the toolchain configuration).
In general, when dealing with cross-development you need some way to get your desired libraries cross-compiled. Using prebuilt binaries in this case is very rare. That is especially true with ARM hardware, because there are so many different configurations and everything is often stripped down in different ways. That's why binaries tend not to be compatible between different devices and Linux configurations.
If you're running Ubuntu on the Raspberry Pi then there is a chance that you may find a suitable ncurses library on the internet or even in some Ubuntu apt repository. The typical way, however, will be to cross compile the library with the specific toolchain you have got.
In cases when a lot and complex libraries need to be cross-compiled there are solutions that make life a bit easier like buildroot or ptxdist. These programs build complete Linux kernels and root file systems for embedded devices.
In your case, however, as long as you only want ncurses, you can compile the source code yourself. You just need to download the sources and run configure while specifying your toolchain using the --host option. The --prefix option chooses the installation directory. After running make and make install, assuming everything went fine, you will have a set of headers and the ARM-compiled library for your application to link against.
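As a sketch, that command sequence might look like the following (the ncurses version, toolchain prefix, and install path are hypothetical examples; adjust them for your setup):

```shell
tar xf ncurses-6.4.tar.gz
cd ncurses-6.4
./configure \
    --host=arm-linux-gnueabihf \
    --prefix="$HOME/rpi/staging"
make
make install
# Later, link your application against the staged copy:
#   arm-linux-gnueabihf-gcc app.c \
#       -I"$HOME/rpi/staging/include" -L"$HOME/rpi/staging/lib" -lncurses
```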
Regarding cross compilation you will surely find loads of information on the internet and maybe ncurses has got some pointers in its shipped documentation, too.
Regarding the query of how the C library works in cross-tools:
When the cross toolchain is compiled and built, a sysroot is provided during configuration,
like --with-sysroot=${CLFS_CROSS_TOOLS}
--with-sysroot
--with-sysroot=dir
Tells GCC to consider dir as the root of a tree that contains (a subset of) the root filesystem of the target operating system. Target system headers, libraries and run-time object files will be searched for in there. More specifically, this acts as if --sysroot=dir was added to the default options of the built compiler. The specified directory is not copied into the install tree, unlike the options --with-headers and --with-libs that this option obsoletes. The default value, in case --with-sysroot is not given an argument, is ${gcc_tooldir}/sys-root. If the specified directory is a subdirectory of ${exec_prefix}, then it will be found relative to the GCC binaries if the installation tree is moved.
So instead of looking in /lib and /usr/include, it will look in the toolchain's libc and include directories when it is compiling.
You can check with:
arm-linux-gnueabihf-gcc -print-sysroot
which shows where it looks for libc. Also,
arm-linux-gnueabihf-gcc -print-search-dirs
gives you a clearer picture.
Clearly, you will need an ncurses compiled for the ARM that you are targeting - the one on the host will do you absolutely no good at all [unless your host has an ARM processor - but you said x86, so clearly not the case].
There MAY be some prebuilt libraries available, but I suspect it's more work to find one (that works and matches your specific conditions) than to build the library yourself from sources - it shouldn't be that hard, and I expect ncurses doesn't take that many minutes to build.
As to your first question: if you intend to use the ncurses library with your cross-compiler toolchain, you'll need its ARM-built binaries prepared.
Your second question is how it works with the std libs; well, it's really NOT the system libc/libm that the toolchain uses to compile/link your program. Maybe you'll see it from the --print-file-name= option of your compiler:
arm-none-linux-gnuabi-gcc --print-file-name=libm.a
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libm.a
arm-none-linux-gnuabi-gcc --print-file-name=libpthread.so
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libpthread.so
I think your Raspberry toolchain might be the same. You can try this out.
Vinay's answer is pretty solid. Just one correction: when compiling the ncurses library for the Raspberry Pi, the option to set your rootfs is --sysroot=<dir>, not --with-sysroot. That's what I found when I was using the following compiler:
arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) 4.8.3 20140303 (prerelease)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Do I really need to specify library location for linking with automake?

I am working on a multi-platform C program. The makefile has become pretty complicated because of all the different compilers, library locations, etc. on each machine. I figured Autoconf/Automake would be a good solution, though my previous experience using those tools was minimal.
My Makefile.am has the line LIBS=-lX11, but linking fails with the error "/usr/bin/ld: cannot find -lX11". I can work around this by adding "-L/usr/X11R6/lib/" to the definition of LIBS, but should I really have to do that? When I run ./configure, it says:
checking for X... libraries /usr/X11R6/lib, headers /usr/X11R6/include
So it seems like Automake should know where to find it. Is there a way I can reference its location in Makefile.am without having to hardcode it, since the X11 libs will be in a different place on each machine?
Your Makefile.am should not set LIBS. If you need to link with a library, configure.ac should include a check for the library, and the configure script will set LIBS accordingly. Also, Makefile.am should not specify paths to libraries. Each system should be configured so that the preprocessor can find the headers and the linker can find the libraries. If the system is not set up so that the linker can find a library, the correct solution is for the user to specify the location in LDFLAGS rather than hard-coding something in Makefile.am. (E.g., rather than adding -L/p/a/t/h to a Makefile, you should add LDFLAGS=-L/p/a/t/h to the invocation of configure.)
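A minimal sketch of the configure.ac side (project name and version are hypothetical). AC_PATH_XTRA is the stock Autoconf macro that produces the "checking for X..." line you quoted, and it exports the discovered paths as variables:

```m4
AC_INIT([myprog], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_PATH_XTRA            dnl finds X11; sets X_CFLAGS, X_LIBS, etc.
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

Then, in Makefile.am, refer to those variables instead of hard-coding a path:
myprog_CPPFLAGS = $(X_CFLAGS)
myprog_LDADD = $(X_LIBS) $(X_PRE_LIBS) -lX11 $(X_EXTRA_LIBS)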
