How to use CMake on a machine where CMake is not installed

I am using CMake to build my project. However, I need to build the project on a machine where I do not have permission to install any software. I thought I could use the generated Makefile, but it depends on CMake and fails with "cmake: command not found". Is there any solution that forces the generated Makefile to contain no CMake-related commands, such as the system version check? Thanks

Is there any solution that forces the generated Makefile to contain no CMake-related commands, such as the system version check?
No. There is no incentive for cmake to provide such an option, because the whole point of the cmake system is that the cmake program examines the build machine and uses what it finds to generate a Makefile (if you're using that generator) appropriate to the machine. The generated Makefiles are tailored to the machine, and it is not assumed that they would be suitable for any other machine, so there is no reason to suppose that one would need to use one on a machine that does not have cmake. In fact, if you look at the generated Makefiles you'll find all sorts of dependencies on cmake.
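You can see this for yourself in any CMake-generated build directory (output varies by machine):
# expect hits such as "CMAKE_COMMAND = /usr/bin/cmake" and the
# cmake_check_build_system rule, which reruns cmake when inputs change
grep -n cmake Makefile | head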
Depending on the breadth of your target machine types, you might consider the Autotools instead. Some people dislike them, and they're not a good choice if you want to support Microsoft's toolchain on Windows, but they do have the advantage that an Autotools-based build system can be used to build software on machines that do not themselves have the Autotools installed.
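By contrast, building from an Autotools release tarball on the target machine needs only a shell, make, and a compiler, roughly like this (the tarball name and prefix are illustrative):
tar -xzf myproject-1.0.tar.gz && cd myproject-1.0
./configure --prefix="$HOME/myproject"   # no autoconf/automake needed on this machine
make
make install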

One easy solution is to build against static libraries and pass the -static flag on the link command line.
Then you should be able to drop the executable onto the target machine and run it.
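A minimal sketch of that idea, assuming a GNU toolchain and an executable target named myapp (both illustrative):
# pass -static through to the linker when generating the build tree
cmake -DCMAKE_EXE_LINKER_FLAGS="-static" ..
make
file myapp    # should report "statically linked"
Be aware that fully static linking against glibc has known caveats (e.g. NSS name resolution), so test the resulting binary on the target machine.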

Related

Creating a standalone, relocatable build of postgres

For a small project I'm working on, I would like to create a “relocatable build” of PostgreSQL, similar to the binaries here. The idea is that you have PostgreSQL and all required libraries packaged so that you can just unpack it in any directory on any machine and it will run. I want the resulting build of Postgres to work on virtually any Linux machine it finds itself on.
I've made it as far as determining which libraries I need to build:
My understanding is that I should be getting the source code for these libraries (and their dependencies) and compiling them statically.
As things stand currently, my build script is quite barebones and obviously produces an install that is linked against whatever distribution it was run on:
./configure \
--prefix="${outputDir}" \
--with-uuid="ossp"
I'm wondering if anyone could outline what steps I must take to get the relocatable build that I'm after. My hunch right now is that I'm looking for guidance on what environment variables I would need to set and/or parameters I'd need to provide to my build in order to end up with a fully relocatable build of Postgres.
Please note: I don't normally work with C/C++ although I have several years of ./configure, make and doing builds for other much higher level ecosystems under my belt. I'm well aware that distribution-specific releases of Postgres are widely available, to speak nothing of the official docker container. Please take the approach that I'm pursuing a concept in the spirit of research or exploration. I'm looking for a precise solution, not a fast one.
This answer is for Linux; this will work differently on different operating systems.
You can create a “relocatable build” of PostgreSQL if you build it with the appropriate “run path”. The documentation gives you these hints:
The method to set the shared library search path varies between platforms, but the most widely-used method is to set the environment variable LD_LIBRARY_PATH [...]
On some systems it might be preferable to set the environment variable LD_RUN_PATH before building.
The manual for ld tells you:
If -rpath is not used when linking an ELF executable, the contents of the environment variable LD_RUN_PATH will be used if it is defined.
It also tells you this about the run path:
The tokens $ORIGIN and $LIB can appear in these search directories. They will be replaced by the full path to the directory containing the program or shared object in the case of $ORIGIN, and either lib (for 32-bit binaries) or lib64 (for 64-bit binaries) in the case of $LIB.
See also this useful answer.
So the sequence of steps would be:
./configure --disable-rpath [other options]   # disable Postgres's own rpath so LD_RUN_PATH takes effect
export LD_RUN_PATH='$ORIGIN/../lib'           # single quotes keep $ORIGIN literal for the linker
make
make install
Then you package the PostgreSQL binaries in the bin subdirectory and the shared libraries plus all required libraries (you can find them with ldd) in the lib subdirectory. The libraries will then be looked up relative to the binaries.
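As a rough sketch of that packaging step, assuming ${outputDir} is the configure prefix from the question and the build used LD_RUN_PATH='$ORIGIN/../lib' as above (the pkg/ layout is illustrative):
# gather binaries and every shared library they need into a relocatable tree
mkdir -p pkg/bin pkg/lib
cp "${outputDir}"/bin/* pkg/bin/
cp "${outputDir}"/lib/*.so* pkg/lib/
for bin in pkg/bin/*; do
    # ldd lines look like "libxml2.so.2 => /usr/lib/libxml2.so.2 (0x...)"
    ldd "$bin" | awk '/=> \//{print $3}' | while read -r lib; do
        cp -n "$lib" pkg/lib/      # -n: don't overwrite already-copied libs
    done
done
# sanity check: the embedded run path should contain $ORIGIN/../lib
readelf -d pkg/bin/postgres | grep -iE 'rpath|runpath'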

cmake and make build reproducibility

I'm evaluating the use of CMake to generate Makefiles for embedded firmware. The CMakeLists.txt will be shared within the team.
Can you confirm the Makefile cannot be shared between different computers?
Is this still true if the project path is identical on both computers?
Using CMake Makefile generation and the same version of the compiler, will the generated binary be the same on all computers?
Is this the same behavior as a Makefile shared in the project?
Why would you share a generated Makefile anyway!? You usually share the CMake files.
Can you confirm the Makefile cannot be shared between different computers?
You should not share the makefile. It's generated only for you and includes local information as well as the cmake (cached) options and state. There's no serious reason to actually do this!
Is this still true if the project path is identical on both computers?
Yes, because CMake maintains a cache of settings, options, etc., so the Makefile may differ depending on paths, options, and state. You also have to guarantee the paths for any dependency.
Using CMake Makefile generation and the same version of the compiler, will the generated binary be the same on all computers?
If the environment (compiler, libraries, …) and the options (build type, project options, …) are the same, CMake will reliably produce exactly the same binaries on all systems.
Is this the same behavior as a Makefile shared in the project?
No, CMake is much better: it's cross-platform. It doesn't depend on make; you can use any other build system (like Ninja or an IDE project) too – without touching your source or CMake code.
CMake does much more than just creating a Makefile. You can even compile a CMake-based project with several different compilers/cross-compilers without a single change.
TL;DR
Don't share generated Makefiles, share the CMake source files instead – that's what CMake is for.
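In practice, the shared workflow looks roughly like this on each machine (the build directory name is arbitrary):
# every machine generates its own build tree from the shared CMake files
mkdir build && cd build
cmake ..    # examines this machine and writes a Makefile tailored to it
make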
In theory, if your project path, toolchain path, toolchain version, and the path to every external library used by the project are the same on different computers, then you can move the generated Makefile between them without regenerating. When you run make it might detect that things have changed and try to rerun CMake, though. I'm not sure why you'd want to do this, however; it seems like kind of bad practice.
If the compilers, libraries, and code are the same, the same code will get generated (unless your compiler has some sort of bug).

GNU configure options for binutils, gcc & glibc

I am trying to build an alternative compilation suite on my debian-testing machine (sorry, real question is actually at bottom).
Technically it is a "cross-compilation" because I need to use this toolchain on another machine, but the hardware is compatible (x86_64-unknown-linux-gnu) so I don't need to bother with build/host/target differences.
On the other hand I do need to worry about prefix/sysroot because I cannot install to any standard location (to be more precise: I could install anywhere, since I have root access there, but I shouldn't). This leaves me with my $HOME, some completely non-standard place (e.g. /usr/local/my/toolchain), or some semi-standard place (e.g. /opt). In any case I will need to do something to enable compilation to find includes and libs in such places, and the runtime linker to find the needed .so files.
My requirements are:
I have a running Linux system that shouldn't be messed with.
This system does not have a "C" compiler.
Said Linux is BusyBox-based, so I will need a substantial number of utilities to do any serious compiling there, including make, sed, awk, ..., besides the compiler proper.
I would be happy to stuff my augmented toolchain into /opt, but that is not a requirement; any place is OK as long as it's accessible by more than a single user. I would like to avoid installing in $HOME.
I am aware of "optware", I installed it and it does work... up to a point. Unfortunately:
It's really old software
it's only 32bit (my system is Linux syno0 3.2.40 #5004 SMP Thu Nov 6 15:26:44 CST 2014 x86_64 GNU/Linux).
Some programs won't compile because provided libs have 32/64 mismatch.
The real motivation for this whole exercise is that I need to install some Perl modules needed for one application that will have to run there, and to install them from CPAN I need a native compiler (and other stuff, of course).
Similar arguments apply to a Ruby-on-Rails application I should port there.
If at all possible I should try to use the "native" libs in /lib:/lib64:/usr/lib:/usr/lib64:/usr/lib32 ("static" .a libs are not available).
I had limited success preparing a custom tarball from an available toolchain for my processor, relocating it to /opt, stuffing the needed apps into its sysroot, and compiling with CPPFLAGS="-I/opt/include" and LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib".
This enables me to build almost everything "LFS-style", but it's rather error-prone and 64-bit-only.
I seem to understand it should be possible to automate all this with a careful mix of --prefix, --with-sysroot, --with-native-system-header-dir, --enable-multilib and their friends.
I tried to understand exactly how they should be used and failed, for one reason or another. I didn't find any exhaustive documentation, and the information in the GCC installation docs is confusing me.
Can someone, please, give me a recipe to build this toolchain?
Any pointer to in-depth documentation welcome, but I suspect some tutoring will be necessary.
I assume recompilation of binutils and GCC is mandatory; glibc is probably not needed; anything else can be recompiled "native" on the target.
TiA
ZioByte
After installing your toolchain in a nonstandard place you need to set the environment (maybe system-wide) correctly for GCC using LIBRARY_PATH and C_INCLUDE_PATH or CPLUS_INCLUDE_PATH.
Environment Variables Affecting GCC
I see three ways to automate setting the path variables for your relocatable toolchain:
On every relocation, add your GCC path to your PATH environment variable and create an alias in your BusyBox profile (usually /etc/profile).
Alias example:
# derive the toolchain prefix from the location of gcc on $PATH,
# then point GCC at the toolchain's own libs and headers
alias gcc='TOOLCHAIN_PREFIX=$(which gcc | rev | cut -d"/" -f3-10 | rev); \
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ gcc'
Create a launcher script for your toolchain that calculates the paths itself. You would have to launch it via its direct path, setting that path when you launch the build process, or of course you can add its location to the PATH environment variable.
Script example:
#!/bin/sh
# derive the toolchain prefix from this script's own location,
# then invoke the real compiler with the toolchain's libs and headers
TOOLCHAIN_PREFIX=$(echo "$0" | rev | cut -d"/" -f3-10 | rev)
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ \
"$TOOLCHAIN_PREFIX"/bin/gcc-4.* "$@"
The most reliable and ergonomic way is to create an install/uninstall script that unpacks the toolchain and sets the environment correctly; to relocate the toolchain you uninstall it from one prefix and install it into another. If you have dpkg on your debian-testing system, a .deb package is the best choice.
I can see no way to set the environment fully automatically, but we can reduce it to setting just one path: the path of the toolchain.
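For illustration only, such an install script might look like this (toolchain.tar.gz, the default prefix, and appending to /etc/profile are all assumptions; a real script would guard against duplicate entries and provide an uninstall counterpart):
#!/bin/sh
# hypothetical installer: unpack the toolchain and export its paths at login
PREFIX=${1:-/opt/mytoolchain}            # illustrative default prefix
mkdir -p "$PREFIX"
tar -xzf toolchain.tar.gz -C "$PREFIX"   # placeholder archive name
cat >> /etc/profile <<EOF
export PATH="$PREFIX/bin:\$PATH"
export LIBRARY_PATH="$PREFIX/lib"
export C_INCLUDE_PATH="$PREFIX/include"
EOF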
Hint: for better stability you should isolate your toolchain, and also install the Linux kernel headers and glibc into your prefix.

Cross build third-party library locations on Linux

I've been cross-compiling my unit tests to ensure they pass on all the platforms of interest, e.g. x86-linux, win32, win64, arm-linux.
The unit tests require the CUnit library, so I've had to cross-compile that too for each platform.
CUnit comes with its own autoconf setup, so you can easily cross-build it by specifying --host for configure.
The question I have is where is the 'correct' place to have the CUnit libs installed for the various platforms? i.e. what should I set --prefix to for configure?
My initial guess was:
/usr/local/<platform>/lib/Cunit
i.e. setting --prefix /usr/local/<platform>
e.g. --prefix /usr/local/arm-linux-gnueabihf
which on sudo make install gives you:
/usr/local/arm-linux-gnueabihf/doc/CUnit
/usr/local/arm-linux-gnueabihf/include/CUnit
/usr/local/arm-linux-gnueabihf/lib
/usr/local/arm-linux-gnueabihf/share/CUnit
Obviously, if I don't specify a prefix for configure, each platform build overwrites the previous one, which is no good.
To then successfully link to these platform-specific libs I need to specify the relevant lib dir for each target in its own LDFLAGS in the Makefile.
Is this the right approach? Have I got the dir structure/location right for this sort of cross-build stuff? I assume there must be a de facto approach, but I'm not sure what it is.
Possibly configure is supposed to handle all this for me? Maybe I just have to set --target correctly, and perhaps --enable-multilib, all with --prefix=/usr/local?
Some of the error messages I get suggest /usr/lib/gcc-cross might be involved?
From reading more about cross-compilation and the GNU configure and build system, it seems that I should just be setting the --target option for the configure step.
But how do you know what the target names are? Are they some fragment of the cross-compiler names?
The 3 cross compilers I am using are:
arm-linux-gnueabihf-gcc-4.8
i686-w64-mingw32-gcc
x86_64-w64-mingw32-gcc
allowing me to cross-compile for ARM, win32 and win64
My host is 32-bit Ubuntu, which I think might be --host i386-linux, but it seems that configure should get this right by default.
This is the procedure I finally figured out and got to work.
For each of my three cross-build tools (arm, win32, win64) my calls to configure looked like:
./configure --host=arm-linux-gnueabihf --build=i686-pc-linux-gnu --prefix=/usr/local/arm-linux-gnueabihf
./configure --host=i686-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/i686-w64-mingw32
./configure --host=x86_64-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/x86_64-w64-mingw32
Each of these was followed by make and sudo make install.
Prior to calling configure for the ARM cross-build I had to do:
ln -s /usr/bin/arm-linux-gnueabihf-gcc-4.8 /usr/bin/arm-linux-gnueabihf-gcc
This was because the compiler had -4.8 tagged on the end, so configure could not correctly 'guess' the name of the compiler.
This issue did not apply to either the win32 or win64 MinGW compilers.
Note an additional gotcha: when subsequently trying to link to these cross-compiled CUnit libs, none of the cross-compilers seemed to look in /usr/local/include by default, so I had to manually add:
-I/usr/local/include
for each object file built
(e.g. I added /usr/local/include to INCLUDE_DIRS in my Makefile).
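For the ARM target, for example, the resulting compile and link steps look something like this (source and binary names are illustrative; CUnit itself links as -lcunit):
# compile the test objects, then link against the ARM build of CUnit
arm-linux-gnueabihf-gcc -I/usr/local/include -c test_suite.c
arm-linux-gnueabihf-gcc test_suite.o -o test_suite \
    -L/usr/local/arm-linux-gnueabihf/lib -lcunit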
All this finally seems to have given me correctly cross-built CUnit libs, and I have successfully linked against them to produce cross-built unit-test binaries for each of the target platforms.
Not at all easy, and I would venture to call the configure option settings 'counter-intuitive'. As ever, it is worth taking the time to read the relevant docs; this snippet was pertinent:
There are three system names that the build knows about: the machine you are building on (build), the machine that you are building for (host), and the machine that GCC will produce code for (target). When you configure GCC, you specify these with --build=, --host=, and --target=.
Specifying the host without specifying the build should be avoided, as configure may (and once did) assume that the host you specify is also the build, which may not be true.
If build, host, and target are all the same, this is called a native. If build and host are the same but target is different, this is called a cross. If build, host, and target are all different this is called a canadian (for obscure reasons dealing with Canada's political parties and the background of the person working on the build at that time). If host and target are the same, but build is different, you are using a cross-compiler to build a native for a different system. Some people call this a host-x-host, crossed native, or cross-built native.
and also:
When people configure a project with './configure', one often meets these three confusing options, which are related to cross-compilation:
--host: the system on which the generated program will run.
--build: the system on which the program will be built.
--target: this option is only used when building a cross-compiling toolchain; it names the system for which the toolchain will generate executables.
An example with tslib (a touchscreen access library):
'./configure --host=arm-linux --build=i686-pc-linux-gnu': the
dynamic library is built on an x86 Linux computer but will be used
on an embedded ARM Linux system.

Install Clang as User (no Root Privileges)?

I have access to a shell account at University as a user but with no root privileges. The server is running Ubuntu 8.04 - Hardy. I wish to use Clang as my C compiler for next semester's Unix programming course. GCC is installed but not Clang, and the University's IT dept has, as expected, declined to install Clang on the system.
Is it possible to run Clang from my home directory as user? Presumably I would need to compile from source. I need it to compile only C. I don't need C++ or Obj C for this course.
You can use the autotools installation method by running ./configure --prefix=$HOME (or some subdirectory of your home directory if you prefer), or use the CMake build with CMAKE_INSTALL_PREFIX set to some directory under your home. The former is documented here; merely add the --prefix flag to the configure step and run 'make install' at the end.
Once installed, put the bin subdirectory of whatever prefix you used into your PATH environment variable, and you should be good-to-go. This is actually the way I use Clang regularly as a developer of Clang and LLVM.
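Concretely, that route looks roughly like this (the $HOME/clang prefix is just an example):
./configure --prefix="$HOME/clang"
make
make install
export PATH="$HOME/clang/bin:$PATH"   # add this line to ~/.profile to persist it
clang --version                       # confirms the user-local install is found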
For reference, this is definitely a mode of installation and use that we (Clang developers) want to support. If you run into issues, don't hesitate to file bugs or reach out for support on our email lists or IRC channel (#llvm on irc.oftc.net).
With free software, you can always configure it and (if needed) patch and improve it to suit your needs. However, building a compiler (be it GCC or Clang) requires a lot of resources (several gigabytes of disk space, plus RAM & CPU time) and some of your time and effort.
Clang building and installation is documented here. I guess that its configure script - assuming it is similar to GCC's - accepts arguments like --prefix (which you could e.g. set to $HOME/pub). You might also need to build the required dependencies.
As the project appears to use autotools you can alter the installation destination with command line parameters to the configure program (e.g. --prefix=$HOME/clang). Running ./configure --help and reading the INSTALL text file will give you more details.
If not already installed, you also need to build LLVM (the Low Level Virtual Machine), Clang's parent project, as well. Installation instructions for both are available at the Clang website.
