Install Clang as User (no Root Privileges)?

I have access to a shell account at University as a user but with no root privileges. The server is running Ubuntu 8.04 - Hardy. I wish to use Clang as my C compiler for next semester's Unix programming course. GCC is installed but not Clang, and the University's IT dept has, as expected, declined to install Clang on the system.
Is it possible to run Clang from my home directory as user? Presumably I would need to compile from source. I need it to compile only C. I don't need C++ or Obj C for this course.

You can use the autotools installation method by running ./configure --prefix=$HOME (or some subdirectory of your home directory, if you prefer), or use the CMake build with CMAKE_INSTALL_PREFIX set to a directory under your home. The former is documented here; just add the --prefix flag to the configure step and run 'make install' at the end.
Once installed, put the bin subdirectory of whatever prefix you used into your PATH environment variable, and you should be good to go. This is actually the way I use Clang regularly as a developer of Clang and LLVM.
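A minimal sketch of the whole sequence, assuming a $HOME/clang prefix (any directory you can write to works):
./configure --prefix=$HOME/clang
make
make install
# make the freshly installed binaries visible to your shell
export PATH=$HOME/clang/bin:$PATH
Putting the export line in your shell's startup file (e.g. ~/.bashrc) makes it persistent across logins.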
For reference, this is definitely a mode of installation and use that we (Clang developers) want to support. If you run into issues, don't hesitate to file bugs or reach out for support on our email lists or IRC channel (#llvm on irc.oftc.net).

With free software, you can always configure it and (if needed) patch and improve it to suit your needs. However, building a compiler (be it GCC or Clang) requires a lot of resources: several gigabytes of disk space, plus RAM and CPU time, and some of your own time and effort.
Building and installing Clang is documented here. I guess that its configure script - assuming it is similar to GCC's - accepts arguments like --prefix (which you could, e.g., set to $HOME/pub). You might also need to build the required dependencies.

As the project appears to use autotools, you can alter the installation destination with command-line parameters to the configure program (e.g. --prefix=$HOME/clang). Running ./configure --help and reading the INSTALL text file will give you more details.
If it is not already installed, you also need to build LLVM (Low Level Virtual Machine), the parent project. Installation instructions for both are available at the Clang website.
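For reference, the checkout-and-build sequence from the Clang "Getting Started" docs of that era looked roughly like this (the SVN URLs reflect the project layout at the time and have since changed; the prefix is just an example):
svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
cd llvm/tools
svn co http://llvm.org/svn/llvm-project/cfe/trunk clang
cd ..
./configure --prefix=$HOME/clang
make && make install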

Related

How to use cmake on a machine where cmake is not installed

I am using cmake to build my project. However, I need to build the project on a machine on which I do not have permission to install any software. I thought I could use the generated makefile, but it has dependencies on CMake and fails with "cmake: command not found". Is there any solution that forces the generated makefile not to contain any cmake-related commands, such as checking the system version? Thanks
Is there any solution that forces the generated makefile not to contain any cmake-related commands, such as checking the system version?
No. There is no incentive for cmake to provide such an option, because the whole point of the cmake system is that the cmake program examines the build machine and uses what it finds to generate a Makefile (if you're using that generator) appropriate to the machine. The generated Makefiles are tailored to the machine, and it is not assumed that they would be suitable for any other machine, so there is no reason to suppose that one would need to use one on a machine that does not have cmake. In fact, if you look at the generated Makefiles you'll find all sorts of dependencies on cmake.
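As a quick illustration, grepping a generated Makefile shows how deeply it is tied to the cmake binary (the exact lines vary by CMake version and project, so treat this output as representative only):
$ grep -m 3 cmake Makefile
CMAKE_COMMAND = /usr/bin/cmake
RM = /usr/bin/cmake -E remove -f
$(CMAKE_COMMAND) -H$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) --check-build-system CMakeFiles/Makefile.cmake 0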
Depending on the breadth of your target machine types, you might consider the Autotools instead. Some people dislike them, and they're not a good choice if you want to support Microsoft's toolchain on Windows, but they do have the advantage that an Autotools-based build system can be used to build software on machines that do not themselves have the Autotools installed.
One easy solution is to link with static libraries by passing -static on the command line.
Then you should be able to drop the executable onto the target machine and run it.
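A minimal sketch of that approach (file names are illustrative):
# build a fully static binary on the machine that has the toolchain
gcc -static -o myprog main.c
# confirm there are no runtime .so dependencies before copying it over
file myprog    # should report "statically linked"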

GNU configure options for binutils, gcc & glib

I am trying to build an alternative compilation suite on my debian-testing machine (sorry, real question is actually at bottom).
Technically it is a "cross-compilation", because I need to use this toolchain on another machine, but the hardware is compatible (x86_64-unknown-linux-gnu), so I don't need to bother about build/host/target differences.
On the other hand, I do need to worry about prefix/sysroot, because I cannot install in any standard location (to be more precise: I could install anywhere, since I have root access there, but I shouldn't). This leaves me with my $HOME, some completely non-standard place (e.g. /usr/local/my/toolchain) or some semi-standard place (e.g. /opt). In any case I will need to do something so that compilation finds includes and libs in such places and the runtime linker finds the needed .so files.
My requirements are:
I have a running Linux that shouldn't be messed with.
This system does not have a "C" compiler.
Said Linux is BusyBox-based, so I will need a substantial number of utilities to do any serious compiling there, including make, sed, awk, ..., besides the compiler proper.
I would be happy to stuff my augmented toolchain into /opt, but that is not a requirement; any place is fine as long as it's accessible by more than a single user. I would like to avoid installing in $HOME.
I am aware of "optware", I installed it and it does work... up to a point. Unfortunately:
It's really old software.
It's 32-bit only (my system is Linux syno0 3.2.40 #5004 SMP Thu Nov 6 15:26:44 CST 2014 x86_64 GNU/Linux).
Some programs won't compile because the provided libs have a 32/64-bit mismatch.
The real motivation for this whole exercise is that I need to install some Perl modules needed for an application that will have to run there, and to install them from CPAN I need a native compiler (and other stuff, of course).
Similar arguments apply to a Ruby on Rails application I should port there.
If at all possible I should try to use the "native" libs in /lib:/lib64:/usr/lib:/usr/lib64:/usr/lib32 (static .a libs are not available).
I had limited success preparing a custom tarball from an available toolchain for my processor, relocating it to /opt, stuffing the needed apps into its sysroot and compiling with CPPFLAGS="-I/opt/include" and LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib".
This lets me build almost everything "LFS-style", but it's rather error-prone and 64-bit only.
I seem to understand it should be possible to automate all this with a careful mix of --prefix, --with-sysroot, --with-native-system-header-dir, --enable-multilib and their friends.
I tried to understand exactly how they should be used and failed, for one reason or another. I didn't find any exhaustive documentation, and the information in the GCC installation docs confuses me.
Can someone, please, give me a recipe to build this toolchain?
Any pointer to in-depth documentation welcome, but I suspect some tutoring will be necessary.
I assume recompiling Binutils and GCC is mandatory; glibc is probably not needed; anything else can be recompiled "natively" on the target.
TiA
ZioByte
After installing your toolchain in non-standard places you need to set the environment (maybe system-wide) correctly for GCC, using LIBRARY_PATH and C_INCLUDE_PATH or CPLUS_INCLUDE_PATH.
Environment Variables Affecting GCC
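For example, if the toolchain is unpacked under /opt/toolchain (an illustrative path), a session could be prepared with:
export PATH=/opt/toolchain/bin:$PATH
export LIBRARY_PATH=/opt/toolchain/lib
export C_INCLUDE_PATH=/opt/toolchain/include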
I see three ways to automate setting the path variables for your relocatable toolchain:
On every relocation, add your GCC path to your PATH environment variable, and create an alias in your BusyBox profile (usually /etc/profile).
Alias example:
# derive the prefix by stripping the trailing /bin/gcc from the compiler's path
alias gcc='TOOLCHAIN_PREFIX=$(dirname "$(dirname "$(which gcc)")"); \
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ gcc'
Create a launcher script for your toolchain that calculates the paths itself. You then have to launch it by its full path, set that path when you launch the build process, or of course add its location to your PATH environment variable.
Script example:
#!/bin/sh
# derive the toolchain prefix from this script's own location
# (the script is expected to live in <prefix>/bin/)
TOOLCHAIN_PREFIX=$(dirname "$(dirname "$0")")
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ \
exec "$TOOLCHAIN_PREFIX"/bin/gcc-4.* "$@"
The most reliable and ergonomic way is to create an install/uninstall script that unpacks the toolchain and sets the environment correctly; to relocate the toolchain, you uninstall it from one prefix and install it to another. If you have dpkg on your debian-testing system, a .deb package is the best choice.
I can see no way to set the environment fully automatically, but we can reduce the problem to setting just one path: the toolchain prefix.
Hint: for better stability you should isolate your toolchain and also install the Linux kernel headers and glibc under your prefix.

Cross build third-party library locations on Linux

I've been cross-compiling my unit tests to ensure they pass on all the platforms of interest, e.g. x86-linux, win32, win64, arm-linux.
The unit tests require the CUnit library, so I've had to cross-compile that for each platform as well.
CUnit comes with its own autoconf machinery, so you can easily cross-build it by specifying --host to configure.
The question I have is: where is the 'correct' place to have the CUnit libs installed for the various platforms? I.e., what should I set --prefix to for configure?
My initial guess was:
/usr/local/<platform>/lib/Cunit
i.e. setting --prefix /usr/local/<platform>
e.g. --prefix /usr/local/arm-linux-gnueabihf
which on sudo make install gives you:
/usr/local/arm-linux-gnueabihf/doc/CUnit
/usr/local/arm-linux-gnueabihf/include/CUnit
/usr/local/arm-linux-gnueabihf/lib
/usr/local/arm-linux-gnueabihf/share/CUnit
Obviously, if I don't specify a prefix for configure, each platform build overwrites the previous one, which is no good.
To then successfully link to these platform-specific libs, I need to specify the relevant lib dir for each target in its own LDFLAGS in the Makefile.
Is this the right approach? Have I got the dir structure/location right for this sort of cross-build stuff? I assume there must be a de facto approach, but I'm not sure what it is.
Possibly configure is supposed to handle all this stuff for me? Maybe I just have to set --target correctly, and perhaps --enable-multilib? All with --prefix=/usr/local?
Some of the error msgs I get suggest /usr/lib/gcc-cross might be involved?
From reading more about cross-compilation and the GNU configure and build system, it seems that I should just be setting the --target option for the configure step.
But how do you know what the target names are? Are they some fragment of the cross-compiler names?
The 3 cross compilers I am using are:
arm-linux-gnueabihf-gcc-4.8
i686-w64-mingw32-gcc
x86_64-w64-mingw32-gcc
allowing me to cross-compile for ARM, win32 and win64
My host is 32-bit Ubuntu, which I think would be --host i386-linux, but it seems that configure should get this right by default.
This is the procedure I finally figured out and got to work:
For each of my 3 cross-build tools (ARM, win32, win64), my calls to configure looked like:
./configure --host=arm-linux-gnueabihf --build=i686-pc-linux-gnu --prefix=/usr/local/arm-linux-gnueabihf
./configure --host=i686-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/i686-w64-mingw32
./configure --host=x86_64-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/x86_64-w64-mingw32
Each of these was followed by make and sudo make install.
Prior to calling configure for the ARM cross-build, I had to do:
ln -s /usr/bin/arm-linux-gnueabihf-gcc-4.8 /usr/bin/arm-linux-gnueabihf-gcc
This was because the compiler name had -4.8 tagged on the end, so configure could not correctly 'guess' the name of the compiler.
This issue did not apply to either the win32 or win64 MinGW compilers.
Note: an additional gotcha was that, when subsequently trying to link to these cross-compiled CUnit libs, none of the cross-compilers seemed to look in /usr/local/include by default, so I had to manually add:
-I/usr/local/include
for each object file built;
e.g. I added /usr/local/include to INCLUDE_DIRS in my Makefile.
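For reference, a hypothetical Makefile fragment for the ARM build, wiring in the include dir mentioned above and the lib dir implied by the --prefix used earlier (variable and file names are illustrative):
CC       = arm-linux-gnueabihf-gcc
CFLAGS  += -I/usr/local/include
LDFLAGS += -L/usr/local/arm-linux-gnueabihf/lib
LDLIBS  += -lcunit

unit_tests: unit_tests.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)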
All this finally seems to have given me correctly cross-built CUnit libs, and I have successfully linked against them to produce cross-built unit-test binaries for each of the target platforms.
Not at all easy, and I would venture to call the configure option settings 'counter-intuitive'. As ever, it is worth taking the time to read the relevant docs; this snippet was pertinent:
There are three system names that the build knows about: the machine
you are building on (build), the machine that you are building for
(host), and the machine that GCC will produce code for (target). When
you configure GCC, you specify these with --build=, --host=, and
--target=.
Specifying the host without specifying the build should be avoided, as
configure may (and once did) assume that the host you specify is also
the build, which may not be true.
If build, host, and target are all the same, this is called a native.
If build and host are the same but target is different, this is called
a cross. If build, host, and target are all different this is called a
canadian (for obscure reasons dealing with Canada's political party
and the background of the person working on the build at that time).
If host and target are the same, but build is different, you are using
a cross-compiler to build a native for a different system. Some people
call this a host-x-host, crossed native, or cross-built native.
and also:
When people configure a project with './configure', one often meets
these three confusing options, which mostly matter for
cross-compilation:
--host: the system on which the generated program will run.
--build: the system on which the program is built.
--target: only used when building a cross-compiling
toolchain; it names the system for which the generated toolchain
will itself produce executables.
An example with tslib (a touchscreen access library):
'./configure --host=arm-linux --build=i686-pc-linux-gnu': the
dynamic library is built on an x86 Linux computer but will be used
on an embedded ARM Linux system.

CUDA samples run but no nvcc found - Mint 15 64 bit

I have downloaded and run the CUDA 5.0 installer on my Mint 15 64-bit distro. After hours of agony adjusting / removing / installing packages, it was able to finish the installation - at least that's what it said.
I can run the CUDA samples, so I thought, hey, it's working. However, I just made a new .cu file and wanted to compile it, but I got "nvcc: command not found".
I have looked at a similar topic here, and they talk about an /opt/bin/ directory; however, on my machine there is no such directory. Does that mean it actually did not install? It tells me to install the nvidia cuda toolkit with apt-get, but I am not sure if I should do that.
Also, I said I ran the CUDA samples fine, but I have to run ldconfig /usr/local/cuda/lib64
before I can get them working. Is there a way to automate that?
Thanks
You need to add the bin directory of the nvcc compiler driver to your PATH (environment variable), and you need to add the appropriate lib directories to your LD_LIBRARY_PATH environment variable.
For an immediate test, this should be as simple as:
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/lib
These instructions should be presented to you at the completion of a successful CUDA toolkit install, but it seems your install method may have been roundabout.
To make this "automatic" you may want to investigate one of the methods to add these statements to a script run at login. For example, if you have a .bashrc file in your user's home directory, try editing that with the above commands. It should probably be sufficient to put the above commands at the very end of your ~/.bashrc file if you have one.
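As a concrete sketch (assuming the default /usr/local/cuda install location):
# append the two exports to ~/.bashrc so every new login shell picks them up
echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/lib' >> ~/.bashrc
source ~/.bashrc    # apply to the current shell as well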
Note that Linux Mint is not one of the officially supported CUDA distros, so your mileage may vary.

Compiling C Source with Makefile in Windows

I'm trying to compile a downloaded program in Windows. The program is usually run in Linux, but it is written to also build on Windows (the code has #if defined(_WIN32) blocks in it, and claims to work with the Borland free tools). When I try to use make from the command line, it tells me "Incorrect command line argument: -C". In the makefile, there are many lines that say "make -C" followed by a directory name. Does this syntax not work in Windows? What is the correct way to do this? Is there any way to compile this for native use in Windows with this makefile?
Windows itself doesn't come with a make utility. Microsoft does have a 'make' utility that comes with their development tools (such as Visual Studio, the Platform SDK, or the Windows Driver Kit) but it's called nmake.
You probably need GNU make to process those makefiles. You can get a copy for Windows here:
http://unxutils.sourceforge.net/
However, if the makefile isn't written to be able to be run on Windows, it'll probably not work well. You'll also need to make sure you have whatever other development tools the makefile calls upon (maybe the Borland compiler or GCC), and there may be other configuration that needs to be done specific to the project you want to build. It's probably not a matter of just having the correct make utility.
-C ("change working directory") is only understood by GNU make (gmake). You should take a look at the manual for your make utility and see whether it supports something equivalent.
Peter
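To illustrate what the flag does in GNU make (the directory name is arbitrary):
# build in the subdirectory 'src'
make -C src
# roughly equivalent to:
cd src && make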
Are you using Cygwin?
Are there any instructions for installing on Windows (perhaps in a README file)?
