Advice on building a cross-compiler for Xscale ARM? - arm

I am playing around with a PXA270 XScale development board (similar to the Gumstix) and was provided a cross-compiler, but it is GCC 3.3.3. I would like to learn how to build my own cross-compiler so I can customize the setup, but I have had trouble getting crosstool and crosstool-NG to successfully build a toolchain. My main needs are GCC 4.2.x and the ability to use soft float. I am running Ubuntu 9. Does anyone have any recommendations or advice on building a toolchain for such a system?
Thanks in advance,
Ben

www.gnuarm.com has instructions for building your own ARM cross compiler as well as binaries available for download. They don't have GCC 4.2.x there, but I've built it using steps pretty similar to those instructions without too many problems.
Why do you want software floating point? It's going to be really slow; most applications only really need to use a fixed point implementation (read: integers).

Short answer: very difficult. Longer answer: keep trying; you may stumble upon a working combination, but likely not. XScale with hard float is more likely to build, and then just don't use any floating point. I know I tried many combinations and failed. There is a reason the combination you are looking for normally uses the older gcc: it was the last one to work. You might look at CodeSourcery to see what they have; using their tools or learning what they are up to is likely your best bet.

I used Dan Kegel's crosstool for creating my arm cross toolchain. It took a few tries, but I was eventually able to get it right.
I recommend reviewing the matrix of build results for various architectures to help determine a suitable combination of gcc, glibc, binutils, and linux kernel headers.
The following is the script that I used to create my ARM cross toolchain. I realize that my requirements are a bit different from yours, but you may be able to modify it to suit your needs.
#!/bin/bash
# bash is needed for pushd/popd below; on Ubuntu, /bin/sh is dash.
set -ex
# Extract crosstool
tar zxf crosstool-0.43.tar.gz
ln -sf crosstool-0.43 crosstool
# Create .dat file for toolchain
cat << EOF > $HOME/arm-cross.dat
BINUTILS_DIR=binutils-2.15
GCC_DIR=gcc-3.4.5
GCC_EXTRA_CONFIG=--with-float=soft
GCC_LANGUAGES=c,c++
GLIBC_ADDON_OPTIONS==linuxthreads,
GLIBC_DIR=glibc-2.3.6
GLIBC_EXTRA_CONFIG=--without-fp
GDB_DIR=gdb-6.5
KERNELCONFIG="\$HOME/crosstool/arm.config"
LINUX_DIR=linux-2.6.12.6
LINUX_SANITIZED_HEADER_DIR=
SHARED_MODE=--enable-shared
TARGET=arm-softfloat-linux-gnu
TARGET_CFLAGS=-O
BUILD_DIR="\$HOME/crosstool/build/\$TARGET/\$GCC_DIR-\$GLIBC_DIR"
PREFIX="/usr/crossgnu/\$GCC_DIR-\$GLIBC_DIR/\$TARGET"
SRC_DIR="\$HOME/crosstool/build/\$TARGET/\$GCC_DIR-\$GLIBC_DIR"
TARBALLS_DIR="\$HOME/downloads"
TOP_DIR="\$HOME/crosstool"
EOF
# Create toolchain directory
sudo mkdir -p /usr/crossgnu
sudo chown $USER /usr/crossgnu
# Build toolchain
pushd crosstool
eval `cat $HOME/arm-cross.dat` sh all.sh --gdb --notest
popd
Note: I had the crosstool-0.43.tar.gz tarball in the same directory I ran the script from.

If you can't manage to build a cross-compiler using crosstool*, you're unlikely to be able to do so without them. It is not straightforward!
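For reference, the basic crosstool-NG workflow looks roughly like the following sketch (the sample name is only illustrative, and as other answers note, the gcc-4.2/soft-float combination may still fail to build):
# Rough crosstool-NG workflow; the sample name is illustrative.
ct-ng list-samples              # list the target configurations shipped with ct-ng
ct-ng arm-unknown-linux-gnueabi # start from a sample close to your target
ct-ng menuconfig                # adjust gcc version, float strategy, etc.
ct-ng build                     # build the toolchain (this takes a while)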
However, you can most easily get recent cross-compilers onto Ubuntu by editing /etc/apt/sources.list to include
deb http://www.emdebian.org/debian/ lenny main
then saying
apt-get update
apt-get install g{cc,++}-4.3-arm-linux-gnueabi
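Once installed, the cross compiler is invoked by its prefixed name; a minimal smoke test might look like this (hypothetical hello.c; the Debian armel toolchain targets the soft-float EABI by default, which matches the soft-float requirement):
# Hypothetical smoke test of the emdebian cross compiler.
arm-linux-gnueabi-gcc-4.3 -o hello hello.c
file hello    # should report an ARM ELF executable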

I too used crosstool, and was able to build an arm-xscale-linux-gcc under Cygwin. The instructions are here: http://sourceforge.net/apps/mediawiki/imote2-linux/index.php?title=ToolsGccArm

Related

How to install packages or natively compile package on a minimal Linux kernel compiled with Buildroot

I have encountered a problem that I hope can be solved quickly.
Thanks to Buildroot I have compiled a Linux zImage kernel, built a rootfs, and have a bootloader, so everything works.
But at boot the system is so minimal that I don't have a package manager like apt-get, yum, etc.
Even though I have the network (wget works), I don't know how to get even a simple gcc onto my board (Buildroot no longer permits compiling gcc for the target),
or, more simply, a package manager.
Cross-compiling everything is so tedious that I think the better solution is to apt-get packages, resolve dependencies, and install them for an ARM architecture.
I recompiled with package-manager options, including ipkg and opkg, but the repositories don't work and the commands return nothing (e.g. ipkg --list).
Has anyone had the same problems, and what is the best way to get a good package manager onto a minimal system compiled and built with Buildroot?
What is the best way to get a native compile toolchain onto the ARM host?
Thanks for your answers.
My goal is to natively compile my code, which links against -lm and -lpthread and uses the LIRC module and header files, on this minimal system.
Stefan, France
---- additional information ----
Hello,
I am updating this question to add some information:
recent Buildroot no longer permits natively compiling the gcc package,
even though make and other tools are still available in the recent Buildroot distribution;
gcc and related packages are marked as deprecated,
so you are obligated to cross-compile on the host.
So that is what I did.
For convenience, I have a Makefile for my code with dependencies
on pthread and lirc_client.
If anybody is interested, ask me.
stef, France
The Buildroot reference documentation has plenty of details about these questions. Please see http://buildroot.org/downloads/manual/manual.html#faq-no-compiler-on-target and http://buildroot.org/downloads/manual/manual.html#faq-no-binary-packages.
I've added dpkg inside Buildroot in order to have dpkg on my board... it was really easy to do.
The only thing I had to hack was creating a fake ldconfig.
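For illustration, a fake ldconfig can be as simple as a no-op script placed at /sbin/ldconfig in the target rootfs (this is a sketch of the idea, not necessarily the exact hack used above):
#!/bin/sh
# Fake ldconfig: dpkg maintainer scripts call ldconfig after installing
# shared libraries; on this minimal target we just pretend it succeeded.
exit 0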

GNU configure options for binutils, gcc & glib

I am trying to build an alternative compilation suite on my debian-testing machine (sorry, real question is actually at bottom).
Technically it is a "cross-compilation" because I need to use this toolchain on another machine, but the hardware is compatible (x86_64-unknown-linux-gnu), so I don't need to bother about build/host/target differences.
On the other hand I do need to worry about prefix/sysroot because I cannot install in any standard location (to be more precise: I could install anywhere, since I have root access there, but I shouldn't). This leaves me with my $HOME, some completely non-standard place (e.g. /usr/local/my/toolchain), or some semi-standard place (e.g. /opt). In any case I will need to do something to let compilation find includes and libs in such places and to let the runtime linker find the needed .so files.
My requirements are:
I have a running Linux that shouldn't be messed with.
This system does not have a "C" compiler.
Said Linux is BusyBox-based, so I will need a substantial number of utilities to do any serious compiling there, including make, sed, awk, ..., besides the compiler proper.
I would be happy to stuff my augmented toolchain in /opt, but that is not a requirement; any place is OK as long as it's accessible by more than a single user. I would like to avoid installing in $HOME.
I am aware of "optware", I installed it and it does work... up to a point. Unfortunately:
It's really old software
it's only 32-bit (my system is Linux syno0 3.2.40 #5004 SMP Thu Nov 6 15:26:44 CST 2014 x86_64 GNU/Linux).
Some programs won't compile because the provided libs have a 32/64-bit mismatch.
The real motivation for this whole exercise is that I need to install some Perl modules needed for an application that will have to run there, and to install them from CPAN I need a native compiler (and other stuff, of course).
Similar arguments apply to a Ruby on Rails application I should port there.
If at all possible I should try to use the "native" libs in /lib:/lib64:/usr/lib:/usr/lib64:/usr/lib32 ("static" .a libs are not available).
I had limited success preparing a custom tarball from an available toolchain for my processor, relocating it to /opt, stuffing the needed apps into its sysroot, and compiling with: CPPFLAGS="-I/opt/include" and LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib".
This enables me to build almost everything "LFS-style", but it's rather error-prone and 64-bit-only.
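For concreteness, a typical package build under that scheme looks roughly like this (illustrative; the /opt paths are the ones mentioned above):
# LFS-style build of some package against the relocated toolchain in /opt.
CPPFLAGS="-I/opt/include" \
LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib" \
./configure --prefix=/opt
make && make install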
I seem to understand that it should be possible to automate all this with a careful mix of --prefix, --with-sysroot, --with-native-system-header-dir, --enable-multilib and their friends.
I tried to understand exactly how they should be used and failed, for one reason or another. I didn't find any exhaustive documentation, and the information in the GCC installation docs is confusing me.
Can someone, please, give me a recipe to build this toolchain?
Any pointer to in-depth documentation welcome, but I suspect some tutoring will be necessary.
I assume recompilation of binutils and GCC is mandatory; Glib is probably not needed; anything else can be recompiled "natively" on the target.
TiA
ZioByte
After installing your toolchain in non-standard places you need to set the environment (maybe system-wide) correctly for GCC, using LIBRARY_PATH and C_INCLUDE_PATH or CPLUS_INCLUDE_PATH.
Environment Variables Affecting GCC
I see three ways to automate setting the path variables for your relocatable toolchain:
On every relocation, add your GCC path to your PATH environment variable and create an alias in your BusyBox profile (usually /etc/profile).
alias example:
alias gcc='TOOLCHAIN_PREFIX=$(which gcc | rev | cut -d"/" -f3-10 |rev); \
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ gcc'
Create a launcher script for your toolchain that calculates the paths. You'll have to launch it by its direct path, set that path when you launch the build process, or of course add its location to the PATH environment variable.
script example:
#!/bin/sh
# Launcher script: derive the toolchain prefix from this script's own path,
# point GCC's search paths at it, then forward all arguments to the real gcc.
TOOLCHAIN_PREFIX=$(echo $0 | rev | cut -d"/" -f3-10 | rev)
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ \
$TOOLCHAIN_PREFIX/bin/gcc-4.* "$@"
The most reliable and ergonomic way: create an install/uninstall script that unpacks the toolchain and sets the environment correctly; to relocate the toolchain, you uninstall it from one prefix and install it into another. If you have dpkg on your debian-testing system, a .deb package is the best choice (a packaging sketch follows below).
I can see no way to set the environment fully automatically, but we can reduce it to setting just one path: the path of the toolchain.
HINT: for better stability you should isolate your toolchain and also install the Linux kernel headers and Glibc into your prefix.
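For the third option, a toolchain already unpacked under a prefix can be wrapped into a .deb roughly as follows (the package name, version, and paths are placeholders, not taken from the question):
# Sketch: wrap an unpacked toolchain (here /opt/toolchain) into a .deb.
mkdir -p pkg/opt pkg/DEBIAN
cp -a /opt/toolchain pkg/opt/
cat > pkg/DEBIAN/control << 'EOF'
Package: my-cross-toolchain
Version: 1.0
Architecture: amd64
Maintainer: you <you@example.com>
Description: relocatable GCC toolchain installed under /opt/toolchain
EOF
dpkg-deb --build pkg my-cross-toolchain_1.0_amd64.deb
sudo dpkg -i my-cross-toolchain_1.0_amd64.deb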

Cross build third-party library locations on Linux

I've been cross-compiling my unit tests to ensure they pass on all the platforms of interest, e.g. x86-linux, win32, win64, arm-linux.
The unit tests require the CUnit library,
so I've had to cross-compile that as well for each platform.
It comes with its own autoconf machinery, so you can easily cross-build it by specifying --host for configure.
The question I have is: where is the 'correct' place to have the CUnit libs installed for the various platforms? i.e. what should I set --prefix to for configure?
My initial guess was:
/usr/local/<platform>/lib/Cunit
i.e. setting --prefix /usr/local/<platform>
e.g. --prefix /usr/local/arm-linux-gnueabihf
which on sudo make install gives you:
/usr/local/arm-linux-gnueabihf/doc/CUnit
/usr/local/arm-linux-gnueabihf/include/CUnit
/usr/local/arm-linux-gnueabihf/lib
/usr/local/arm-linux-gnueabihf/share/CUnit
Obviously, if I don't specify a prefix for configure, each platform build overwrites the previous one, which is no good.
To then successfully link to these platform-specific libs I need to specify the relevant lib dir for each target in its own LDFLAGS in the Makefile.
Is this the right approach? Have I got the dir structure/location right for this sort of cross-build stuff? I assume there must be a de facto approach, but I'm not sure what it is.
Possibly configure is supposed to handle all this stuff for me? Maybe I just have to set --target correctly, and perhaps --enable-multilib? All with --prefix=/usr/local?
Some of the error msgs I get suggest /usr/lib/gcc-cross might be involved?
From reading more about cross-compilation and the GNU configure and build system, it seemed that I should just be setting the --target option for the configure step.
But how do you know what the target names are? Are they some fragment of the cross-compiler names?
The 3 cross compilers I am using are:
arm-linux-gnueabihf-gcc-4.8
i686-w64-mingw32-gcc
x86_64-w64-mingw32-gcc
allowing me to cross-compile for ARM, win32 and win64
My host is 32-bit Ubuntu, which I think might be --host=i386-linux, but it seems configure should get this right by default.
This is the procedure I finally figured out and got to work:
for each of my 3 cross-build tools (arm, win32, win64) my calls to configure looked like:
./configure --host=arm-linux-gnueabihf --build=i686-pc-linux-gnu --prefix=/usr/local/arm-linux-gnueabihf
./configure --host=i686-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/i686-w64-mingw32
./configure --host=x86_64-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/x86_64-w64-mingw32
Each of these was followed by make, then sudo make install.
Prior to calling configure for the ARM cross-build I had to do:
ln -s /usr/bin/arm-linux-gnueabihf-gcc-4.8 /usr/bin/arm-linux-gnueabihf-gcc
This was because the compiler had -4.8 tagged on the end, so configure could not correctly 'guess' the name of the compiler.
This issue did not apply to either the win32 or win64 mingw compilers.
Note an additional gotcha: when subsequently trying to link to these cross-compiled CUnit libs, none of the cross-compilers seemed to look in /usr/local/include by default, so I had to manually add:
-I/usr/local/include
for each object file built,
e.g. I added /usr/local/include to INCLUDE_DIRS in my Makefile.
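For illustration, the resulting ARM compile and link lines end up looking something like this (the win32 and win64 builds are analogous, each pointing at its own prefix; the file names are examples):
# Compile and link the ARM unit-test binary against the ARM-specific CUnit install.
arm-linux-gnueabihf-gcc -I/usr/local/include -c tests.c -o tests.o
arm-linux-gnueabihf-gcc -o unit_tests_arm tests.o \
    -L/usr/local/arm-linux-gnueabihf/lib -lcunit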
All this finally seems to have given me correctly cross-built CUnit libs, and I have successfully linked against them to produce cross-built unit test binaries for each of the target platforms.
Not at all easy, and I would venture to call the configure option settings 'counter-intuitive' - as ever, it is worth taking the time to read the relevant docs - this snippet was pertinent:
There are three system names that the build knows about: the machine
you are building on (build), the machine that you are building for
(host), and the machine that GCC will produce code for (target). When
you configure GCC, you specify these with --build=, --host=, and
--target=.
Specifying the host without specifying the build should be avoided, as
configure may (and once did) assume that the host you specify is also
the build, which may not be true.
If build, host, and target are all the same, this is called a native.
If build and host are the same but target is different, this is called
a cross. If build, host, and target are all different this is called a
canadian (for obscure reasons dealing with Canada's political party
and the background of the person working on the build at that time).
If host and target are the same, but build is different, you are using
a cross-compiler to build a native for a different system. Some people
call this a host-x-host, crossed native, or cross-built native.
and also:
When people configure a project with './configure', one often meets
these three confusing options, which are mostly related to
cross-compilation:
--host: the system on which the generated program will run.
--build: the system on which the program is being built.
--target: this option is only used when building a cross-compiling
toolchain; it specifies the system for which the toolchain will
generate executables.
An example with tslib (a touchscreen access library):
'./configure --host=arm-linux --build=i686-pc-linux-gnu': the
dynamic library is built on an x86 Linux computer but will be used
on an embedded ARM Linux system.

grep or find on Android

These are not installed on Android 4.2.1 by default, so is it possible to cross-compile the source for e.g. GNU grep or find and have it run on Android? (Preferably without having to root the device or install some app off Play, e.g. busybox.) Are there any missing dependencies that will prevent this? I am developing on Ubuntu 10.04.
Strange. I have them in /system/xbin/*. Maybe more luck with busybox: busybox find, busybox grep. Not sure if busybox is installed by default on Android 4.2 though, but it's a pretty common binary.
This is not a complete answer because I haven't tried building grep or find. However, in general it is quite possible to build GNU utilities for Android. To do this, the best option is:
Download the Android native development kit
Build an Android standalone toolchain by referring to docs/STANDALONE-TOOLCHAIN.html in the NDK
Simply build the relevant GNU utility using the normal ./configure && make mechanism.
You'll then need to copy the resulting binaries onto your Android device, which you can do using adb push. You may need to arrange to put them into /data/ somewhere because /mnt/sdcard is often marked non-executable.
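As a concrete illustration of these steps (the NDK script name and flags varied between NDK releases, and the device paths are only examples):
# Create a standalone toolchain from the NDK; check docs/STANDALONE-TOOLCHAIN.html
# in your NDK version for the exact script name and flags.
$NDK/build/tools/make-standalone-toolchain.sh \
    --platform=android-9 --install-dir=$HOME/android-toolchain
export PATH=$HOME/android-toolchain/bin:$PATH
# Cross-build a GNU utility the usual way.
./configure --host=arm-linux-androideabi
make
# Push the binary somewhere executable; /mnt/sdcard is usually mounted noexec.
adb push src/grep /data/local/tmp/grep
adb shell chmod 755 /data/local/tmp/grep
adb shell /data/local/tmp/grep --version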
Missing dependencies
The main problem you'll find during the actual builds is that Android does not use the standard GNU libc (glibc). Instead, it uses its own, called Bionic. This does miss certain important APIs - for example, wide character string support.
I've found for some GNU utilities this is OK and they can be compiled with minimal source code changes.
However, if you run into trouble, you're probably better off using other versions of these utilities which are typically designed for more flexibility in terms of the underlying libc. Specifically, the previous advice about using busybox is excellent. If you don't wish to install it from the Android market, you can find the source code here.

Trouble cross compiling OpenCV for ARM9 Montavista Linux

I'm trying to cross-compile the OpenCV library to use it on an embedded system running MontaVista Linux (the system has an ARM926 processor). I've managed to configure and generate the makefiles; the sources build OK, including the 3rd-party libraries. The trouble comes at link time: for some reason libtool picks up some libraries from the host system (libjpeg, libtiff, libpng) and tries to link them against the ARM9 object files (which is evidently wrong). The error I get is
/usr/lib/libpng12.so: could not read symbols: File in wrong format.
I couldn't and still can't figure out what exactly is wrong with my setup (I even tried to build the library directly on the ARM9 system, but unfortunately it has a very small amount of RAM and gcc chokes). I also modified the LD_LIBRARY_PATH env var to contain the target system's libraries and exported it before running configure and make.
Below is what I pass to configure:
LDFLAGS="-L/opt/Montavista/pro/devkit/arm/v5t_le/target/usr/lib" \
CFLAGS="-I/opt/Montavista/pro/devkit/arm/v5t_le/target/usr/include -fsigned-char -march=armv5te -mtune=arm926ej-s -ffast-math -fomit-frame-pointer -funroll-loops" \
CC=/opt/Montavista/pro/devkit/arm/v5t_le/bin/arm_v5t_le-gcc \
CXXFLAGS="-fsigned-char -march=armv5te -mtune=arm926ej-s -ffast-math -fomit-frame-pointer -funroll-loops" \
CXX=/opt/Montavista/pro/devkit/arm/v5t_le/bin/arm_v5t_le-g++ \
./configure --host=armv5tl-montavista-linux-gnueabi \
    --without-gtk --without-v4l --without-carbon --without-quicktime \
    --without-1394libs --without-ffmpeg --without-imageio --without-python \
    --without-swig --enable-static --enable-shared --disable-apps \
    --prefix=/home/dev/Development/lib
I found this question on SO but unfortunately it does not provide a solution for me.
I'm using gcc version 4.2.0 (MontaVista 4.2.0-16.0.32.0801914 2008-08-30) on MontaVista Linux for ARM (Leopard board powered by a TI DM365), OpenCV 2.0.0. My host system is Ubuntu 10.04.
Any pointers on how to tackle this issue would be of very much help.
Thanks
[UPDATE][SOLVED]: The autotools-based method of generating the makefiles for OpenCV 2.0.0 seems to be broken for cross-compiling (or for some odd reason it did not work for me). I used the CMake GUI, specified a proper toolchain.cmake file, and everything went smoothly. See the answer below.
Procedure for cross-compiling OpenCV 2.0 for ARM using CMake GUI
Requirements
OpenCV 2.0 source tarball
CodeSourcery ARM cross-compiler v2009q1 or v2010.09(both tested)
Ubuntu 10.10/11.04 host machine
CMake >= v2.6 with CMake GUI
Steps
Unpack the OpenCV tarball somewhere on your host machine; cd to that location and create a build directory
Open the CMake GUI. Select:
Where is the source code: the path to the folder where you unpacked the OpenCV tarball
Where to build the binaries: the path to the build folder you created in the first step
Add a new entry named COMPILER_ROOT as a path entry and set its value to the path of your cross compiler e.g. /opt/CodeSourcery/Sourcery_G++_Lite/bin
Set CMAKE_TOOLCHAIN_FILE to the path of your toolchain file on your host machine; example toolchain.cmake:
# this one is important
SET(CMAKE_SYSTEM_NAME Linux)
#this one not so much
SET(CMAKE_SYSTEM_VERSION 1)
# specify the cross compiler
set(COMPILER_ROOT /opt/CodeSourcery/Sourcery_G++_Lite/bin)
set(CMAKE_C_COMPILER ${COMPILER_ROOT}/arm-none-linux-gnueabi-gcc)
set(CMAKE_CXX_COMPILER ${COMPILER_ROOT}/arm-none-linux-gnueabi-g++)
# specify how to set the CMake compilation flags
# CPP
SET(CMAKE_CXX_FLAGS $ENV{CXX_FLAGS} CACHE FORCE "")
SET(CMAKE_CXX_FLAGS_DEBUG $ENV{CXX_FLAGS_DEBUG} CACHE FORCE "")
SET(CMAKE_CXX_FLAGS_RELEASE $ENV{CXX_FLAGS_RELEASE} CACHE FORCE "")
SET(CMAKE_CXX_FLAGS_RELWITHDEBINFO $ENV{CXX_FLAGS_RELWITHDEBINFO} CACHE FORCE "")
SET(CMAKE_CXX_LINK_FLAGS $ENV{CMAKE_EXE_LINKER_FLAGS} CACHE FORCE "")
SET(CMAKE_C_LINK_FLAGS $ENV{CMAKE_EXE_LINKER_FLAGS} CACHE FORCE "")
SET(CMAKE_CXX_LINK_FLAGS_RELEASE $ENV{CMAKE_EXE_LINKER_FLAGS} CACHE FORCE "")
SET(CMAKE_CXX_LINK_FLAGS_DEBUG $ENV{CMAKE_EXE_LINKER_FLAGS} CACHE FORCE "")
# C
#SET(CMAKE_C_FLAGS $ENV{C_FLAGS} CACHE FORCE "")
SET(CMAKE_C_FLAGS_DEBUG $ENV{C_FLAGS_DEBUG} CACHE FORCE "")
SET(CMAKE_C_FLAGS_RELEASE $ENV{C_FLAGS_RELEASE} CACHE FORCE "")
SET(CMAKE_C_FLAGS_RELWITHDEBINFO $ENV{C_FLAGS_RELWITHDEBINFO} CACHE FORCE "")
# where is the target environment
SET(CMAKE_FIND_ROOT_PATH ${COMPILER_ROOT})
# search for programs in the build host directories
SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# for libraries and headers in the target directories
SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
Tweak other settings to your needs, e.g. EXECUTABLE_OUTPUT_PATH, LIBRARY_OUTPUT_PATH, CMAKE_BUILD_TYPE, CMAKE_C_FLAGS_DEBUG, CMAKE_C_FLAGS_RELEASE, the third-party libraries you want to build with, etc.
Press Configure, then Generate; check for any errors (everything should run smoothly, but you never know)
If everything went OK in the generation phase, cd to the build folder, type make, then sit back and relax until the build process is done. (A command-line equivalent that skips the GUI is sketched below.)
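For reference, the same toolchain file can also be used without the CMake GUI by passing it on the cmake command line (paths are illustrative):
# Non-GUI equivalent of the steps above.
cd /path/to/opencv-2.0.0/build
cmake -DCMAKE_TOOLCHAIN_FILE=/path/to/toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
make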
It seems you are using an old version of OpenCV, since it still uses the ./configure mechanism. This is good in a sense, because CMake is not known to be cross-compilation friendly.
LDFLAGS="-L/opt/Montavista/pro/devkit/arm/v5t_le/target/usr/lib"
This is where the linker will look for libraries. It should be enough. Are you sure the libraries needed by OpenCV are in this path?
A first hack would be to rename the libraries in /usr/lib so that the linker doesn't find them, and see if it then finds the target libraries. This is ugly, maybe more than ugly. Don't do it. Yet.
A second solution is to do native compilation, but in an emulated ARM box, not on the real, slow, memory-poor hardware. I have no experience with this kind of method either, but here is a link to get you started.
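As an aside, one common way to get such an emulated environment is user-mode QEMU plus a chroot into an ARM rootfs; a rough sketch (the rootfs path is illustrative, and the qemu-user-static package must be installed on the host):
# Run an ARM rootfs on the x86 host via user-mode QEMU and binfmt_misc.
sudo cp /usr/bin/qemu-arm-static /path/to/arm-rootfs/usr/bin/
sudo chroot /path/to/arm-rootfs /bin/sh
# ...then build natively inside the chroot as if on the ARM board.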
EDIT
Wait! Which version of OpenCV are you using? I thought OpenCV was not using ./configure et al.? There is probably a more elegant solution using ./configure flags. Or maybe the non-optional libraries are somehow hardcoded.
Interestingly, I'm currently trying to get version 2.1.0 to build for ARM. It relies on CMake, which is a real pain to get ready for cross-compiling. There's no way to specify which toolchain to use; I have to spot the variable names for all the binutils, hoping not to forget any. There are still a bunch of magically defined variables that prevent it from building, and I'm giving up right now. I'm still seeing some -march=i686 magically appended and some libs referenced from my build system. What a mess!
Maybe when I have time I'll try to downgrade to an older version that makes use of more standard tools, but CMake clearly complicates the situation here.
