Error compiling kernel-aodv for ARM

I'm about to implement AODV on an ARM board (SabreLite) and I'm facing some problems.
I use the latest version of AODV-UU, located here (sourceforge.net/projects/aodvuu/). I've followed the instructions given in the README file, but at the end I get this error:
kaodv-mod.c:22:27: fatal error: linux/version.h: No such file or directory
#include <linux/version.h>
Since the board uses kernel version 3.0.35, I downloaded it and just changed the kernel directory in the Makefile. Based on the instructions (http://w3.antd.nist.gov/wctg/aodv_kernel/kaodv_arm.html), that should normally have worked. The error above suggests that I don't have version.h, but I checked and all of the Linux header files are installed, so it can't be that.
At step 6 of the tutorial (README file), I did not compile the 3.0.35 kernel, because I'm pretty positive it already has the proper netfilter support for AODV-UU, being a recent kernel version. (That step is actually a configuration suggestion for kernels 2.4 and 2.6, so I don't think I should be obliged to do it here.)
What can the solution to this be?
Do I really need to compile this kernel version (3.0.35) before going further?
Do I have to change the AODV code, and if so, which files do I have to modify?
Thanks in advance !!!
Thanks for your response, but unfortunately I've already done that. By that I mean I've chosen the kernel source tree that matches the target kernel (linux-imx6-boundary-imx_3.0.35_4.1.0). I've also set up my cross compiler so that my environment variables are ready for cross-compilation. Here is the output.
echo $CC:
arm-oe-linux-gnueabi-gcc -march=armv7-a -mthumb-interwork -mfloat-abi=hard -mfpu=neon -mtune=cortex-a9 --sysroot=/usr/local/oecore-x86_64/sysroots/cortexa9hf-vfp-neon-oe-linux-gnueabi
and some of my env variables look like this:
ARCH=arm
CROSS_COMPILE=arm-oe-linux-gnueabi-
CFLAGS= -O2 -pipe -g -feliminate-unused-debug-types
RANLIB=arm-oe-linux-gnueabi-ranlib
After all of these configurations, I still get the error. I really don't think that I have to recompile the kernel.

In order to build modules, you need a kernel source tree in a state that matches the target kernel, i.e. not an untouched, freshly-downloaded one. Don't mistake the presence of extra board-specific patches/drivers/etc. in a vendor kernel for configuration - to get the source tree into the right state to use, you still need to:
configure it correctly: make ARCH=arm <whatever>_defconfig (and/or any .config tweaks your board needs)
then build it: make ARCH=arm CROSS_COMPILE=<your toolchain triplet>
You need to actually build the kernel because many important files don't exist yet: the contents of include/generated (where the aforementioned version.h is created), the corresponding arch/$ARCH/include/generated, the checksums for module versioning, and probably more, all of which differ depending on the architecture and the particular configuration options chosen.
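Concretely, the preparation might look like this (the defconfig name here is an assumption; substitute whatever your board actually uses):

# Hypothetical example: prepare the vendor 3.0.35 tree for out-of-tree module builds
cd linux-imx6-boundary-imx_3.0.35_4.1.0
make ARCH=arm imx6_defconfig        # your board's defconfig may differ
make ARCH=arm CROSS_COMPILE=arm-oe-linux-gnueabi-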
My bad for missing the crucial detail in the question, but upon downloading the linked AODV to try this myself, it became clear: the makefile is designed for the 2.4 build system, which was rather different (and which I'm not familiar with). Getting that one to build against a post-2.6 kernel will require writing a new makefile.
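For reference, a minimal sketch of the kind of kbuild makefile that 2.6+ kernels expect (the object list here is hypothetical; it must name the module's actual source objects):

# Out-of-tree kbuild makefile sketch for the kaodv module
# (recipe lines must be indented with a tab)
obj-m := kaodv.o
kaodv-objs := kaodv-mod.o           # hypothetical object list

KDIR ?= /path/to/linux-imx6-boundary-imx_3.0.35_4.1.0

all:
	$(MAKE) -C $(KDIR) M=$(PWD) ARCH=arm CROSS_COMPILE=arm-oe-linux-gnueabi- modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean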

Related

gtk.h missing in Visual Studio for Linux Development

I'm currently trying to write an app for a Raspberry Pi 3B under Raspbian with the aid of the Linux Development plugin in Visual Studio 2017 Community. I managed to successfully deploy the 'Blink' example kindly provided by the Microsoft folks, according to the tutorial, and that went well. I even made some transmissions over SPI thanks to the wiringPi library. Then I wanted to add a GUI to my app, so that one could, for example, trigger a transmission by clicking a button on screen.
IntelliSense hinted that there is, in fact, a gtk-3.0 library present in the toolset. It seems that libraries are copied from the target device on every connection or so, and I had installed GTK on my Raspberry. So I added a simple line to this Blink example:
#include <gtk-3.0/gtk/gtk.h>
On the compilation attempt there were, of course, nearly 4k errors. Well, enough said: with a little hint from this old tutorial and a bit of trial and error, I managed to add this set of paths under Debugging/Project properties/Configuration properties/VC++ directories/Header files directories:
Everything was heading in a promising direction, as the number of errors diminished from 4k to just one:
gtk-3.0\gtk\gtk.h: No such file or directory
No matter that this file ACTUALLY exists in this location:
Regardless of the combination of paths in the configuration above and of the #include composition, the compiler (?) can't find this damn file.
Please Halp
EDIT
I just confirmed that it is indeed a problem with the target configuration. This is bad or good, depending on your point of view. Good, because the VS setup is probably fine. Bad, because I don't know a thing about compiling things under Linux.
On the target (Raspberry Pi 3B) all the ingredients for compilation are copied over by the Linux Development plugin. So in a terminal I executed:
g++ main.cpp -o Blink2onRPi
and got
main.cpp:4:21: fatal error: gtk/gtk.h: No such file or directory
Now I altered the include line in main.cpp on the target RPi to this:
#include <gtk-3.0/gtk/gtk.h>
And now it's missing <gdk/gdk.h>! When this change is made on the host Windows device: same result, but in VS.
As I dealt with a similar problem in VS when setting up the paths for IntelliSense (now apparently that is what they're for), similar dependencies probably have to be set somewhere on Raspbian. But where?
EDIT2
Upon execution of:
g++ main.cpp -o Blink2onRPi `pkg-config --cflags --libs gtk+-3.0`
on the target RPi there are no more GTK-related errors, just undefined references to wiringPi (which is also present in the project). This raises two questions:
1) How can I set up wiringPi on the RPi so that the project can be compiled manually on the target (see the sketch below), and
2) How/where do I add the above line to Visual Studio, so that it executes remotely with all the GTK dependencies added properly on the target?
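For question 1, I suspect something like the following might work — a sketch only, assuming wiringPi is installed on the Pi and links under its usual name, -lwiringPi:

# Hypothetical manual build on the Pi: GTK flags via pkg-config, wiringPi linked explicitly
g++ main.cpp -o Blink2onRPi `pkg-config --cflags --libs gtk+-3.0` -lwiringPi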
Researching the stock wiringPi library (as this is the Blink LED example for cross-compile Linux Development), I've found that in Project Properties/Linker/Input/Library Dependencies there is a mysterious entry:
wiringPi
Just that, nothing more. After removing this entry, the same errors as before pop up on compilation on the target (which apparently lacks a proper wiringPi setup): undefined references (no mention of any missing headers). Can this be relevant to the case? If so, how could I add an entry there that would deal with the missing GTK dependencies?
TL;DR
Use the screenshot below to see where to add the pkg-config calls in the VS configuration so that it forwards them to the compiler and linker on the target.
Thanks to @zaguoba for providing these.
ORIGINAL ANSWER:
The list of directories to include is provided by pkg-config. For example, pkg-config --cflags-only-I gtk+-3.0 will give you the list of include directories required. Those are the ones you need to add to the directories where VC++ will look for include files. If you add the relative path you use in the #include to one of those paths, the compiler is able to find the file.
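Abbreviated output on a typical Raspbian image might look something like this (the exact list varies from system to system):

$ pkg-config --cflags-only-I gtk+-3.0
-I/usr/include/gtk-3.0 -I/usr/include/glib-2.0 -I/usr/include/pango-1.0 -I/usr/include/cairo ...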
Example:
If you add to the directories C:\Program Files\foo\bar\gtk+-3.0
and have in your C file:
#include <gtk/gtk.h>
then the compiler will look for C:\Program Files\foo\bar\gtk+-3.0\gtk\gtk.h.
EDIT:
This all means the 'file not found' errors are because you're really building on the target and the target has no idea what C:\Program Files\... means. Those should be paths on the target filesystem, where the compiler is called. And this is exactly what pkg-config provides.
The copy of those files on the Windows machine's filesystem is merely for IntelliSense use, not for compiler use.
EDIT 2:
So the Visual Studio 2017 Community Linux Development plugin is what needs to be understood: it's not for cross-compilation from Windows to Linux; instead it merely synchronizes code to the Windows host (for IntelliSense use) but builds on the target. This means that all the paths and commands are Linux paths and commands, run on the target.
Here's the OP working configuration:
With that setup, you should
#include <gtk/gtk.h>
instead of
#include <gtk-3.0/gtk/gtk.h>
Alternatively, remove all those VC++ directories/Header files directories entries and keep just one that ends with include/, instead of listing all the subdirectories.

GNU configure options for binutils, gcc & glib

I am trying to build an alternative compilation suite on my debian-testing machine (sorry, the real question is actually at the bottom).
Technically it is a "cross-compilation" because I need to use this toolchain on another machine, but the hardware is compatible (x86_64-unknown-linux-gnu), so I don't need to bother about build/host/target differences.
On the other hand I do need to worry about prefix/sysroot because I cannot install in any standard location (to be more precise: I could install anywhere, since I have root access there, but I shouldn't). This leaves me with my $HOME, some completely non-standard place (e.g. /usr/local/my/toolchain) or some semi-standard place (e.g. /opt). In any case I will need to do something to enable compilation to find includes and libs in such places, and the runtime linker to find the needed .so files.
My requirements are:
I have a running Linux that shouldn't be messed with.
This system does not have a C compiler.
Said Linux is BusyBox-based, so I will need a substantial amount of utilities to do any serious compiling there, including make, sed, awk, ..., besides the compiler proper.
I would be happy to stuff my augmented toolchain into /opt, but that is not a requirement; any place is OK as long as it's accessible by more than a single user. I would like to avoid installing in $HOME.
I am aware of "optware"; I installed it and it does work... up to a point. Unfortunately:
It's really old software
it's only 32bit (my system is Linux syno0 3.2.40 #5004 SMP Thu Nov 6 15:26:44 CST 2014 x86_64 GNU/Linux).
Some programs won't compile because provided libs have 32/64 mismatch.
The real motivation for this whole exercise is that I need to install some Perl modules needed for an application that will have to run there, and to install them from CPAN I need a native compiler (and other stuff, of course).
Similar arguments apply to a Ruby-on-Rails application I should port there.
If at all possible I should try to use the "native" libs in /lib:/lib64:/usr/lib:/usr/lib64:/usr/lib32 ("static" .a libs are not available).
I had limited success preparing a custom tarball from an available toolchain for my processor, relocating it to /opt, stuffing the needed apps into its sysroot and compiling with CPPFLAGS="-I/opt/include" and LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib".
This enables me to build almost everything "LFS-style", but it's rather error-prone and 64-bit-only.
I seem to understand it should be possible to automate all this with a careful mix of --prefix, --with-sysroot, --with-native-system-header-dir, --enable-multilib and their friends.
I tried to understand exactly how they should be used and failed, for one reason or another. I didn't find any exhaustive documentation, and the information in the GCC installation docs confuses me.
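For concreteness, the kind of invocation I have in mind looks roughly like this (a sketch only: version numbers and paths are placeholders, and whether this flag combination is right is precisely my question):

# Hypothetical: binutils and GCC installed under /opt/toolchain,
# reusing the target's own headers and libs
../binutils-2.x/configure --prefix=/opt/toolchain --target=x86_64-unknown-linux-gnu
make && make install
../gcc-4.x/configure --prefix=/opt/toolchain --target=x86_64-unknown-linux-gnu \
    --with-sysroot=/ --with-native-system-header-dir=/usr/include \
    --disable-multilib --enable-languages=c,c++
make && make install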
Can someone, please, give me a recipe to build this toolchain?
Any pointer to in-depth documentation is welcome, but I suspect some tutoring will be necessary.
I assume recompilation of binutils and GCC is mandatory; Glib is probably not needed. Anything else can be recompiled "native" on the target.
TiA
ZioByte
After installing your toolchain in a nonstandard place, you need to set the environment (maybe system-wide) correctly for GCC, using LIBRARY_PATH and C_INCLUDE_PATH or CPLUS_INCLUDE_PATH.
Environment Variables Affecting GCC
I see three ways to automate setting the path variables for your relocatable toolchain:
on every relocation, add your GCC path to your PATH environment variable and create an alias in your BusyBox profile (usually /etc/profile);
alias example:
# Derive the toolchain prefix from the location of gcc on $PATH (strip "/bin/gcc"),
# then point GCC's library and include search paths at the relocated tree
alias gcc='TOOLCHAIN_PREFIX=$(which gcc | rev | cut -d"/" -f3-10 | rev); \
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ gcc'
create a launcher script for your toolchain that calculates the paths itself; you'll have to invoke it via its direct path, set that path when you launch the build process, or of course add its location to the PATH environment variable;
script example:
#!/bin/sh
# Compute the toolchain prefix from this script's own path (two levels up),
# then invoke the real gcc with the relocated lib/ and include/ paths,
# forwarding all arguments
TOOLCHAIN_PREFIX=$(echo "$0" | rev | cut -d"/" -f3-10 | rev)
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ \
$TOOLCHAIN_PREFIX/bin/gcc-4.* "$@"
The most reliable and ergonomic way: create install/uninstall scripts that unpack the toolchain and set the environment correctly; to relocate the toolchain you uninstall it from one prefix and install it to another. If you have dpkg on your debian-testing system, a .deb package is the best choice.
I can see no way to set the environment fully automatically, but we can reduce it to setting just one path: the path of the toolchain.
HINT: for better stability you should isolate your toolchain and also install the Linux kernel headers and Glib in your prefix.

Cross build third-party library locations on Linux

I've been cross-compiling my unit tests to ensure they pass on all the platforms of interest, e.g. x86-linux, win32, win64, arm-linux.
The unit tests require the CUnit library,
so I've had to cross-compile that as well for each platform.
It comes with its own autoconf stuff, so you can easily cross-build it by specifying --host for configure.
The question I have is where is the 'correct' place to have the CUnit libs installed for the various platforms? i.e. what should I set --prefix to for configure?
My initial guess was:
/usr/local/<platform>/lib/Cunit
i.e. setting --prefix /usr/local/<platform>
e.g. --prefix /usr/local/arm-linux-gnueabihf
which on sudo make install gives you:
/usr/local/arm-linux-gnueabihf/doc/CUnit
/usr/local/arm-linux-gnueabihf/include/CUnit
/usr/local/arm-linux-gnueabihf/lib
/usr/local/arm-linux-gnueabihf/share/CUnit
Obviously, if I don't specify a prefix for configure, each platform build overwrites the previous one, which is no good.
To then successfully link against these platform-specific libs, I need to specify the relevant lib dir for each target in its own LDFLAGS in the Makefile.
Is this the right approach? Have I got the directory structure/location right for this sort of cross-build stuff? I assume there must be a de facto approach, but I'm not sure what it is.
Possibly configure is supposed to handle all this for me? Maybe I just have to set --target correctly, and perhaps --enable-multilib? All with --prefix=/usr/local?
Some of the error messages I get suggest /usr/lib/gcc-cross might be involved?
From reading more about cross-compilation and the GNU configure and build system, it seems that I should just be setting the --target option for the configure step.
But how do you know what the target names are? Are they some fragment of the cross-compiler names?
The 3 cross compilers I am using are:
arm-linux-gnueabihf-gcc-4.8
i686-w64-mingw32-gcc
x86_64-w64-mingw32-gcc
allowing me to cross-compile for ARM, win32 and win64
My host is 32-bit Ubuntu, which I think might be --host i386-linux, but it seems that configure should get this right as its default.
This is the procedure I finally figured out and got to work.
For each of my three cross-build tools (ARM, win32, win64), my calls to configure looked like:
./configure --host=arm-linux-gnueabihf --build=i686-pc-linux-gnu --prefix=/usr/local/arm-linux-gnueabihf
./configure --host=i686-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/i686-w64-mingw32
./configure --host=x86_64-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/x86_64-w64-mingw32
Each of these was followed by make and sudo make install.
Prior to calling configure for the ARM cross-build I had to do:
ln -s /usr/bin/arm-linux-gnueabihf-gcc-4.8 /usr/bin/arm-linux-gnueabihf-gcc
This was because the compiler had -4.8 tagged on the end, so configure could not correctly 'guess' the name of the compiler.
This issue did not apply to either the win32 or win64 MinGW compilers.
Note an additional gotcha: when subsequently trying to link to these cross-compiled CUnit libs, none of the cross compilers seemed to look in /usr/local/include by default, so I had to manually add:
-I/usr/local/include
for each object file built,
e.g. I added /usr/local/include to INCLUDE_DIRS in my Makefile.
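Put together, the per-platform fragment of a Makefile might look roughly like this (variable names are illustrative, not the project's actual ones; recipe lines must be indented with a tab):

# Hypothetical per-platform Makefile fragment; PLATFORM matches the --prefix used above
PLATFORM ?= arm-linux-gnueabihf
CC        = $(PLATFORM)-gcc
CFLAGS   += -I/usr/local/include
LDFLAGS  += -L/usr/local/$(PLATFORM)/lib
LDLIBS   += -lcunit

unit_tests: tests.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)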
All this finally seems to have given me correctly cross-built CUnit libs, and I have successfully linked against them to produce cross-built unit-test binaries for each of the target platforms.
Not at all easy, and I would venture to call the configure option settings 'counter-intuitive'. As ever, it is worth taking the time to read the relevant docs; this snippet was pertinent:
There are three system names that the build knows about: the machine
you are building on (build), the machine that you are building for
(host), and the machine that GCC will produce code for (target). When
you configure GCC, you specify these with --build=, --host=, and
--target=.
Specifying the host without specifying the build should be avoided, as
configure may (and once did) assume that the host you specify is also
the build, which may not be true.
If build, host, and target are all the same, this is called a native.
If build and host are the same but target is different, this is called
a cross. If build, host, and target are all different this is called a
canadian (for obscure reasons dealing with Canada's political party
and the background of the person working on the build at that time).
If host and target are the same, but build is different, you are using
a cross-compiler to build a native for a different system. Some people
call this a host-x-host, crossed native, or cross-built native.
and also:
When people configure a project with './configure', one often meets these three confusing options, which are mostly related to cross-compilation:
--host: the system on which the generated program will run.
--build: the system on which the program will be built.
--target: this option is only used when building a cross-compiling toolchain; it specifies the system on which the programs generated by that toolchain will run.
An example with tslib (a touchscreen access library):
'./configure --host=arm-linux --build=i686-pc-linux-gnu': the dynamic library is built on an x86 Linux computer but will be used on an embedded ARM Linux system.

How would one compile a program for the Coldfire toolchain?

I'm trying to compile a simple hello-world application to be run on uClinux (2.4), which is running on a board with a Freescale ColdFire (MCF5280C) processor... and I'm not quite sure what to do here.
I know I need to compile with the correct version/tools from Freescale to target this hardware, so I downloaded and installed the ColdFire toolchain and verified that the one I have is for my target:
mike#linux-4puc:/usr/local/m68k-elf/bin> ./gcc -v
Reading specs from /usr/local/lib/gcc-lib/m68k-elf/2.95.3/specs
gcc version 2.95.3 20010315 (release)(ColdFire patches - 20010318 from http://fiddes.net/coldfire/)(uClinux XIP and shared lib patches from http://www.snapgear.com/)
I tried a simple gcc "compile this file" type command:
mike#linux-4puc:/home/mike> /usr/local/m68k-elf/bin/gcc test.c
/usr/local/m68k-elf/bin/ld.real: cannot open crt0.o: No such file or directory
collect2: ld returned 1 exit status
Which does not work at all... so it's clearly more complex than that. The output almost looks like it wants me to build the toolchain before I use it?? Has anyone done this before? I'm not sure what I need to do or whether I just need some flags.
You might also try seeing if you have a command called m68k-elf-gcc or something along those lines. This is a common naming for cross-compilers.
As for your problem, it sounds like there is something wrong with your compiler setup. crt0.o is the object file that contains the C runtime setup code. The linker (which is what is actually giving the error) should know where this file is if everything is set up properly.
When you installed, you should have run make install as the last step, without having modified anything since the make step. The configure step sets up certain variables and paths based on where the toolchain is supposed to be installed.
Where did you get a Freescale toolchain? I took a look at their site and it seemed only third parties supplied C++ cross-compilers. In the toolchain I get from NetBurner (for use with their hardware) the crt0.o file exists under the gcc-m68k\m68k-elf\lib directory.
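If crt0.o does exist somewhere under your install tree, one thing worth trying (a guess, not a verified fix) is pointing gcc at that directory with -B, which adds it to the search path for startup files like crt0.o:

# Hypothetical path; point -B at wherever your toolchain's crt0.o actually lives
/usr/local/m68k-elf/bin/gcc -B/usr/local/m68k-elf/lib test.c -o test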

Trouble cross compiling OpenCV for ARM9 Montavista Linux

I'm trying to cross-compile the OpenCV library to use it on an embedded system running MontaVista Linux (the system has an ARM926 processor). I've managed to configure and generate the makefiles; the sources build OK, including the 3rd-party libraries. The trouble comes at link time. For some reason libtool picks up some libraries from the host system (libjpeg, libtiff, libpng) and tries to link them against the ARM9 object files (which is evidently wrong). The error I get is:
/usr/lib/libpng12.so: could not read symbols: File in wrong format.
I couldn't, and still can't, figure out what exactly is wrong with my setup (I even tried to build the library directly on the ARM9 system, but unfortunately it has a very small amount of RAM and gcc chokes). I also modified the LD_LIBRARY_PATH envvar to contain the target system's libraries and exported it before running configure and make.
Below is what I pass to configure:
LDFLAGS="-L/opt/Montavista/pro/devkit/arm/v5t_le/target/usr/lib" CFLAGS="-I/opt
/Montavista/pro/devkit/arm/v5t_le/target/usr/include -fsigned-char -march=armv5te
-mtune=arm926ej-s -ffast-math -fomit-frame-pointer -funroll-loops" CC=/opt/Montavista
/pro/devkit/arm/v5t_le/bin/arm_v5t_le-gcc CXXFLAGS="-fsigned-char -march=armv5te
-mtune=arm926ej-s -ffast-math -fomit-frame-pointer -funroll-loops" CXX=/opt/Montavista
/pro/devkit/arm/v5t_le/bin/arm_v5t_le-g++ ./configure --host=armv5tl-montavista-linux-
gnueabi --without-gtk --without-v4l --without-carbon --without-quicktime --without-
1394libs --without-ffmpeg --without-imageio --without-python --without-swig --enable-
static --enable-shared --disable-apps --prefix=/home/dev/Development/lib
I found this question on SO but unfortunately it does not provide a solution for me.
I'm using gcc version 4.2.0 (MontaVista 4.2.0-16.0.32.0801914 2008-08-30) on MontaVista Linux for ARM (a Leopard board powered by a TI DM365), with OpenCV 2.0.0. My host system is Ubuntu 10.4.
Any pointers on how to tackle this issue would be very much appreciated.
Thanks
[UPDATE][SOLVED]: The autotools-based method of generating the makefiles for OpenCV 2.0.0 seems to be broken for cross-compiling (or for some odd reason it did not work for me). I used the CMake GUI and specified a proper toolchain.cmake file, and everything went smoothly. See the answer below.
Procedure for cross-compiling OpenCV 2.0 for ARM using CMake GUI
Requirements
OpenCV 2.0 source tarball
CodeSourcery ARM cross-compiler v2009q1 or v2010.09(both tested)
Ubuntu 10.10/11.04 host machine
CMake >= v2.6 with CMake GUI
Steps
Unpack the OpenCV tarball somewhere on your host machine; cd to that location and create a build directory.
Open the CMake GUI and select:
Where is the source code: the path to the folder where you unpacked the OpenCV tarball
Where to build the binaries: the path to the build folder you created in the first step
Add a new entry named COMPILER_ROOT as a path entry and set its value to the path of your cross-compiler, e.g. /opt/CodeSourcery/Sourcery_G++_Lite/bin
Set CMAKE_TOOLCHAIN_FILE to the path of your toolchain file on the host machine; an example toolchain.cmake:
# this one is important
SET(CMAKE_SYSTEM_NAME Linux)
#this one not so much
SET(CMAKE_SYSTEM_VERSION 1)
# specify the cross compiler
set(COMPILER_ROOT /opt/CodeSourcery/Sourcery_G++_Lite/bin)
set(CMAKE_C_COMPILER ${COMPILER_ROOT}/arm-none-linux-gnueabi-gcc)
set(CMAKE_CXX_COMPILER ${COMPILER_ROOT}/arm-none-linux-gnueabi-g++)
# specify how to set the CMake compilation flags
# CXX (note: the cache syntax is CACHE <type> <docstring> FORCE)
SET(CMAKE_CXX_FLAGS "$ENV{CXX_FLAGS}" CACHE STRING "" FORCE)
SET(CMAKE_CXX_FLAGS_DEBUG "$ENV{CXX_FLAGS_DEBUG}" CACHE STRING "" FORCE)
SET(CMAKE_CXX_FLAGS_RELEASE "$ENV{CXX_FLAGS_RELEASE}" CACHE STRING "" FORCE)
SET(CMAKE_CXX_FLAGS_RELWITHDEBINFO "$ENV{CXX_FLAGS_RELWITHDEBINFO}" CACHE STRING "" FORCE)
SET(CMAKE_CXX_LINK_FLAGS "$ENV{CMAKE_EXE_LINKER_FLAGS}" CACHE STRING "" FORCE)
SET(CMAKE_C_LINK_FLAGS "$ENV{CMAKE_EXE_LINKER_FLAGS}" CACHE STRING "" FORCE)
SET(CMAKE_CXX_LINK_FLAGS_RELEASE "$ENV{CMAKE_EXE_LINKER_FLAGS}" CACHE STRING "" FORCE)
SET(CMAKE_CXX_LINK_FLAGS_DEBUG "$ENV{CMAKE_EXE_LINKER_FLAGS}" CACHE STRING "" FORCE)
# C
#SET(CMAKE_C_FLAGS "$ENV{C_FLAGS}" CACHE STRING "" FORCE)
SET(CMAKE_C_FLAGS_DEBUG "$ENV{C_FLAGS_DEBUG}" CACHE STRING "" FORCE)
SET(CMAKE_C_FLAGS_RELEASE "$ENV{C_FLAGS_RELEASE}" CACHE STRING "" FORCE)
SET(CMAKE_C_FLAGS_RELWITHDEBINFO "$ENV{C_FLAGS_RELWITHDEBINFO}" CACHE STRING "" FORCE)
# where is the target environment
SET(CMAKE_FIND_ROOT_PATH ${COMPILER_ROOT})
# search for programs in the build host directories
SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# for libraries and headers in the target directories
SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
Tweak other settings to your needs, e.g. EXECUTABLE_OUTPUT_PATH, LIBRARY_OUTPUT_PATH, CMAKE_BUILD_TYPE, CMAKE_C_FLAGS_DEBUG, CMAKE_C_FLAGS_RELEASE, the third-party libraries you want to build with, etc.
Press Configure, then Generate; check for any errors (everything should run smoothly, but you never know).
If everything went OK in the generation phase, cd to the build folder, type make, then sit back and relax until the build process is done.
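For those who prefer the command line over the GUI, the equivalent would be roughly this (the paths are placeholders):

# Hypothetical out-of-source build driven by the toolchain file above
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=/path/to/toolchain.cmake /path/to/opencv-2.0.0
make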
It seems you are using an old version of OpenCV, since it still uses the ./configure mechanism. This is good in a sense, because CMake is not known to be cross-compilation friendly.
LDFLAGS="-L/opt/Montavista/pro/devkit/arm/v5t_le/target/usr/lib"
This is where the linker will look for libraries. It should be enough. Are you sure the libraries needed by OpenCV are in this path?
A first hack would be to rename the libraries in /usr/lib so that the linker doesn't find them, and see if it finds the target libraries instead. This is ugly, maybe more than ugly. Don't do it. Yet.
A second solution is to do a native compilation, but in an emulated ARM box, not on the real, slow, memory-poor hardware. I have no experience with this kind of cross-compile method either, but here is a link to get you started.
EDIT
Wait!! Which version of OpenCV are you using? I thought OpenCV was not using ./configure et al.? There is probably a more elegant solution using ./configure flags. Or maybe non-optional libraries are somehow hardcoded.
Interestingly, I'm currently trying to get version 2.1.0 to build for ARM. It relies on CMake, which is a real pain to get ready for cross-compiling. There's no way to specify which toolchain to use; I have to spot the variable names for all the binutils, hoping not to forget any. There are still a bunch of magically defined variables that prevent it from building; I'm giving up right now. I'm still seeing some -march=i686 magically appended, and some libs referenced from my build system. What a mess!
Maybe when I have time I'll try to downgrade to an older version making use of more standard tools, but CMake clearly complicates the situation here.
