Cross compiling for Raspberry Pi 2 error - C

I wanted to start cross compiling for the Raspberry Pi 2 on 32-bit Ubuntu (in VirtualBox), so I downloaded the toolchain from the GitHub site (https://github.com/raspberrypi/tools) and tried to compile a simple hello world program with the following command (I've added the path to the bin folder that contains arm-linux-gnueabihf-gcc-4.8.3 to the PATH variable):
arm-linux-gnueabihf-gcc-4.8.3 HelloWorld.c
However, I always get the following error message:
path/to/the/linker/in/the/toolchain/ld:/path/to/the/libc.so.6file/in/the/toolchain/libc.so.6: file format not recognized; treating as linker script
and subsequently a syntax error.
When I look into libc.so.6, I see a single line containing:
libc-2.13.so
The libc-2.13.so file is present in the same folder as the libc.so.6 file. When I invoke
file libc-2.13.so
I get:
libc-2.13.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), BuildID[sha1]=dbd0cdca5a677bea1417be1272f4c5ef43bd3e22, for GNU/Linux 2.6.26, stripped
I don't know what could cause this error, since both the linker and the libc.so.6 file clearly come from the toolchain, so the file format should be recognized, right?
Can someone point me in the right direction here? Thanks!

I will suggest an alternate way to do the cross compilation. I tried it and it works. You can use crosstool-NG. It gives you a menu-driven way to set up your toolchain for cross compilation, with lots of options you can explore.
Right now you are building for ARM/RPi, but if your target CPU changes tomorrow it will be very easy to reconfigure the toolchain.
You can find easy steps in this article. I hope this works for you.
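For example, a rough sketch of the crosstool-NG workflow (the sample name here is an assumption; check ct-ng list-samples for what your version actually ships):
ct-ng list-samples
ct-ng armv7-rpi2-linux-gnueabihf
ct-ng menuconfig
ct-ng build
By default the finished toolchain lands under ~/x-tools, and you add its bin directory to your PATH.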

When I look into libc.so.6, I see a single line containing:
libc-2.13.so
I just ran into this.
The problem is way simpler than you think. When you un-gz'd and untar'd the toolchain, libc.so.6 became a plain text file. It is supposed to be a symbolic link pointing at the correct file, libc-2.13.so.
If you are using Windows and 7-Zip, make sure to click "Run as Administrator" when you start 7-Zip; if you simply drag and drop, the error is not so obvious.
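If you hit this, a quick way to confirm and repair it on the Linux side (a sketch; the directory below is a placeholder for wherever libc.so.6 lives in your toolchain):
cd /path/to/the/folder/containing/libc.so.6
file libc.so.6   # a broken extract reports "ASCII text" instead of "symbolic link to libc-2.13.so"
rm libc.so.6
ln -s libc-2.13.so libc.so.6
Extracting the tools archive with tar directly on Linux preserves the symlinks in the first place.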

In my first effort, I had to include the full path to gcc in the command. After that I just compiled programs on the RPi itself.
~/toolchain/raspbian-toolchain-gcc-4.7.2-linux32/bin/arm-linux-gnueabihf-gcc whets.c
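If you'd rather not type the full path every time, you can put the toolchain's bin directory on PATH for the current shell first (a sketch using the toolchain location from the command above):
export PATH=$PATH:~/toolchain/raspbian-toolchain-gcc-4.7.2-linux32/bin
arm-linux-gnueabihf-gcc whets.c -o whets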

Related

Run valgrind on cross compiled executable

I'm using an Ubuntu 18.04 VM and trying to find a way to run a Valgrind check on an ARM Linux executable. I've tried compiling with the local gcc but ran into some problems. The executable is created by a Makefile provided with the project. I've tried the Linaro emulator, following guides online, but faced multiple issues; for each one I searched online for solutions, but all failed. What are my options for running Valgrind?
As long as I can check the program for memory leaks, any approach is fine.
What I get when I run Valgrind on the executable now:
valgrind: failed to start tool 'memcheck' for platform 'arm-linux': No such file or directory
For reference, file reports the executable as:
nrf52832_xxaa.out: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, with debug_info, not stripped
I've searched through multiple posts for solutions but couldn't find any.
Cross compile Valgrind and execute it on the target. There are no other ways; you can't even use QEMU to run Valgrind.
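A rough sketch of cross-compiling Valgrind, assuming an arm-linux-gnueabihf toolchain is on the PATH (the version number and install path are placeholders; depending on the Valgrind release, configure may need an armv7 host triplet to recognize 32-bit ARM):
tar xf valgrind-X.Y.Z.tar.bz2
cd valgrind-X.Y.Z
./configure --host=armv7-linux-gnueabihf --prefix=/usr/local CC=arm-linux-gnueabihf-gcc
make -j4
make install DESTDIR=$PWD/dest
Copy the contents of dest/ to the target, keeping the /usr/local layout, and run valgrind there.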
It is mandatory to run the executable on the device.
Consider downloading the precompiled package for your architecture, for example from https://packages.debian.org/search?keywords=valgrind, pulling in the mandatory dependencies, and installing everything on your embedded device. I usually pick the version according to the installed version of libc.
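For example (a sketch; the exact .deb file name depends on the Debian release and architecture you pick):
dpkg -x valgrind_*_armhf.deb valgrind-extracted/
# copy valgrind-extracted/usr to the device, or install the .deb directly on the device:
sudo dpkg -i valgrind_*_armhf.deb
Repeat for any dependencies dpkg complains about.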

qemu-arm running compiled binary

Trying to run a compiled binary I've extracted from a firmware on qemu, however I encounter this error:
root@ubuntu14:~# qemu-arm -L /usr/arm-linux-gnueabi ~/x
/system/bin/linker: No such file or directory
root@ubuntu14:~# file ./x
./x: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), stripped
I'm using the "-L" flag, as suggested in:
qemu-arm can't run arm compiled binary
However, this flag doesn't seem to make a difference for me, and neither does setting QEMU_LD_PREFIX.
Could it be some missing dependencies?
It looks like the system is not able to find the dynamic linker (which in your case appears to be /system/bin/linker, rather than the normal /lib/ld-linux-armhf.so.3 or similar).
Since I don't have access to your code, I've tried to reproduce this by mounting a Raspberry Pi "Raspbian" image on /mnt on my system. If I try to run /mnt/bin/echo hello, like this:
qemu-arm /mnt/bin/echo hello
I get a similar error:
/lib/ld-linux-armhf.so.3: No such file or directory
I can provide an explicit path to the dynamic linker like this:
qemu-arm /mnt/lib/ld-linux-armhf.so.3 /mnt/bin/echo hello
Now I get a different error:
/mnt/bin/echo: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
That's actually great, because that is a normal "I can't find my shared libraries" error, and the solution is to use LD_LIBRARY_PATH. Rather than setting this in our environment, we can set this in the environment created by qemu-arm with the -E flag:
qemu-arm -E LD_LIBRARY_PATH=/mnt/lib/arm-linux-gnueabihf/ /mnt/lib/ld-linux-armhf.so.3 /mnt/bin/echo hello
Which gets me the output:
hello
I suspect that these same two techniques -- providing an explicit path to the linker, and providing an explicit library search path in LD_LIBRARY_PATH -- may help you out. Let me know how it works!
/system/bin/linker is the Android dynamic linker, so you need a directory with the Android dynamic linker and dynamic libraries, not one for Linux (which is what /usr/arm-linux-gnueabi will be). You should be able to pull the relevant files out of your firmware image, I expect.
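For instance (a sketch, assuming you have unpacked the firmware into a directory containing system/bin/linker and system/lib):
qemu-arm -L /path/to/extracted/firmware -E LD_LIBRARY_PATH=/path/to/extracted/firmware/system/lib ./x
With -L pointing at the extracted firmware root, qemu-arm resolves /system/bin/linker inside it instead of on the host.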

Buildroot ARM Toolchain for arm7tdmi to compile SourceForge Archopen

I'm interested in compiling the SourceForge project https://svn.code.sf.net/p/archopen/code/ArchOpen/trunk/, and more specifically the app AOnes, which is a NES emulator for the Archos Gmini 400 (an inactive old project).
Analyzing the source code, I saw that the Gmini 400 is an arm7tdmi device with no MMU, and that the toolchain used to compile it was a buildroot one named arm-linux-nofpu.
I supposed (according to the buildroot-2009-02 menuconfig) that no-fpu means soft floating point, so I tried to build such a toolchain.
I built a toolchain with buildroot-2013-02 (the 2009 and 2010 versions don't work for me) with the following options:
arm7tdmi
no MMU
Software Floating Point
Enable elf2flt support (I saw such a reference in the Makefile of ArchOpen)
I left the other options as they were and ran the build.
I made a checkout of ArchOpen, launched the configuration script to choose Gmini4XX as the target (and not Gmini 402, which is quite different), selected defaut.rules, and edited the resulting Makefile.conf to adapt the tool paths and names (as my generated toolchain name is different).
First error:
[thread.o]
{standard input}: Assembler messages:
{standard input}:1236: Error: Rn must not overlap other operand -- swpb r0,r3,[r0]
Well, this code is supposed to be working, but I opened thread.h and corrected the source to get past it (by adding a "&").
Second error:
undefined reference to __aeabi_idivmod and undefined reference to __aeabi_ldivmod
According to Google, it seems to be a missing -lgcc problem.
I edited the makefile in the wav folder to add -lgcc and specified -L/lib_folder_of_my_toolchain_containing_libgcc.a
Third error:
in gcc/config/arm/lib1funcs.asm : multiple definition of __divsi3
in gcc/config/arm/lib1funcs.asm : undefined reference to raise
in libgcc.a (some .o inside) : undefined reference to __aeabi_unwind_cpp_pr0
I've no idea how to solve this...
Does anyone have an idea? Can anyone help me get a working ARM7 toolchain compatible with this ArchOpen code?
Thanks!
Well, in this particular case, going back to 2005 was a good solution...
On Ubuntu 5.04, buildroot was built with the default generic ARM (little endian) configuration, except for the following options:
GCC 3.3.5
Don't use the daily uClibc snapshot
The processor has no MMU
No large file support
Use softfloat by default
Don't install BusyBox (as I only wanted the toolchain)
Don't create an Ext2 filesystem (same reason as above)
The build fails just after compiling the last GCC phase. At this point, add buildroot/build_arm_nofpu/staging_dir/bin to the PATH environment variable, download the libfloat source tarball (libfloat-990616.orig.tar.bz2), and edit its Makefile, replacing gcc, ld and as with arm-linux-uclibc-gcc, arm-linux-uclibc-ld and arm-linux-uclibc-as respectively. Build libfloat (make clean & make), copy libfloat.a into buildroot/build_arm_nofpu/staging_dir/lib, and run the buildroot make again (without cleaning). The build should then end successfully. With this toolchain, mediOS compiles without any warning.
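In commands, the extra libfloat step looks roughly like this (a sketch; the buildroot and tarball paths are placeholders for your own layout):
export PATH=$PATH:/path/to/buildroot/build_arm_nofpu/staging_dir/bin
tar xjf libfloat-990616.orig.tar.bz2
cd libfloat*          # the name of the extracted directory may differ
# edit Makefile: CC=arm-linux-uclibc-gcc, LD=arm-linux-uclibc-ld, AS=arm-linux-uclibc-as
make clean && make
cp libfloat.a /path/to/buildroot/build_arm_nofpu/staging_dir/lib
cd /path/to/buildroot && make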

How would one compile a program for the Coldfire toolchain?

I'm trying to compile a simple hello world application to be run on uClinux (2.4), which is running on a board with a Freescale Coldfire (MCF5280C) processor, and I'm not quite sure what to do here.
I know I need to compile with the correct version/tools from Freescale to target this hardware, so I downloaded and installed the Coldfire toolchain and verified that the one I have is for my target:
mike@linux-4puc:/usr/local/m68k-elf/bin> ./gcc -v
Reading specs from /usr/local/lib/gcc-lib/m68k-elf/2.95.3/specs
gcc version 2.95.3 20010315 (release)(ColdFire patches - 20010318 from http://fiddes.net/coldfire/)(uClinux XIP and shared lib patches from http://www.snapgear.com/)
I tried a simple gcc "file" type command:
mike@linux-4puc:/home/mike> /usr/local/m68k-elf/bin/gcc test.c
/usr/local/m68k-elf/bin/ld.real: cannot open crt0.o: No such file or directory
collect2: ld returned 1 exit status
Which does not work at all, so it's clearly more complex than that. The output almost looks like it wants me to build the toolchain before I use it? Has anyone done this before? I'm not sure what I need to do or whether I just need some flags.
You might also try seeing if you have a command called m68k-elf-gcc or something along those lines. This is a common naming for cross-compilers.
As for your problem, it sounds like there is something wrong with your compiler setup. crt0.o is the object file that contains C-runtime setup code. The linker (what is actually giving the error) should know where this file is if setup properly.
When you installed you should have run make install as the last step without having modified anything since the make step. The configuration step will setup certain variables and such based on the path where it's supposed to be installed.
Where did you get a Freescale toolchain? I took a look at their site and it seemed only third parties supplied C++ cross-compilers. In the toolchain I get from NetBurner (for use with their hardware), the crt0.o file exists under the gcc-m68k\m68k-elf\lib directory.
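As a quick check (a sketch reusing the paths from the question), it's worth looking for alternative driver names and seeing whether crt0.o was installed anywhere under the toolchain's prefix:
ls /usr/local/m68k-elf/bin/
find /usr/local -name crt0.o
If crt0.o exists but ld can't find it, the toolchain was probably installed to (or configured for) a different prefix than the one it's being run from.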

Debugging cross-compiled code: Linux->Windows

I'm cross-compiling a project from Linux to target Windows (using mingw). The output is a DLL and p-invoking into it from C# works, but debugging is very difficult. The build outputs a .o file, which can provide symbols to gdb, but basically all I can do there is break on exceptions and find the name of the function that was executing when the exception happened; not even the full stack trace. I can't debug with WinDbg because I don't have .pdb files.
This is an open source project set up to build on Linux; I believe their build process relies on several installed Linux packages to work.
Do I have any options here? Is there a utility that can convert .o files into .pdb? Or some program that can give me more information than gdb when debugging?
Try an IDE that supports MinGW, for example the open source Code::Blocks.
Another possibility is to do it manually: compile with debug symbols, start your application and attach the GDB debugger to it. GDB is also part of the MinGW32 distribution. Then you can set your breakpoints and debug your application.
But I guess using Code::Blocks is more comfortable.
By the way, the GCC compiler does not generate .pdb files because it is a proprietary format.
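The manual route above might look roughly like this (a sketch; the cross-compiler name, file names, and function name are placeholders, and your project's build system may pass the flags differently):
# on the Linux build machine: keep debug info, no optimisation
i686-w64-mingw32-gcc -g -O0 -shared -o mylib.dll mylib.c
# on Windows, with a MinGW gdb, after the C# host has loaded the DLL:
gdb -p <pid of the host process>
(gdb) break my_exported_function
(gdb) continue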
What xpol maybe means is: if you have a complete MinGW installation, then Code::Blocks can use gdb to visualize a debugging session like it is done in Visual Studio or Eclipse. See the chapter "Debugger" at http://www.codeblocks.org/features
You can generate a .pdb file using cv2pdb.exe from Visual D. This works even for programs not written in D if they were compiled with MinGW. Once you've downloaded and installed Visual D, cv2pdb.exe can be found at C:\Program Files (x86)\VisualD\cv2pdb\cv2pdb.exe.
You can run cv2pdb.exe against an executable like this:
cv2pdb.exe -n target.exe
This will produce a file called target.pdb. Assuming both target.pdb and target.exe are in the current directory, you can then use windbg like this:
windbg -sflags 0x80030377 -y . -z target.dmp
In this case I'm also passing a minidump file as target.dmp. This can be omitted. The -sflags 0x80030377 option tells windbg to load target.pdb even though it thinks it doesn't match target.exe.
Note that it can take windbg a very long time to load target.pdb. Just wait until it no longer says *BUSY* to the left of the command entry box.
Alternatively you can try DrMinGW.
