I've used the following linker command to convert a TensorFlow model file myv2.tflite into a "dummy" object file myv2.o, so that I can link it into an executable and avoid having to ship the model file separately:
ld --relocatable --format=binary --output=myv2.o myv2.tflite
I've done this on Ubuntu x86_64. However, I will need to do the same for Arm. I suppose I cannot reuse the same myv2.o? Do I have to regenerate the file on Arm using ld from an appropriate Arm toolchain?
I suppose I cannot reuse the same myv2.o?
You can't reuse it; that file is an ELF 64-bit x86_64 object file, and you need one built for your target architecture.
Do I have to regenerate the file on Arm
No, there's no reason ld needs to run on an ARM machine.
using ld from an appropriate Arm toolchain?
Exactly: just replace ld with a linker that targets ARM, e.g. arm-none-eabi-ld if this is for bare metal. Again, this is normal cross-development: there is no need to run the tools on the target architecture, only to build for it.
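For example, the exact same command from the question can be run on the x86_64 build machine, only with the ARM-targeting linker instead of the host ld (the arm-none-eabi- prefix is just one common example; use whatever prefix your toolchain ships with):

arm-none-eabi-ld --relocatable --format=binary --output=myv2.o myv2.tflite
arm-none-eabi-nm myv2.o

The nm call is just a sanity check: the embedded model should show up as the usual _binary_myv2_tflite_start/_end/_size symbols that ld generates for binary input.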
Related
I am trying to cross-compile for my Raspberry Pi. Unfortunately the Pi has an older version of libstdc++ than my build machine, and when I try to run my executable it says: "./<exe_name>: /usr/lib/arm-linux-gnueabihf/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by ./<exe_name>)". I've gotten it working by using "-static", but really I'd like to be able to tell both ld (the gcc linker) and lld (the clang linker) "Only look in these paths for any libraries"; it keeps finding the system one and linking against it. I've rsynced the Raspberry Pi's /usr/lib and /lib directories over to the host machine, and I'd like to say "use /path_to_raspberry_pi_rsync/lib and /path_to_raspberry_pi_rsync/usr/lib" only.
Bonus points for getting ld and lld to tell me what path it's using when it tries to link.
I'd like to be able to tell both ld (the gcc linker) and lld (the clang linker) "Only look in these paths for any libraries"
Both will do that if you supply proper -L/path/to/target/libraries with sufficient contents.
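For example, something along these lines, with the rsynced paths from the question (the cross-compiler driver name is only a guess at your toolchain):

arm-linux-gnueabihf-g++ main.o -o my_exe -L/path_to_raspberry_pi_rsync/lib -L/path_to_raspberry_pi_rsync/usr/lib

clang forwards -L to lld in the same way, so the same directories can be given to both linkers.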
it keeps finding the system one and linking against it. I've rsynced the Raspberry Pi's /usr/lib and /lib directories over to the host machine and I'd like to say "use /path_to_raspberry_pi_rsync/lib and /path_to_raspberry_pi_rsync/usr/lib" only.
Your problem most likely stems from the fact that what you rsynced are runtime libraries, not development libraries. If, for example, /path_to_raspberry_pi_rsync/usr/lib contains libstdc++.so.6 but not the libstdc++.so symlink, the linker will keep looking for libstdc++.so until it finds one in a system directory.
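If that is what happened, recreating the development symlink inside the rsynced tree is usually enough for the link step to find the right library; a sketch, assuming the Pi's usual multiarch layout:

cd /path_to_raspberry_pi_rsync/usr/lib/arm-linux-gnueabihf
ln -s libstdc++.so.6 libstdc++.so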
In addition, once you do succeed in limiting your link to just the rsynced libraries, it is likely that your link will fail with unresolved libstdc++ symbols. That is because you need a matching set of headers and libraries.
Your best bet is to obtain a proper toolchain targeting your runtime environment.
Bonus points for getting ld and lld to tell me what path it's using when it tries to link.
With ld, you can add the -t flag (-Wl,-t when linking through the compiler driver) and it will tell you about each and every library and object file it opens. lld may support this flag as well.
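A sketch of what that looks like when linking through the compiler driver (the driver name is again an assumption about your toolchain):

arm-linux-gnueabihf-g++ main.o -o my_exe -Wl,-t -L/path_to_raspberry_pi_rsync/usr/lib

The trace output then lists every archive, shared library and object file the linker actually opened, so you can see immediately whether it picked up the rsynced libstdc++ or the host one.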
I built a shared library on Ubuntu 14.04 for the ARM platform. The library compiled and linked successfully, and I can inspect the exported symbols with the nm command, but when I check the .so file header it reports that the architecture is unknown.
Is this library built correctly? Why is the library architecture unknown?
objdump -f libMyLib.so
libMyLib.so: file format elf32-little
architecture: UNKNOWN!, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x000033a0
You need to use the objdump binary provided by the toolchain of your target system (ARM), not the one from the host system (x86_64).
As an example: I have set up a Linux x86_64 host system targeting OpenWrt MIPS; my toolchain folder contains files such as:
mips-openwrt-linux-gnu-ar
mips-openwrt-linux-gnu-as
mips-openwrt-linux-gnu-gcc
mips-openwrt-linux-gnu-ld
mips-openwrt-linux-gnu-objdump
mips-openwrt-linux-gnu-nm
These are the tools for manipulating programs for the OpenWrt MIPS system, so instead of just calling objdump, I need to call ./mips-openwrt-linux-gnu-objdump -f <bin file> to read the compiled file and get the proper output.
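Applied to your library, that means running the objdump that came with whatever ARM toolchain you built it with, for example (the prefix is a guess):

arm-linux-gnueabihf-objdump -f libMyLib.so

With the matching objdump the header should be decoded properly, reporting something like elf32-littlearm and an ARM architecture instead of UNKNOWN!.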
I am trying to build a gcc cross compiler. I understand that before compiling the cross compiler I need to have the target binutils built already. Why does building the compiler need the target binutils? The compiler alone only takes high-level code and turns it into the assembly that I defined in the compiler sources, so why do I need the target binutils to compile the cross compiler? All of the cross-compiler documentation says they need to be built before the cross compiler is compiled (e.g. http://wiki.osdev.org/Building_GCC and http://www.ifp.illinois.edu/~nakazato/tips/xgcc.html).
GCC needs an assembler to transform the assembly it generates into object files (machine code), and a linker to link object files together to produce executables and shared libraries. It also needs an archiver to produce static libraries/archives.
Those three are usually provided by the binutils package (among other useful tools): the GNU assembler as, the linker ld, and the archiver ar.
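To make the division of labour concrete, here is a rough sketch of the individual steps gcc normally drives for you (the arm-none-eabi- prefix is just an example of a cross toolchain):

arm-none-eabi-gcc -S main.c -o main.s
arm-none-eabi-as main.s -o main.o
arm-none-eabi-gcc -c util.c -o util.o
arm-none-eabi-ar rcs libutil.a util.o
arm-none-eabi-ld main.o -L. -lutil -o program.elf

Only the first step is the compiler proper; assembling, archiving and linking are all binutils jobs (and a real bare-metal link would also need startup files and a linker script).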
Your key question seems to be:
why does building the compiler need the target binutils?
As described in Building a cross compiler, part of the build process for a GNU cross-compiler is to build runtime libraries for the target using the newly-compiled cross-compiler. So the binutils for the target need to be present for that step to succeed.
It may be possible to build the cross-compiler first, using empty files for the subset of binutils components that gcc needs (such as as, ld, ar, and ranlib), then build and install the target binutils components into the proper locations, and then build the target runtime libraries.
But it would be less error-prone to do things the following way (and the documentation recommends this): build binutils for the target first, place the specified executables in gcc's source tree, then build the cross-compiler.
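A condensed sketch of that order, following the recipe from the guides linked in the question (the target triple, version numbers, and install prefix are only placeholders):

../binutils-x.y/configure --target=arm-none-eabi --prefix=/opt/cross --disable-nls
make && make install
export PATH=/opt/cross/bin:$PATH
../gcc-x.y/configure --target=arm-none-eabi --prefix=/opt/cross --disable-nls --without-headers --enable-languages=c,c++
make all-gcc all-target-libgcc
make install-gcc install-target-libgcc

Because /opt/cross/bin is on PATH before gcc is built, the gcc build can find arm-none-eabi-as and arm-none-eabi-ld when it comes to compiling libgcc for the target.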
The binutils (binary utilities) provide low-level handling of binary files, such as linking, assembling, and parsing ELF files. The GCC compiler depends on these tools to create an executable, because it generates object files that binutils assemble into an executable image.
ELF is the format that Linux uses for binary executable files. The GCC compiler relies on binutils to provide much of the platform-specific functionality.
Here you are cross-compiling for some other architecture, not for x86, so the resulting binutils are platform-specific. When configuring, you have to set --host different from --target, e.g. --host=i686-pc-linux-gnu with --target=arm-none-linux-gnueabi. The resulting executables are therefore not the same as the binutils the host already has.
In addition, a few basic terms need to be understood:
The build machine, where the toolchain is built.
The host machine, where the toolchain will be executed.
The target machine, where the binaries created by the toolchain are executed.
So binutils provides the tools to generate and manipulate binaries for a given CPU architecture, not for the one the host is using.
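Putting the triples from this answer into an actual configure invocation might look like this (the prefix path is only an example), where build and host are the same x86 machine and only target differs:

./configure --build=i686-pc-linux-gnu --host=i686-pc-linux-gnu --target=arm-none-linux-gnueabi --prefix=/opt/arm-toolchain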
I am trying to run a simulation with the Simcore Alpha/Functional Simulator and I need to create an image file, but it gives an error like "This is not a COFF executable". How can I create a COFF executable from C source on Linux?
In order to do this, you'll need a cross compiling gcc that is built to output COFF files. You may need to build gcc yourself if you can't find a pre-built one.
After you download gcc, you will need to configure it. The important option is --target; so if you want to target an Alpha architecture you would do:
configure --target=alpha-coff
I would also recommend you add a prefix to the binaries and install them into a different directory so you have no problems with the compiler interacting with the system compiler:
configure --target=alpha-coff --prefix=/opt/cross-gcc --program-prefix=coff-
(this will create coff-gcc in /opt/cross-gcc/bin; you can tweak those values if you want something different).
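Once it is installed, you would then invoke the compiler by that prefixed name, for instance:

/opt/cross-gcc/bin/coff-gcc mysource.c -o myprogram

which, assuming the build above went through, produces a COFF executable rather than the host's native format.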
The Linux executable format is called ELF. COFF is a common file format for object modules, which are linked to make an ELF file or an EXE file.
In your case, if you have access to gcc, you can try:
gcc mysource.c -o myprogram
I am trying to run a simple program on a PowerPC embedded system without any operating system. I am using the GNU compiler and linker tools and PSIM as the simulator, and I've written my own very simple linker directive file.
I've used a global variable in my static library and want to use that variable in my sample program. But while linking the sample program, GNU ld gives an error and stops: it says that it cannot find rela.dyn in the linker directive file. I actually do not want a dynamically relocatable library, because I don't have a dynamic loader. What am I doing wrong?
Hard to say without more info. If you don't have an underlying OS, did you use -ffreestanding to avoid linking in the platform runtime?
Edit: -ffreestanding requires -shared? -ffreestanding means compiling for a non-hosted environment. How could such an environment support shared libraries?
-ffreestanding, as Solar says. If that fails, run ld with the --verbose option to see exactly what it is trying to link in: that will enable you to debug further.
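A sketch of that, assuming a bare-metal powerpc-eabi toolchain prefix and placeholder file names (sample.c, your linker script sample.ld, and your static library libmystatic.a):

powerpc-eabi-gcc -ffreestanding -c sample.c -o sample.o
powerpc-eabi-ld --verbose -T sample.ld sample.o -L. -lmystatic -o sample.elf

The --verbose output prints the linker script actually used and an "attempt to open ..." line for every input file and library searched, which should make it clear where the rela.dyn requirement is coming from.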