I have two binary files generated via 'objcopy -O binary' from the respective ELF files. The ELF files are built with arm-none-linux-gnueabi toolchains; one from Linaro GCC 4.6.2 and the other from CodeSourcery GCC 4.6.3.
I load the binary files into memory via U-Boot. The one built with Linaro executes as expected, while the one built with CodeSourcery most probably crashes: there is no error on the U-Boot prompt, but the program seems to hang.
Using 'arm-none-linux-gnueabi-readelf -S' from the binutils of the respective toolchains does not show much difference between the files except for address offsets. Are there any tools/techniques that can help in this kind of situation before I attempt runtime debugging on the target?
Thanks!
The difference turned out to be the compiler option -munaligned-access. The CodeSourcery toolchain enables this by default for ARMv6 and later architectures.
http://gcc.gnu.org/gcc-4.7/changes.html
Although this option appeared in upstream GCC in version 4.7, CodeSourcery had added the support earlier in their toolchain.
To figure this out I tracked the data abort exception and then compiled the culprit file with the -save-temps option. Comparing the intermediate .s files provided the hint.
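For example (culprit.c standing in for the actual source file), rebuilding with -save-temps keeps the intermediate .s file for comparison, and -mno-unaligned-access should restore the Linaro-like behaviour if the toolchain accepts that flag:

arm-none-linux-gnueabi-gcc -c -save-temps culprit.c -o culprit.o
arm-none-linux-gnueabi-gcc -c -mno-unaligned-access culprit.c -o culprit.o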
What I can advise is to compare the default flags both compilers were built with:
/path/to/cross-compiler/bin/arm-*-*-gcc -Q -v
And preprocessor definitions:
/path/to/cross-compiler/bin/arm-*-*-gcc -dM -E - < /dev/null
The reason your code compiled with Linaro GCC works is likely that it has some options enabled by default that the CodeSourcery one does not.
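For example (paths are illustrative), you can dump the predefined macros of both compilers and diff the results:

/path/to/linaro/bin/arm-none-linux-gnueabi-gcc -dM -E - < /dev/null > linaro.defs
/path/to/codesourcery/bin/arm-none-linux-gnueabi-gcc -dM -E - < /dev/null > codesourcery.defs
diff linaro.defs codesourcery.defs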
Related
I'm currently compiling sources using --coverage with GCC. The generated .gcno files (and the instrumented libraries) are to be packed in an RPM and the code coverage evaluated on another (target) platform.
Now, I'm having a problem getting the coverage data, because when I run the programs calling the instrumented code, I get messages telling me that I have a version mismatch. They look like:
".gcda:Version mismatch - expected A85R got B12R"
Now, I've seen this question: GCOV Version mismatch - expected 700e got 408R, which says I must use the same toolchain when compiling and when executing the code.
I'm compiling using gcc 11.2.1, and gcov --version says the same thing, on the source platform.
On the target platform, gcc --version and gcov --version both give that very same version number.
The compiling and the testing are done on the same "physical" machine, but in different Docker containers. On both of them, the GCC and gcov versions are the same.
I've done further testing: even in the Docker container where we compile, we cannot run the coverage; we get the same error. Or, to be more precise, when the code is compiled using gcc 11, it will say "version mismatch". However, when it is compiled using gcc 8.5, it works.
The setup is that we have the GCC 11 toolset, which requires gcc 8.5 to install. By default, gcc 8.5 is enabled, and you have to use a script (provided by the gcc 11 toolset) to enable the latter. That script updates various variables, like PATH, LIBRARY_PATH, etc., so that GCC 11 is found first.
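(I believe it is the standard gcc-toolset enable mechanism, i.e. something along the lines of the following, though the exact path may differ on our containers:

source /opt/rh/gcc-toolset-11/enable

or the equivalent scl enable gcc-toolset-11 invocation.)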
However, I'm pretty sure it doesn't upgrade the libc.so library, and that this is the cause of the problem: I compiled two of our simplest libs (each of them having no dependencies whatsoever, except libc) with gcc 11, in coverage mode. Then I compiled a simple test program, without coverage, calling some functions from the two instrumented libs. I checked (with readelf -d) that the program only links to these two libs (and libc).
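Concretely, I checked with something like the following (test_program standing in for the actual binary name):

readelf -d ./test_program | grep NEEDED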
Calling this test program while in the compilation container results in the version mismatch error, which would lead me to conclude that our libc.so isn't compatible with gcc 11.
I wonder if there is a way to get a "native" gcc 11, instead of a "toolset" package which has to be installed over gcc 8.5 (my colleague in charge of creating the Docker containers tells me that for gcc 9 and above, there are only "toolset" packages, requiring gcc 8.5 to install).
Our target platform runs Rocky Linux, and I think our development platform is Red Hat, if that has any importance here.
When using Clang v8.0.0 on Windows (from the LLVM prebuilt binaries) with -g or -gline-tables-only, the source map tables are not picked up by the gdb or lldb debuggers.
With the -g flag the file grows in size (which is to be expected), yet neither gdb nor lldb picks the source up.
When compiled with gcc (with the -g flag), though, the source files are detected by the debugger.
I have tried running the same command (clang -g <codefile>) on macOS High Sierra (clang -v says it is Apple LLVM version 10.0.0 (clang-1000/10.44.4)), where the source files are picked up by lldb. So I guess the issue is localized to my Windows instance or the LLVM for Windows build.
P.S. Output of clang -v on Windows:
clang version 8.0.0 (tags/RELEASE_800/final)
Target: x86_64-pc-windows-msvc
Thread model: posix
InstalledDir: C:\Program Files\LLVM\bin
On Windows, Clang is not self-sufficient (at least not the official binaries). You need to have either GCC or MSVC installed for it to function.
As Target: x86_64-pc-windows-msvc indicates, by default your Clang is operating in some kind of MSVC-compatible mode. From what I gathered, it means using the standard library and other libraries provided by your MSVC installation, and presumably generating debug info in some MSVC-specific format.
Add --target=x86_64-w64-windows-gnu to build in GCC-compatible mode. (If you're building for 32 bits rather than 64, replace x86_64 with i686). This will make Clang use headers & libraries provided by your GCC installation, and debug info should be generated in a GCC-compatible way. I'm able to debug resulting binaries with MSYS2's GDB (and that's also where my GCC installation comes from).
If you only have GCC installed and not MSVC, you still must use this flag.
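For example (main.c being a placeholder for your source file), with MSYS2's GCC and GDB installed:

clang --target=x86_64-w64-windows-gnu -g main.c -o main.exe
gdb main.exe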
How do I know this is the right --target? This is what MSYS2's Clang uses, and I assume they know what they're doing. If you don't want to type this flag every time, you can replace the official Clang with MSYS2's one, but I'm not sure if it's the best idea.
(I think they used to provide some patches to increase compatibility with MinGW, but now the official binaries work equally well, except for the need to specify the target. Also, last time I checked their binary distribution was several GB larger, due to their inability to get dynamic linking to work. Also some of the versions they provided were prone to crashing. All those problems come from them building their Clang with MinGW, which Clang doesn't seem to support very well out of the box. In their defence, they're actively maintaining their distribution, and I think they even ship libc++ for Windows, which the official distribution doesn't do.)
On my 64-bit Solaris, gcc generates a 32-bit executable by default (to generate a 64-bit executable, I need to add the "-m64" compile option), while on my 64-bit Linux, gcc generates a 64-bit executable by default. I tried to find the cause on the GCC website, but unfortunately there are so many related options (--with-arch, --with-cpu, --with-abi, etc.). From the documentation, I can't see which one determines whether a 32-bit or 64-bit executable is generated.
Could anyone give some advice on this issue?
It depends on how the compiler is installed, which really comes down to the distribution and possibly install options. If there is any doubt and need for certainty, simply include the -m option; it does not hurt to use -m32 when 32-bit is the default, and likewise for -m64 when 64-bit is the default.
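For example (hello.c being any trivial source file), you can verify what you got with file:

gcc -m32 hello.c -o hello32
gcc -m64 hello.c -o hello64
file hello32 hello64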
When you compile gcc, you use the --target option to specify the system you want to generate the compiler for. To know which targets GCC supports, you can either check the gcc/configure file or look through the gcc/config/ folder. Once you generate the compiler, the "compile" command, i.e., gcc source.c -o object.o, will always generate an object for the default target you compiled gcc for.
However, you may be able to generate objects for variations of the specified target; e.g., on 64-bit systems you may be able to generate both 32-bit and 64-bit binaries.
As an example, configure --target=mips64-elf will generate the gcc compiler for the 64-bit mips target. Once the compiler is generated, whenever you type in gcc -c source.c -o object.o, a 64-bit mips object file will be generated.
So if you run gcc -v on both of the systems in question, you will see how each gcc was configured to begin with, and that should answer your question.
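For example, the configured target and configure flags are printed in the gcc -v output (gcc writes this to stderr):

gcc -v 2>&1 | grep -E 'Target|Configured'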
In the document you referred to, please grep for the "enable-targets" option.
I'm trying to compile a program to run on a Linux-powered board, which has an ARM926EJ-S processor. So I've installed the Debian embedded cross-development toolchain and tried compiling a Hello World in gcc with -march=armv5te. When I tried running the binary on the board it crashed with file-not-found errors (due to library versions); after that I tried compiling with the -static flag and got a segfault (0x0000827c in __libc_start_main (), said gdb through gdbserver).
Any idea on what to do here to get something running?
Apparently the solution is to try as many toolchains as you can find. Eventually you'll find the one that works, after spending a few too many hours compiling toolchains. In this case it was a uClibc Buildroot toolchain.
You can find toolchains which support ARM926EJ-S on the Linaro page. Use the most recent arm-linux-gnueabi toolchain from the Linaro project. I am currently using a version with gcc 4.9.4, which you can find here.
It is recommended to use -mcpu=arm926ej-s instead of -march and -mtune, because -mcpu combines -march and -mtune for the specified processor (see the gcc documentation). -mcpu was deprecated for x86, but not for ARM.
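For example (hello.c standing in for your source), a static build for this CPU would look like:

arm-linux-gnueabi-gcc -mcpu=arm926ej-s -static hello.c -o hello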
Another possibility would be building your own toolchain via crosstool-NG. But the Linaro toolchains work out of the box if you don't need some specific setting (for example, only using static libraries).
Does anyone have info on how to build an llvm+clang toolchain using binutils and newlib, and how to use it?
host: Linux, AMD64
target: cortex-m3, stm32
c-lib: newlib
assembler: gnu as
I created a firmware framework, PolyMCU (https://github.com/labapart/polymcu), that is based on CMake and supports GCC and LLVM. Because it is based on CMake, you can build your firmware on Linux/Windows/macOS.
It also uses Newlib - it looks like all your requirements are there!
I also wrote a blog post where I compared GCC and LLVM build sizes on ARM Cortex-M: http://labapart.com/blogs/3-the-importance-of-the-toolchain-version-in-embedded-space
Interesting results: Clang-generated code is not much bigger than GCC's on Cortex-M...
Unfortunately, right now clang does not support flexible cross-compilation settings, so most probably you will need to invoke the necessary tools with all the necessary arguments yourself.
Start by building llvm + clang with the --target=thumbv7-eabi configure argument (note that you will need llvm + clang as of yesterday for this). You might want to specify --enable-targets=arm as well. This will instruct clang to generate code for Thumb by default. After this you can invoke clang -mcpu=cortex-m3 to generate the code for you.
You will have to provide all necessary include / library paths by hand via -I / -L, etc.
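A rough sketch of such an invocation with a reasonably recent clang (the triple, CPU, and paths are illustrative and depend on how you built clang and where your newlib and binutils live):

clang --target=thumbv7m-none-eabi -mcpu=cortex-m3 -mthumb -I/path/to/newlib/include -c main.c -o main.o

Linking works the same way: point -L at your newlib libraries and use GNU ld/as from binutils.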
If you're happy with some C++ hacking, you can write the necessary "HostInfo", so it will invoke the right tools and provide the right paths automagically.