I have a problem when building Chromium for an ARM platform. Here are some details about my host server:
Linux version 4.2.0-42-generic (buildd@lgw01-55) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) )
And I use Chromium version 53.0.2785.143. I tried to use gn to build Chromium, and here are my arguments in the args.gn file:
target_cpu = "arm"
arm_tune = "generic-armv7-a"
arm_float_abi = "softfp"
Basically, I used these specific arguments because of my ARM platform. The gn command ran without errors. However, when building the project with ninja, the following errors popped up:
ninja: Entering directory `out/Default_arm64'
[1/1] Regenerating ninja files
[296/46119] LINK ./minidump-2-core
FAILED: minidump-2-core
../../third_party/llvm-build/Release+Asserts/bin/clang++ -Wl,--fatal-warnings -fPIC -Wl,-z,noexecstack -Wl,-z,now -Wl,-z,relro -Wl,-z,defs -fuse-ld=gold -B../../third_party/binutils/Linux_x64/Release/bin -Wl,--icf=all -pthread --target=arm-linux-gnueabihf --sysroot=../../build/linux/debian_wheezy_arm-sysroot -L/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/lib/arm-linux-gnueabihf
-Wl,-rpath-link=/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/lib/arm-linux-gnueabihf
-L/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/usr/lib/arm-linux-gnueabihf
-Wl,-rpath-link=/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/usr/lib/arm-linux-gnueabihf
-Wl,-rpath-link=../Default_arm64 -Wl,--disable-new-dtags -o "./minidump-2-core" -Wl,--start-group @"./minidump-2-core.rsp" -Wl,--end-group -ldl -lrt
ld.gold: error: obj/breakpad/minidump-2-core/minidump-2-core.o uses VFP register arguments, output does not
...
I am new to Chromium and have no clue what those errors mean. Does anybody know how to work around this? Any help is appreciated.
Note: I need my arm_float_abi attribute to be "softfp" to match my ARM platform, so please note I cannot change it to "hard". Also, when the float ABI is set to "hard", there are no build errors.
ld.gold: error: obj/breakpad/minidump-2-core/minidump-2-core.o uses VFP register arguments, output does not
This is a linking error indicating that minidump-2-core cannot be linked due to a mismatch in the floating-point ABI: the object minidump-2-core.o is compiled for hard floats (the generated code takes advantage of the ARM VFP unit, hence "uses VFP register arguments"), but the target executable is requested to use soft floats (in which floating-point support is emulated rather than using specialized FP hardware instructions).
According to this bug report, Chromium should build fine with soft float.
My best guess is: try replacing softfp with just soft: arm_float_abi = "soft".
According to the GCC documentation, softfp keeps the soft-float calling conventions but still 'allows the generation of code using hardware floating-point instructions', which could lead to the error seen here.
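For reference, a minimal sketch of the args.gn from the question with only the float ABI changed (the other two arguments are left as they were):
target_cpu = "arm"
arm_tune = "generic-armv7-a"
arm_float_abi = "soft"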
If that won't work, you might want to check this tutorial on cross building Chromium for ARM:
https://unix.stackexchange.com/questions/176794/how-do-i-cross-compile-chromium-for-arm
I posted this question and finally solved it. I used my local toolchain on the ARM platform and compiled it successfully with g++.
Related
I have an STM32F103 project that was initialized using STM32CubeMX. I'm using Neovim for editing and arm-none-eabi-gcc for compilation (with the auto-generated makefile).
I have also installed the clangd LSP, plus bear to generate the compile_commands.json file. Everything works fine except that there are two errors:
stdio.h file not found
Compiler generates FPU instructions for a device without an FPU (check __FPU_PRESENT)
I looked at the core_cm3.h file, and __FPU_USED is disabled, which is exactly what clangd says.
/** __FPU_USED indicates whether an FPU is used or not.
This core does not support an FPU at all
*/
#define __FPU_USED 0U
But I couldn't find any line in my makefile flags that enables the FPU for compilation.
# fpu
# NONE for Cortex-M0/M0+/M3
# float-abi
# mcu
MCU = $(CPU) -mthumb $(FPU) $(FLOAT-ABI)
I also tried commenting out $(FPU) and $(FLOAT-ABI), but the error still exists.
Although I can compile the project without any problems (gcc has no complaints), these errors are getting on my nerves.
Is there a way to fix these errors? Or are there any GCC-based LSPs to use instead of clangd?
There's also ccls on Neovim's LSP list, but I was unable to install it.
Is there a way to fix these errors?
See https://clangd.llvm.org/config#files. You can:
create a clangd configuration file
specify the --sysroot option to point at the location of your toolchain (/usr/arm-none-eabi/ on my system)
and add other needed options (-isysroot, -nostdlib, etc.) if you use them.
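For example, a minimal .clangd sketch (the target triple, sysroot path and -mcpu value are assumptions here; adjust them to your toolchain installation and your part):
# .clangd in the project root (a sketch, not a drop-in config)
CompileFlags:
  Add:
    - --target=arm-none-eabi
    - --sysroot=/usr/arm-none-eabi
    - -mcpu=cortex-m3
    - -mfloat-abi=soft
With --target and -mcpu set for a Cortex-M3, clangd should stop assuming an FPU, and the sysroot should help it find stdio.h from the toolchain's own headers.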
I would advise moving to CMake and generating compile_commands.json anyway.
Are there any GCC-based LSPs to use instead of clangd?
I am not aware of any.
The Erlang Run-Time System (ERTS) has a few drivers written in C that are used to interact with the OS or to access low-level resources. To my knowledge, ERTS compiles these drivers at boot time so they are ready to be loaded from Erlang code. The driver inet_drv.c is one of these drivers; it handles networking tasks like creating sockets and listening for or accepting new incoming connections.
I wanted to test this driver manually to get a general view of the default behaviour of ERTS and to learn how to implement drivers efficiently in the future. I followed the Erlang Reference Manual's instructions for implementing drivers, which say: first, write the driver and compile it with an OS C compiler; second, load the driver from Erlang code using the erl_ddll module; finally, link to the driver from a spawned Erlang process. So this is very simple and easy.
So I tried these steps with the driver inet_drv.c. I located it and tried to compile it with Clang, which is the default C compiler on FreeBSD:
cc inet_drv.c
After that there was an error saying that the file erl_driver.h could not be found. This header is included in the driver's code (#include <erl_driver.h>), so I searched for it, added its directory path to the cc command using the -I option so the compiler would look for the included file in that directory, and recompiled:
cc inet_drv.c -I/usr/ports....
After that another file was missing, so I repeated the same thing 5 or 6 times and finally added all the needed paths for the included files. The result is this command:
cc inet_drv.c
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/emulator/beam
-I/usr/local/lib/erlang/usr/include
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/emulator/sys/unix
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/include/internal
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/emulator/sys/common
-I/usr/ports/lang/erlang/work/stage/usr/local/lib/erlang/erts-10.3.5.14/include/internal
I was surprised by the result: 13 errors and 7 warnings. The shell output and the descriptions of the errors and warnings are in the links below.
My question is: why do these errors occur? What is wrong with what I did?
Since this driver works perfectly for ERTS networking tasks, it must be compiled by ERTS without errors, and ERTS should use an OS C compiler (Clang by default) and add the included header files just as I did. So why did this not work when I tried it?
https://ibb.co/bbtFHZ7
https://ibb.co/sF8QsDx
https://ibb.co/Lh9cDCH
https://ibb.co/W5Gcj7g
First things first:
In my knowledge the ERTS compile these drivers at boot time
No, ERTS doesn't compile the drivers. inet_drv.c is compiled as part of Erlang/OTP and linked into the beam.smp binary.
inet_drv is not a typical driver. Quoting the How to Implement a Driver section of the documentation:
A driver can be dynamically loaded, as a shared library (known as a DLL on Windows), or statically loaded, linked with the emulator when it is compiled and linked. Only dynamically loaded drivers are described here, statically linked drivers are beyond the scope of this section.
inet_drv is a statically loaded driver, and as such doesn't need to be loaded with erl_ddll.
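A quick way to see this from the Erlang shell (just a sketch, not something from the question): networking already works without any erl_ddll:load_driver/2 call, precisely because inet_drv is linked into the emulator.
%% no erl_ddll needed; inet_drv is statically linked into the emulator
{ok, LSock} = gen_tcp:listen(0, [binary, {active, false}]),
ok = gen_tcp:close(LSock).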
On to the compilation errors. All the compiler parameters are added automatically for you when you run make, so if you need to call the compiler manually, it is better to check the command line that make generated and start from that. Let's look at the build log for the Debian Erlang package. Searching for inet_drv we get this command line (line breaks added):
x86_64-linux-gnu-gcc -Werror=undef -Werror=implicit -Werror=return-type -fno-common \
-g -O2 -fno-strict-aliasing -I/<<PKGBUILDDIR>>/erts/x86_64-pc-linux-gnu -D_GNU_SOURCE \
-DHAVE_CONFIG_H -Wall -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes \
-Wdeclaration-after-statement -DUSE_THREADS -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS \
-D_POSIX_THREAD_SAFE_FUNCTIONS -DBEAMASM=1 -DLIBSCTP=libsctp.so.1 \
-Ix86_64-pc-linux-gnu/opt/jit -Ibeam -Isys/unix -Isys/common -Ix86_64-pc-linux-gnu \
-Ipcre -I../include -I../include/x86_64-pc-linux-gnu -I../include/internal \
-I../include/internal/x86_64-pc-linux-gnu -Ibeam/jit -Ibeam/jit/x86 -Idrivers/common \
-Idrivers/unix -c \
drivers/common/inet_drv.c -o obj/x86_64-pc-linux-gnu/opt/jit/inet_drv.o
Some of it will be different since you're building on FreeBSD, but the principle stands: most of the time you'll want to just run make instead of invoking the compiler directly, and if you do need to invoke the compiler, it is much easier to start with the command line that make generated for you.
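If you want the equivalent command line for your own FreeBSD build, here is a heavily hedged sketch (the ports path is the one from the question, and whether the full compiler flags appear in the log depends on how verbose the port's build is):
cd /usr/ports/lang/erlang
make clean build 2>&1 | tee /tmp/erlang-build.log
grep 'inet_drv\.c' /tmp/erlang-build.log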
I am using an ARM cross compiler on an Intel machine and facing the issue below. I am using the Yocto build system.
| In file included from gpufiles.h:12:0,
| from gpufiles.cpp:7:
poky/build/tmp-glibc/sysroots/x86_64-linux/usr/lib/arm-oemllib32-linux-gnueabi/gcc/arm-oemllib32-linux-gnueabi/4.9.3/include/arm_neon.h:31:2: error: #error You must enable NEON instructions (e.g. -mfloat-abi=softfp -mfpu=neon) to use arm_neon.h
| #error You must enable NEON instructions (e.g. -mfloat-abi=softfp -mfpu=neon) to use arm_neon.h
I have added the flags below in Makefile.am:
AM_CPPFLAGS += -mfloat-abi=softfp -mfpu=neon
But I am seeing another issue here:
unrecognized command line option '-mfpu=neon'
Please help me to resolve this. Your help is much appreciated!!
It actually depends on which compiler you are using and which version it is.
Between GCC, the ARMv7 compiler, and the ARMv8 compiler, there are some minor differences in which compiler options are supported.
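One quick check, sketched below (the compiler name is the one from your error message; run it inside the Yocto build environment so it is on PATH), is to see what the cross compiler actually targets and whether it accepts the NEON options at all:
arm-oemllib32-linux-gnueabi-gcc -dumpmachine
echo | arm-oemllib32-linux-gnueabi-gcc -mfloat-abi=softfp -mfpu=neon -dM -E - | grep -i neon
If the second command prints __ARM_NEON__ (or __ARM_NEON), the options are accepted and NEON is enabled; if it fails with "unrecognized command line option", the compiler being invoked is probably not the 32-bit ARM GCC you expect.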
Please refer to the URL below for ARM compiler information:
https://developer.arm.com/products/software-development-tools/compilers
I am trying to compile the source of milcv7.7.8/ks_spectrum on a cluster with mpicc, using Intel compiler version 11.1 20090511. When I make the source in milcv7.7.8/ks_spectrum with the 'make ks_spectrum_hisq' command, this error comes out at the end:
com_mpi.o: In function `initialize_machine':
../generic/com_mpi.c:(.text+0xb447): undefined reference to `_mm_idivrem_epi32'
I know now that this function, _mm_idivrem_epi32, is part of the ia32intrin.h file in the Intel compiler.
When I use the latest Intel mpiicc on the new cluster, with version 14.0.0 20130728 of the Intel compiler, the code compiles successfully.
So is there any way to tell the linker where the function _mm_idivrem_epi32 is located...
_mm_idivrem_epi32() is not a function but rather a compiler intrinsic. When properly handled, it is replaced with a call to __svml_idivrem4() from Intel's Short Vector Math Library, libsvml.
You are most likely being hit by a bug in ICC's auto-vectoriser. Try compiling the same source file with -no-vec and see if this has any effect. Or, better, use the newest ICC version that you have at your disposal.
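A sketch of what that might look like (the extra-flags placeholder is hypothetical; reuse whatever flags your MILC build already passes for com_mpi.c, then relink):
mpicc -no-vec <existing-flags> -c ../generic/com_mpi.c -o com_mpi.o
make ks_spectrum_hisq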
I am currently trying to develop software for a SAM7X256 microcontroller in C. The device is running Contiki OS and I am using the YAGARTO toolchain.
While studying the map file (to try to figure out why the .text region had grown so much), I discovered that several KB of the .text region were assigned to unwind support (see below):
.text 0x00116824 0xee4 c:/toolchains/yagarto/bin/../lib/gcc/arm-none-eabi/4.6.2\libgcc.a(unwind-arm.o)
0x00116c4c _Unwind_VRS_Get
......
0x0011763c __gnu_Unwind_Backtrace
.text 0x00117708 0x1b0 c:/toolchains/yagarto/bin/../lib/gcc/arm-none-eabi/4.6.2\libgcc.a(libunwind.o)
0x00117708 __restore_core_regs
0x00117708 restore_core_regs
....
0x00117894 _Unwind_Backtrace
.text 0x001178b8 0x558 c:/toolchains/yagarto/bin/../lib/gcc/arm-none-eabi/4.6.2\libgcc.a(pr-support.o)
0x00117958 __gnu_unwind_execute
...
0x00117e08 _Unwind_GetTextRelBase
I have tried looking for some information on unwinding and found 1 and 2. However, the following is still unclear to me:
When/why do I need unwinding support?
What part of my code is causing pr-support.o, unwind-arm.o and libunwind.o to be linked?
If applicable, how do I avoid linking in these objects?
In case it is necessary, I am including a link to the complete map file.
Thanks in advance for your help
Edit 1:
Adding Linker commands
CC = arm-none-eabi-gcc
CFLAGSNO = -I. -I$(CONTIKI)/core -I$(CONTIKI_CPU) -I$(CONTIKI_CPU)/loader \
-I$(CONTIKI_CPU)/dbg-io \
-I$(CONTIKI)/platform/$(TARGET) \
${addprefix -I,$(APPDIRS)} \
-DWITH_UIP -DWITH_ASCII -DMCK=$(MCK) \
-Wall $(ARCH_FLAGS) -g -D SUBTARGET=$(SUBTARGET)
CFLAGS += $(CFLAGSNO) -O -DRUN_AS_SYSTEM -DROM_RUN -ffunction-sections
LDFLAGS += -L $(CONTIKI_CPU) --verbose -T $(LINKERSCRIPT) -nostartfiles -Wl,-Map,$(TARGET).map
$(CC) $(LDFLAGS) $(CFLAGS) -nostartfiles -o project.elf -lc Project.a
Several parts to this answer:
the unwinding library functions are pulled in via the exception "personality routines" (__aeabi_unwind_cpp_pr0 etc.) that are referenced from the exception tables in some of the GCC library function modules.
your map file shows that bpapi.o (a module which contains integer division functions) pulls in this exception code. I don't see this in the latest YAGARTO, but I do see it in _divdi3.o, which is another integer division helper module. I can reproduce the effect of the unwinding code being pulled in by writing a trivial main() that does a 64-bit division.
the general reason for C code having (non-trivial) exception tables is so that C++ exceptions can be thrown "through" the C code when you arbitrarily mix C and C++ code in your application.
functions which can't throw or call throwing functions should, if they have exception tables at all, only need trivial ones marked as CANTUNWIND, so that the unwinding library isn't pulled in. You'd expect division helpers to be in this category, and in fact in CodeSourcery's distribution _divdi3.o is marked CANTUNWIND (a way to check this with readelf is sketched after this list).
so the root cause is that YAGARTO's GCC library (libgcc.a) is built inappropriately. Not quite incorrectly, as it should still work, but it's code bloat that you wouldn't expect in an embedded toolchain.
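One way to check this in your own toolchain, as a sketch (the libgcc.a path is taken from the question's map file, with the bin/.. indirection removed):
arm-none-eabi-readelf -u c:/toolchains/yagarto/lib/gcc/arm-none-eabi/4.6.2/libgcc.a > libgcc-unwind.txt
Members whose unwind entries are listed as "cantunwind" in that dump will not drag the unwinding library in; anything referencing a real personality routine can.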
Can you do anything about this? There seems to be no simple way to get the GNU linker to ignore ARM exception sections, even with a /DISCARD/ script - the link to the text section overrides that. But what you can do is add a stub definition for the exception personality routine:
void __aeabi_unwind_cpp_pr0(void) {}
int main(void) { return *(unsigned long long *)0x1000 / 3; }
compiles to 4K using YAGARTO, compared to 14K without the stub. But you might want to investigate alternative GNU tools distributions too.
GCC has an option that eliminates exception handling.
-fno-exceptions
While I'm not familiar enough with YAGARTO to say for sure, it may have a similar option. On GCC, this option eliminates this overhead at the expense of support for standard exceptions.
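If you want to try it with the build from the question, here is a sketch based on the CFLAGS line shown there (whether it actually removes the unwind objects depends on what pulled them in, as the answer above explains):
CFLAGS += $(CFLAGSNO) -O -DRUN_AS_SYSTEM -DROM_RUN -ffunction-sections -fno-exceptions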