stdio.h and FPU errors in Neovim using clangd LSP - c

I have an STM32F103 project that was initialized using STM32CubeMX, and I'm using Neovim for editing and arm-none-eabi-gcc for compiling the code (with the auto-generated Makefile).
I have also installed the clangd LSP server, plus bear to generate the compile_commands.json file. Everything works fine except that there are two errors:
stdio.h file not found
Compiler generates FPU instructions for a device without an FPU (check __FPU_PRESENT)
I looked at core_cm3.h file and __FPU_USED is disabled, which is exactly what clang says.
/** __FPU_USED indicates whether an FPU is used or not.
This core does not support an FPU at all
*/
#define __FPU_USED 0U
But I couldn't find any line in my makefile flags that enables the FPU for compilation.
# fpu
# NONE for Cortex-M0/M0+/M3
FPU =
# float-abi
FLOAT-ABI =
# mcu
MCU = $(CPU) -mthumb $(FPU) $(FLOAT-ABI)
I also commented out $(FPU) and $(FLOAT-ABI), but the error still exists.
Although I can compile the project without any problems (gcc has no complaints), these errors are getting on my nerves.
Is there a way to fix these errors? Or are there any gcc-based LSP servers to use instead of clangd?
There's also ccls on neovim's LSP list but I was unable to install it.

Is there a way to fix these errors?
You can (see https://clangd.llvm.org/config#files):
create a clangd configuration file
specify the --sysroot option to point at the location of your toolchain (/usr/arm-none-eabi/ on my system)
and add other needed options (-isysroot, -nostdlib, etc.) if you use them.
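For illustration, a minimal .clangd configuration for this kind of setup could look like the following sketch; the target and sysroot values are assumptions, so adjust them to your toolchain:

```yaml
# .clangd placed in the project root (hypothetical values)
CompileFlags:
  Add:
    - --target=arm-none-eabi        # tell clangd about the cross target
    - --sysroot=/usr/arm-none-eabi  # where the toolchain's headers live
```

With this file present, clangd should resolve stdio.h from the cross toolchain instead of the host headers.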
I would advise moving to CMake and generating compile_commands.json anyway.
is there any gcc-based LSPs to use instead of clangd?
I am not aware of any.

Related

CMake with an embedded C compiler that doesn't support "-o"

I'm writing firmware using an older C compiler called HC12. Currently I use GNU Make for the build system. I'm hoping to start using CMake, but ran into an issue:
The compiler does not support some standard C compiler syntax, namely the "-o" flag.
I've made a custom toolchain file and added all my C flags, but CMake seems to implicitly add the "-o" to compile source files in the generated GNU Makefiles.
The HC12 compiler allows me to use -objn="name_of_file" to specify the output filename.
My question: Is there a way to get CMake to stop putting the implicit "-o" so that I can use this compiler?
I know there is a GCC port for this processor, but changing compilers at this point isn't an option.
You could take a file like Modules/Compiler/TI.cmake as a reference, create one for your HC12 compiler, and adapt the macros defined there:
# the input file options from TI, change to what your compiler needs
# They are used below in the command where ${lang} is either C, CXX or ASM
set(__COMPILER_HC12C_SOURCE_FLAG_C "--c_file")
set(__COMPILER_HC12C_SOURCE_FLAG_CXX "--cpp_file")
set(__COMPILER_HC12C_SOURCE_FLAG_ASM "--asm_file")
# add output file option
set(__COMPILER_HC12C_OUTPUT_FLAG_C "--objn")
macro(__compiler_HC12C lang)
# ...
set(CMAKE_${lang}_COMPILE_OBJECT "<CMAKE_${lang}_COMPILER> --compile_only ${__COMPILER_HC12C_SOURCE_FLAG_${lang}}=<SOURCE> <DEFINES> <INCLUDES> <FLAGS> ${__COMPILER_HC12C_OUTPUT_FLAG_${lang}}=<OBJECT>")
# ...
endmacro()
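To use such a compiler module, a toolchain file along these lines would go with it. This is only a sketch: the compiler path, the processor name, and the module location are assumptions for your setup.

```cmake
# Hypothetical toolchain-hc12.cmake
set(CMAKE_SYSTEM_NAME Generic)              # bare-metal target, no host OS conventions
set(CMAKE_SYSTEM_PROCESSOR hc12)
set(CMAKE_C_COMPILER /opt/hc12/bin/chc12)   # assumed compiler location

# Linking a full executable usually fails during compiler detection on
# embedded toolchains, so build a static library for the try-compile check
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)

# Let CMake find the custom Compiler/ module described above
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake/Modules")
```

You would then configure with cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-hc12.cmake.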
Hope this helps.

How to implement Erlang Driver As Default efficient Implementation

The Erlang Run-Time System (ERTS) has a few drivers written in C that are used to interact with the OS or to access low-level resources. To my knowledge, ERTS compiles these drivers at boot time to get them ready for loading from Erlang code. The driver inet_drv.c is one of these drivers; it handles networking tasks like creating sockets and listening for or accepting new incoming connections.
I wanted to test this driver manually to get a general view of the default behaviour of ERTS and to learn how to implement drivers efficiently in the future. I followed the Erlang Reference Manual's instructions for implementing drivers, which say: first, write and compile the driver with an OS C compiler; second, load the driver from Erlang code using the erl_ddll module; finally, link to the driver from a spawned Erlang process. So this is very simple and easy.
So I tried these steps with the driver inet_drv.c. I searched for it and tried to compile it with the Clang compiler, which is the default C compiler on FreeBSD:
cc inet_drv.c
That produced an error saying that the file erl_driver.h could not be found. This header is included by the driver's code (#include <erl_driver.h>), so I searched for it, added its directory path to the cc command using the -I option so the compiler would search that directory for the included file, and recompiled:
cc inet_drv.c -I/usr/ports....
After that, another header was reported missing, so I repeated the same thing five or six times until I had finally added all the needed paths for the included files. The result is this command:
cc inet_drv.c
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/emulator/beam
-I/usr/local/lib/erlang/usr/include
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/emulator/sys/unix
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/include/internal
-I/usr/ports/lang/erlang/work/otp-OTP-21.3.8.18/erts/emulator/sys/common
-I/usr/ports/lang/erlang/work/stage/usr/local/lib/erlang/erts-10.3.5.14/include/internal
I was surprised by the result: 13 errors and 7 warnings. The shell output with the error and warning descriptions is in the links below.
My question is: why do these errors occur? What is wrong with what I did?
Since this driver works perfectly in response to ERTS networking tasks, it must be compiled by ERTS without errors, and ERTS should use an OS C compiler (Clang by default) and add the included header files just as I did, so why did this not work when I tried it?
https://ibb.co/bbtFHZ7
https://ibb.co/sF8QsDx
https://ibb.co/Lh9cDCH
https://ibb.co/W5Gcj7g
First things first:
To my knowledge, ERTS compiles these drivers at boot time
No, ERTS doesn't compile the drivers. inet_drv.c is compiled as part of Erlang/OTP and linked into the beam.smp binary.
inet_drv is not a typical driver. Quoting the How to Implement a Driver section of the documentation:
A driver can be dynamically loaded, as a shared library (known as a DLL on Windows), or statically loaded, linked with the emulator when it is compiled and linked. Only dynamically loaded drivers are described here, statically linked drivers are beyond the scope of this section.
inet_drv is a statically loaded driver, and as such doesn't need to be loaded with erl_ddll.
On to the compilation errors. All the compiler parameters are added for you automatically when you run make, so if you need to call the compiler manually, it's best to check the command line that make generated and start from that. Let's look at the build log for the Debian Erlang package. Searching for inet_drv, we get this command line (line breaks added):
x86_64-linux-gnu-gcc -Werror=undef -Werror=implicit -Werror=return-type -fno-common \
-g -O2 -fno-strict-aliasing -I/<<PKGBUILDDIR>>/erts/x86_64-pc-linux-gnu -D_GNU_SOURCE \
-DHAVE_CONFIG_H -Wall -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes \
-Wdeclaration-after-statement -DUSE_THREADS -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS \
-D_POSIX_THREAD_SAFE_FUNCTIONS -DBEAMASM=1 -DLIBSCTP=libsctp.so.1 \
-Ix86_64-pc-linux-gnu/opt/jit -Ibeam -Isys/unix -Isys/common -Ix86_64-pc-linux-gnu \
-Ipcre -I../include -I../include/x86_64-pc-linux-gnu -I../include/internal \
-I../include/internal/x86_64-pc-linux-gnu -Ibeam/jit -Ibeam/jit/x86 -Idrivers/common \
-Idrivers/unix -c \
drivers/common/inet_drv.c -o obj/x86_64-pc-linux-gnu/opt/jit/inet_drv.o
Some of it will be different since you're building on FreeBSD, but the principle stands - most of the time you'll want to just run make instead of invoking the compiler directly, but if you need to invoke the compiler, it will be much easier to start with the command line that make generated for you.
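As a toy illustration of the "ask make for the command line" approach: the Makefile below is a stand-in for the real OTP build (with OTP itself you would run make in the erts directory and read off the inet_drv line from its output):

```shell
# Write a minimal Makefile that mimics how a build compiles a driver
# (printf keeps the literal tab that make requires before the recipe)
printf 'CFLAGS = -Ibeam -Isys/unix -DHAVE_CONFIG_H\ninet_drv.o: inet_drv.c\n\t$(CC) $(CFLAGS) -c inet_drv.c -o inet_drv.o\n' > Makefile
touch inet_drv.c
# -n ("dry run") prints the commands make would execute, without running them
make -n inet_drv.o
# prints: cc -Ibeam -Isys/unix -DHAVE_CONFIG_H -c inet_drv.c -o inet_drv.o
```

The printed command is exactly what you would copy, extend, and run by hand.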

Chromium version 53 for ARM gn build issue

I have a problem when building chromium for ARM platform. Here are some details about my host server:
Linux version 4.2.0-42-generic (buildd@lgw01-55) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) )
And I use Chromium version 53.0.2785.143. I tried to use gn to build the chromium, and here is my arguments in args.gn file:
target_cpu = "arm"
arm_tune = "generic-armv7-a"
arm_float_abi = "softfp"
Basically, I used these specific arguments because of my ARM platform, and the gn command ran without errors. However, when building the project with ninja, the following errors popped out:
ninja: Entering directory `out/Default_arm64'
[1/1] Regenerating ninja files
[296/46119] LINK ./minidump-2-core
FAILED: minidump-2-core
../../third_party/llvm-build/Release+Asserts/bin/clang++ -Wl,--fatal-warnings -fPIC -Wl,-z,noexecstack -Wl,-z,now -Wl,-z,relro -Wl,-z,defs -fuse-ld=gold -B../../third_party/binutils/Linux_x64/Release/bin -Wl,--icf=all -pthread --target=arm-linux-gnueabihf --sysroot=../../build/linux/debian_wheezy_arm-sysroot -L/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/lib/arm-linux-gnueabihf
-Wl,-rpath-link=/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/lib/arm-linux-gnueabihf
-L/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/usr/lib/arm-linux-gnueabihf
-Wl,-rpath-link=/home/miaozixiong/workspace/chromium/src/build/linux/debian_wheezy_arm-sysroot/usr/lib/arm-linux-gnueabihf
-Wl,-rpath-link=../Default_arm64 -Wl,--disable-new-dtags -o "./minidump-2-core" -Wl,--start-group #"./minidump-2-core.rsp"
-Wl,--end-group -ldl -lrt ld.gold: error: obj/breakpad/minidump-2-core/minidump-2-core.o uses VFP register
arguments, output does not
...
I am new to Chromium and have no clue what these errors mean. Does anybody know a workaround? Any help is appreciated.
Note: I need my arm_float_abi attribute to be "softfp" to match my ARM platform, so please note that I cannot change it to "hard". Also, when the float ABI is set to "hard", there are no build errors.
ld.gold: error: obj/breakpad/minidump-2-core/minidump-2-core.o uses VFP register arguments, output does not
This is a linking error indicating that minidump-2-core cannot be linked, due to a mismatch in the floating-point ABI: the object minidump-2-core.o is compiled for hard floats (the generated code passes arguments in ARM VFP registers, hence "uses VFP register arguments"), but the target executable is requested to use soft floats (in which floating-point support is emulated rather than using specialized FP hardware instructions).
According to this bug report, Chromium should build fine with soft float.
My best guess is, try replacing softfp by just soft: arm_float_abi = "soft".
According to the gcc documentation, softfp keeps the soft-float calling convention but still 'allows the generation of code using hardware floating-point instructions', which could explain the error seen here.
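In args.gn form, the suggestion amounts to (other values as in the question):

```gn
target_cpu = "arm"
arm_tune = "generic-armv7-a"
arm_float_abi = "soft"  # was "softfp"
```
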
If that won't work, you might want to check this tutorial on cross building Chromium for ARM:
https://unix.stackexchange.com/questions/176794/how-do-i-cross-compile-chromium-for-arm
I posted this question and finally solved it: I used my local toolchain on the ARM platform and compiled it successfully with g++.

GCC cross compiler (for ARM micro) complains about 'non supported floating point ABI' at a function where no FP instruction is present

I have followed the standard procedure to get my new Nucleo-F767ZI board from STMicroelectronics up and running. The procedure is as follows:
STEP 1
I downloaded the SW4STM32 IDE from AC6. This is an Eclipse-based IDE for programming the STM32 microcontroller series from STMicroelectronics.
STEP 2
I downloaded the latest CubeMX software from STMicroelectronics. CubeMX is a Java-based tool in which you can configure a few basic settings for your microcontroller: clock speed, real-time OS, peripherals, and so on. After that, CubeMX spits out a folder with a bunch of C source files in it. That's basically your project to start from.
STEP 3
I open the SW4STM32 IDE and import the project that CubeMX just generated. I do not change or add any code. I just click the build button, hoping that the project will compile to an executable .bin file (and perhaps also a .elf file). This is where things go wrong.
THE ERROR
The compiler finds (or thinks it finds) an error in the following function in the FreeRTOS file portmacro.h:
171 /* Generic helper function. */
172 __attribute__( ( always_inline ) ) static inline uint8_t ucPortCountLeadingZeros( uint32_t ulBitmap )
173 {
174 uint8_t ucReturn;
175
176 __asm volatile ( "clz %0, %1" : "=r" ( ucReturn ) : "r" ( ulBitmap ) );
177 return ucReturn;
178 }
The error message I get from the compiler is:
line 173 : sorry, unimplemented: Thumb-1 hard-float VFP ABI
Now there are several reasons why I don't understand this particular error message:
>> Issue 1
The first issue is about the location of the error. Line 173 is the line where the opening curly brace is located. Why on earth would an error message refer to that line?
>> Issue 2
Secondly, I do not understand why the error message mentions the hardware floating point unit on my microcontroller. I cannot see any floating point instruction in the ucPortCountLeadingZeros(..) function.
>> Issue 3
I have opened the GCC compiler settings in the Eclipse project. Just to take a look at the default settings. I do not change anything. Here are two screenshots:
The first screenshot shows that the following option is selected:
Instruction set : Thumb II
The second screenshot shows that the following options are given to GCC:
-mfloat-abi=hard # Inform GCC that this micro has a hardware floating point unit
-mfpu=fpv5-d16 # The hardware floating point unit is double precision
-mthumb # ARM Thumb instruction set
So which Thumb instruction set version is actually selected: Thumb-1 or Thumb-2?
Please help me to find out why this generated project from CubeMX doesn't compile. I am very thankful for any tips and hints.
EDIT :
The complete set of options passed on to the GCC compiler (as visible in the second screenshot) is the following:
-mthumb
-mfloat-abi=hard
-mfpu=fpv5-d16
-D__weak="__attribute__((weak))"
-D__packed="__attribute__((__packed__))"
-DUSE_HAL_DRIVER
-DSTM32F767xx
-I../Inc
-I../Drivers/STM32F7xx_HAL_Driver/Inc
-I../Drivers/STM32F7xx_HAL_Driver/Inc/Legacy
-I../Middlewares/Third_Party/FreeRTOS/Source/portable/GCC/ARM_CM7/r0p1
-I../Middlewares/Third_Party/FreeRTOS/Source/include
-I../Middlewares/Third_Party/FreeRTOS/Source/CMSIS_RTOS
-I../Drivers/CMSIS/Include
-I../Drivers/CMSIS/Device/ST/STM32F7xx/Include
-Os
-g3
-Wall
-fmessage-length=0
-ffunction-sections
-c
-fmessage-length=0
As noted in the comments below the question, the project generated by CubeMX did not specify the -mcpu option to the compiler. So one should manually add this option to the compiler, the linker and the assembler:
-mcpu=cortex-m7
If you do that, it builds without trouble.
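For reference, the resulting option set would then begin like this (everything else from the flag list in the edit above stays the same):

```text
-mcpu=cortex-m7      # newly added
-mthumb
-mfloat-abi=hard
-mfpu=fpv5-d16
```

With -mcpu=cortex-m7 present, GCC knows the core supports Thumb-2 and the hard-float VFP ABI, which resolves the "Thumb-1 hard-float VFP ABI" complaint.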
Adding the option to the compiler, linker and assembler is a bit tricky. I'll explain in detail how to do it.
1. Adding the option to the compiler
> Right-click on your project folder in the left window of Eclipse. Click on Properties in the pop-up window.
> In the properties window, select C/C++ Build > Settings on the left.
> Now you should see 3 options in the middle of the window: MCU GCC Compiler, MCU GCC Linker and MCU GCC Assembler. Click on the first one, and select Miscellaneous.
> You should see the other flags line. Add the following option to that line: -mcpu=cortex-m7.
2. Adding the option to the assembler
> In the same properties window, select MCU GCC Assembler > General.
> You should see the Assembler flags line. Add the option -mcpu=cortex-m7 to that line.
3. Adding the option to the linker
> Again in the properties window, select MCU GCC Linker.
> You should see the Command line pattern line, with the following text in it:
${COMMAND} ${FLAGS} ${OUTPUT_FLAG} ${OUTPUT_PREFIX}${OUTPUT} ${INPUTS}
> Add the option -mcpu=cortex-m7 to that line.
After doing all that, the compiler, the assembler and the linker all know that you want to build for the Cortex-M7 architecture. It still bothers me a bit that CubeMX didn't put that by default in the configuration file of the generated project. But at least we know the workaround now.
Many thanks to @Notlikethat, @Jean-Louis Bonnaffe and @rjp for bringing me to this solution with their useful comments :-)
I already faced a similar issue. The IDE has to be updated to support the new board/chip. IAR Workbench V7.50 does not support the F767ZI, but V7.60 does.
SW4STM32 Update : "Help" >> "Check for updates..." then restart Eclipse;

How is the CPU symbol resolved in this C sample code?

I ran into the following code in location.c for the apache jsvc java daemon.
char *location_jvm_cfg[] = {
"$JAVA_HOME/jre/lib/jvm.cfg", /* JDK */
"$JAVA_HOME/lib/jvm.cfg", /* JRE */
"$JAVA_HOME/jre/lib/" CPU "/jvm.cfg", /* JDK */
"$JAVA_HOME/lib/" CPU "/jvm.cfg", /* JRE */
NULL,
};
I grepped through the source code to find where the CPU macro used in "$JAVA_HOME/jre/lib/" CPU "/jvm.cfg" is defined, but could not find such a macro anywhere.
I am not really sure whether CPU is a C macro or something else that is configured by the autoconf tools.
How is the above CPU value substituted with the real CPU value?
The problem I am facing is that when I build jsvc on Solaris with CFLAGS and LDFLAGS set to -m64 the generated 64 bit solaris binary tries to load the jvm .so files from $JAVA_HOME/jre/lib/sparc/jvm.cfg instead of $JAVA_HOME/jre/lib/sparcv9/jvm.cfg
UPDATE
Running ./configure that ships with JSVC with the following command line does the right thing
configure --with-java=/path/to/jdk1.7.0_45 --host=sparcv9-sun-solaris2.10 CFLAGS="-m64" LDFLAGS="-m64"
the extra --host=sparcv9-sun-solaris2.10 causes the generated gcc command to be
gcc -m64 -DOS_SOLARIS -DDSO_DLFCN -DCPU=\"sparcv9\" -Wall -Wstrict-prototypes
Instead of
gcc -m64 -DOS_SOLARIS -DDSO_DLFCN -DCPU=\"sparc\" -Wall -Wstrict-prototypes
which is what was causing the generated 64 bit jsvc binary to try to link against the 32 bit so files instead of the 64 bit so files.
It absolutely must be a preprocessor define. Nothing else would work in that code.
To make configure use a different CPU, the configure script may accept a configuration triplet, which looks like 'i686-unknown-linux-gnu'.
Apparently config.guess does the work of figuring this out. If you specify one of these triplets on the configure command line, configure may think it is building with a cross-compiler, but it should work.
The generated configure script adds -DCPU to CFLAGS, based on the value of configure --host, which defaults to configure --build, which defaults to a guessed value.
