I'm getting the following errors while trying to compile an ARM embedded C program (I'm using YAGARTO as my cross-compiler). I'm trying to work out what this error means and what the steps are to correct it. From the research I've done so far, the issue seems to be that wfi and wfe are not recognized as assembly instructions. How can I fix this?
\cc9e5oJe.s: Assembler messages:
\cc9e5oJe.s:404: Error: selected processor does not support ARM mode `wfi'
\cc9e5oJe.s:414: Error: selected processor does not support ARM mode `wfe'
\cc9e5oJe.s:477: Error: selected processor does not support ARM mode `wfi'
make: *** [STM32F10x_StdPeriph_Driver/src/stm32f10x_pwr.o] Error 1
You might be missing some vital compiler options for your STM32F10x, which is a Cortex-M3:
-mcpu=cortex-m3 -mthumb -mno-thumb-interwork -mfpu=vfp -msoft-float -mfix-cortex-m3-ldrd
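For reference, a full compile line with these options might look like the sketch below. The arm-none-eabi- toolchain prefix and the file names are assumptions, not taken from the question, so adjust them for your YAGARTO install; the command is only echoed here, not executed:

```shell
# Collect the Cortex-M3 options once and reuse them for every source file.
# Toolchain prefix and file names below are assumptions.
CPUFLAGS="-mcpu=cortex-m3 -mthumb -mno-thumb-interwork -mfpu=vfp -msoft-float -mfix-cortex-m3-ldrd"
echo "arm-none-eabi-gcc $CPUFLAGS -c stm32f10x_pwr.c -o stm32f10x_pwr.o"
```

The important flags are -mcpu=cortex-m3 and -mthumb: without them the assembler targets a default, older ARM core that does not support wfi/wfe, which is exactly what the "selected processor does not support" error is saying. The Cortex-M3 only executes Thumb-2 code, hence -mthumb.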
I'm trying to build a program for blender deformation. My laptop is amd64 and can also support i386:
faye@fish:/usr/bin$ dpkg --print-foreign-architectures
i386
faye@fish:/usr/bin$ dpkg --print-architecture
amd64
I don't have much experience with makefile scripts. Based on what I found by searching Google, I added two lines to makefile.mk:
# -I option to help the compiler find the headers
CFLAGS += $(addprefix -I, $(INCLUDE_PATH))
CC=gcc -m32
Here is the issue:
When I run any template OpenGL code with:
gcc test.c -lm -lpthread -lglut -lGL -lGLU -o test
It seems the code and the libs work correctly.
However, if I build the same code through makefile.mk (with CC=gcc), I get many errors of the following form:
/usr/bin/ld: i386 architecture of input file `../external/lib/libxxxxxxxx.a(xxxxxxxx.o)' is incompatible with i386:x86-64 output
If I use CC = gcc -m32 instead, the errors change to:
/usr/bin/ld: cannot find -lglut
/usr/bin/ld: cannot find -lGL
/usr/bin/ld: cannot find -lGLU
I guess something is going wrong with running and linking 32-bit applications and libraries on a 64-bit OS?
-m32, when used with an x86-64 compiler, tells GCC to emit 32-bit (i386 ABI) code; the resulting objects, and every library you link against, must then also be 32-bit.
What you have there are binaries that were compiled for native i386, and you are trying to combine them with a program compiled for 64-bit x86-64. Those two don't fit together. The big question, of course, is why you want to use those i386 binaries at all. There are some good reasons for 32-bit builds (pointers are half the size, which can massively reduce memory bandwidth), but in general you want 64-bit binaries: many problems of 32-bit memory management vanish by virtue of having vast amounts of address space.
It looks like external/lib is full of 32-bit precompiled archives. You could track each one down and recompile it (or use shared libraries), but that would be a massive PITA.
Just because your OS supports i386 doesn't mean you have the libraries installed. In the case of this program, it's enough to install the libc6-dev-i386 and freeglut3-dev:i386 packages.
PS: There is no need to edit anything; just run make CC='gcc -m32'.
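Put together, the whole fix is one package install plus the make override. A sketch assuming a Debian/Ubuntu host; the commands are stored and printed rather than executed here, since the install step needs root:

```shell
# The 32-bit dev packages named above, plus the no-edit make override.
install_cmd="sudo apt-get install libc6-dev-i386 freeglut3-dev:i386"
build_cmd="make CC='gcc -m32'"
printf '%s\n%s\n' "$install_cmd" "$build_cmd"
```

Variables given on the make command line override assignments inside the makefile, which is why makefile.mk itself never needs to change.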
I just cross-compiled the clang compiler for ARM on my x86 machine, with instructions from here. I am trying to compile C code containing NEON intrinsics with clang. It gives an error which I do not encounter with arm-linux-gnueabi-gcc:
$ clang -march=armv7-a -mfpu=neon -mfloat-abi=soft -integrated-as test.c -o test
In file included from test.c:2:
/home/junaid/llvm/build/Release+Asserts/bin/../lib/clang/3.2/include/arm_neon.h:28:2: error:
"NEON support not enabled"
Line 2 of test.c is #include <arm_neon.h>.
It will be the -mfloat-abi=soft. I'm surprised that works for you with an arm-none-linux-gnueabi toolchain.
For NEON support you will want to target either the softfp or the hard-float ABI, with -mfloat-abi=softfp or -mfloat-abi=hard respectively.
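Concretely, only the float ABI changes relative to the failing command in the question. The command is echoed here rather than run, since it needs the cross-built clang from the question:

```shell
# Same invocation as before, with -mfloat-abi=soft replaced by softfp.
NEONFLAGS="-march=armv7-a -mfpu=neon -mfloat-abi=softfp"
echo "clang $NEONFLAGS -integrated-as test.c -o test"
```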
I have a ROS node that contains code generated by MATLAB Coder. This code has been generated to make use of the NEON instruction set on ARM Cortex-A CPUs. I want to compile this code on a Hardkernel Odroid XU4 (which runs on a Samsung Exynos 5422 octa-core CPU with Cortex-A15 cores at 2 GHz and Cortex-A7 cores). However, I have not been successful in compiling and linking my code.
I have added the following compiler flags to the package's CMakeLists.txt:
-mfloat-abi=softfp -mfpu=neon -O2.
Yet, during compilation I get the following error message:
/usr/lib/gcc/arm-linux-gnueabihf/4.8/include/arm_neon.h:32:2: error:
#error You must enable NEON instructions (e.g. -mfloat-abi=softfp -mfpu=neon) to use arm_neon.h
This is followed by many more errors about unknown types:
/home/odroid/catkin_ws/src/vio_ros/src/codegen/mw_neon.c:12:2: error: unknown type name ‘float32x4_t’
/home/odroid/catkin_ws/src/vio_ros/src/codegen/mw_neon.c:36:2: error: unknown type name ‘int32x4_t’
...
And many more. All of these types seem to be defined in arm_neon.h
What do I need to do to be able to compile my code?
Thanks for your help
I have figured out what the problem was. Since some of the code compiled in this C++ project was C code, I also had to set the compiler flags for C.
Including the following in the CMakeLists.txt makes the code compile:
set(NEON_FLAGS "-DENABLE_NEON -mfloat-abi=hard -mfpu=neon-vfpv4 -mcpu=cortex-a15 -Ofast")
set(CMAKE_CXX_FLAGS "-std=c++0x ${CMAKE_CXX_FLAGS} -Wno-format-security ${NEON_FLAGS}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${NEON_FLAGS}")
I need to suppress "-arch x86_64 -arch i386" flags Waf is passing to GCC.
I am building an SDL/OpenGL application. If I link against the 32-bit SDL runtime, I get the error:
Undefined symbols for architecture i386:
"_SDL_Quit", referenced from:
__del_video in SDL_functions.c.2.o
__init_video in SDL_functions.c.2.o
If I link against the 64-bit SDL runtime, I get the error "Undefined symbols for architecture x86_64".
The compiler is apparently using flags
-arch x86_64 -arch i386
I understand that this causes GCC on OS X to try to compile for both architectures. I want to compile for either 64-bit only or 32-bit only. How do I suppress the flags for one architecture?
I found out in my case that the double arch flags were originating here, specifically from distutils.sysconfig.get_config_var('LDFLAGS'). This returns the LDFLAGS that Python thinks you should link Python modules with. In my case, file $(which python) is a "Mach-O universal binary with 2 architectures", so Python thinks you should link with -arch x86_64 -arch i386 -Wl,F.
My problem was that I was building a Python native module that needed to link against Python and another library which was not built with both arches. When building my module with both arches, linking failed with "symbols not found", because both arches were not available in the third-party library.
Since waf unfortunately doesn't allow you to override its computed flags with your own flags, as Automake does, I could only fix this by messing directly with my ctx() object in my wscript:
for var in ['CFLAGS_PYEMBED', 'CFLAGS_PYEXT', 'CXXFLAGS_PYEMBED',
            'CXXFLAGS_PYEXT', 'LINKFLAGS_PYEMBED', 'LINKFLAGS_PYEXT']:
    newvar = []
    for ix, arg in enumerate(ctx.env[var]):
        if '-arch' not in (arg, ctx.env[var][ix - 1]):
            newvar.append(arg)
    ctx.env[var] = newvar
(This removes all -arch flags and their arguments from the relevant variables. Since I was also passing my own -arch flag in my CFLAGS, it now does not get overridden.)
I don't know of a way to issue a command/flag to suppress other flags. However, to compile for only 64 or 32 bits, you can use -m64 or -m32, respectively. Since you're compiling for both architectures, -m32 might be your only option because -m64 won't work for i386.
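A sketch of the single-architecture build, assuming the 32-bit SDL runtime is the one installed; the source and library names are placeholders, and the command is echoed rather than run:

```shell
# Pick one architecture explicitly; -m32 matches a 32-bit SDL runtime.
ARCHFLAG=-m32   # or -m64 for a 64-bit-only build
echo "gcc $ARCHFLAG main.c -lSDL -o app"
```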
This question suggests that the best way to triangulate a polygon with holes is to use Shewchuk's Triangle library, but I'm having trouble getting it to compile on Mac OS X. It is a very popular program that has been around for a while, and therefore should be relatively easy to compile; I'm just inexperienced with C.
This is the error I'm getting:
$ make
cc -O -DLINUX -I/usr/X11R6/include -L/usr/X11R6/lib -o ./triangle ./triangle.c -lm
Undefined symbols:
"__FPU_SETCW", referenced from:
_exactinit in ccrEJvxc.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [triangle] Error 1
I've tried commenting out certain flags (e.g. #define LINUX), but I get a different set of errors for each combination.
Could someone walk me through step-by-step how to compile (and possibly call) this program on a mac?
I managed to compile it on OS X by removing the -DLINUX flag from the definition of CSWITCHES in the makefile, since fpu_control.h seems to be Linux-specific.
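The resulting compile line is the one from the failing make run minus -DLINUX; a sketch with the paths from the makefile shown above (echoed rather than executed, since it needs the Triangle sources):

```shell
# CSWITCHES without -DLINUX: triangle.c's _FPU_SETCW code path is compiled
# out, so fpu_control.h is no longer needed.
CSWITCHES="-O -I/usr/X11R6/include -L/usr/X11R6/lib"
echo "cc $CSWITCHES -o triangle triangle.c -lm"
```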
I don't believe that's a standard function, and in any case the Mac (whose adoption of the Intel architecture post-dates SSE) never had a reason to support 387-style FPU ops.
So your code is Linux-specific. You can either remove the Linux-specific code or implement do-nothing versions of its entry points.
I wouldn't do this myself, but you might get away with:
$ cat > /usr/include/fpu_control.h
#define _FPU_SETCW(cw) // nothing
#define _FPU_GETCW(cw) // nothing
Don't worry about the null implementations. You don't need to tweak the FPU exception and rounding modes.