I'm a beginner in programming embedded devices.
While cross-compiling a cryptography algorithm (using OpenSSL), it generates an error as shown below. The program itself doesn't have a problem, since it runs well on the host system (Ubuntu 14).
Did anyone come across this problem? I tried some of the solutions from already posted questions on cross compilation, but they didn't solve my problem.
Thanks.
For the headers issue:
Locate the headers and include their directory using the -I switch when compiling.
For the linking issue:
$ locate libcrypto.so
This will show you the directory where libcrypto resides. Let's say the path is target_usr/lib/libcrypto.so.
Now use the following command to ensure correct linking:
$ arm-linux-gnueabi-gcc hashSHA.c -Ltarget_usr/lib -lcrypto
Also make sure to add the appropriate include flag, and prefer to use some warning and optimization flags (-W -Wall -O2, for example).
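Putting both together, a complete invocation might look like the following sketch (the include directory target_usr/include is an assumption; point -I at wherever your cross-built OpenSSL headers actually live):
$ arm-linux-gnueabi-gcc -W -Wall -O2 -Itarget_usr/include hashSHA.c -Ltarget_usr/lib -lcrypto -o hashSHA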
I'm exploring using clang as the compiler for ARM embedded development. As clang doesn't have an equivalent of .spec files, I'm having trouble convincing clang to link against libc_nano. How could I either tell clang not to link against any libraries by default, so that I can specify the correct library myself, or rewrite the -lc option to -lc_nano?
The command I'm trying to run is:
clang -target arm-none-eabi -mcpu=cortex-a5 -mfpu=neon-vfpv4 -mfloat-abi=hard -march=armv7-a main.c
Currently I get this error message:
/usr/lib/llvm-6.0/bin/ld.lld: error: unable to find library -lc
EDIT: I've noticed that clang has a -fno-autolink flag which, according to the help text, disables "generation of linker directives for automatic library linking". However, it doesn't seem to do anything.
EDIT2: I'm aware I could abuse symlinks to achieve this. I would like to avoid symlinks in this case as it can make the build system brittle.
Upon further google-fu and grep-fu, it turns out the answer was staring at me the entire time. Clang has a -nodefaultlibs flag that does the trick and prevents the default linker directives. Strangely, though, it isn't documented in --help.
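A minimal sketch of the resulting invocation (the -L path to the newlib-nano libraries is an assumption; note that -nodefaultlibs also drops the compiler runtime, so you may need to add it back explicitly, for example with -lgcc or clang's builtins library):
clang -target arm-none-eabi -mcpu=cortex-a5 -mfpu=neon-vfpv4 -mfloat-abi=hard -march=armv7-a -nodefaultlibs -L/path/to/nano/libs -lc_nano main.c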
Alternatively, you can build a fake libc.a with no functions inside and use it together with libc_nano.
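A sketch of that approach (the arm-none-eabi-ar prefix is an assumption; any ar targeting your platform works, and an empty archive is enough to satisfy -lc):
$ arm-none-eabi-ar rcs libc.a
Placing this empty libc.a in a directory passed via -L lets the default -lc resolve harmlessly while the real symbols come from -lc_nano.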
I have a video module, and I am compiling it with the arm-eabi-gcc cross compiler. I used the following command to compile:
$ arm-eabi-gcc -O2 -DMODULE -D__KERNEL__ -W -Wall -isystem /lib/modules/$(uname -r)/build/include panel-xxxxxxx.c
I got the following error:
In file included from /lib/modules/3.13.0-32-generic/build/include/linux/types.h:5:0,
from /lib/modules/3.13.0-32-generic/build/include/linux/list.h:4,
from /lib/modules/3.13.0-32-generic/build/include/linux/module.h:9,
from panel-gis317.c:17:
/lib/modules/3.13.0-32-generic/build/include/uapi/linux/types.h:4:23: fatal error: asm/types.h: No such file or directory
compilation terminated.
After searching on Google, I found that I need to specify the hardware architecture, but I could not find the right way to use arch with gcc on the command line.
Can anyone please suggest what flags I can use to cross-compile a given .c file (module) on the command line, without using a Makefile?
Note: I am doing this to insmod the .ko module on the hardware for test purposes.
BTW, with the help of the .o file, can we know which cross-compiler was used to compile the .c file?
With the Linux kernel, architecture-specific includes are in arch/<arch>/include. Just setting that will probably not ensure correct compilation, though...
But try adding /lib/modules/$(uname -r)/build/arch/arm/include to your include path.
Here's a simple guide for building your own kernel and modules for a Pi2 on your PC:
http://lostindetails.com/blog/post/Compiling-a-kernel-module-for-the-raspberry-pi-2
They use the makefile approach.
The following link will help you:
Cross-compiling of kernel module for ARM architecture
This also has an example of the Makefile approach.
As a side note, if you want to get an idea of the importance of "asm/types.h" in Linux, you can have a look here to see which functions use it: http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html
To know more about your output (.o) file, use the file command:
$ file outputfilename.o
If you are cross-compiling the file correctly, and you are using a 64-bit system as host while your target is 32-bit, you can verify it here: the compiled output will be 32-bit when everything is working properly.
There are a couple of things to change in how you build an out-of-tree kernel module.
First, use the kernel Makefile rather than invoking the compiler directly, in order to get all the necessary CFLAGS.
Second, specify CROSS_COMPILE=arm-eabi- because other binutils are needed in the build.
Run the following command from the directory containing your module source code and Makefile:
$ make CROSS_COMPILE=arm-eabi- -C <path_to_kernel_src> M=$PWD
The Makefile for a module consisting of a single source file would contain the following line:
obj-m := panel-xxxxxxx.o
The kernel kbuild Makefile rules would take care of generating a modinfo source file, and compiling and linking those into a .ko module binary.
See Documentation/kbuild/modules.txt for more details.
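Putting it together, a minimal Makefile for such an out-of-tree module might look like the following sketch (the kernel source path is a placeholder, and when cross-compiling you will typically also want ARCH=arm alongside CROSS_COMPILE):

obj-m := panel-xxxxxxx.o

all:
	$(MAKE) ARCH=arm CROSS_COMPILE=arm-eabi- -C <path_to_kernel_src> M=$(PWD) modules

clean:
	$(MAKE) -C <path_to_kernel_src> M=$(PWD) clean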
I am now learning the C language, and my school puts all assignments on myth, so every time we have to log in via ssh and execute commands remotely.
Thus I want to download the files and execute them on my own MacBook. However, when I use the make command to compile the files, I get errors and warnings such as:
gcc -g -O0 -std=gnu99 -Wall $warnflags -m32 -c -I. vectest.c -o vectest.o
warning: unknown warning option '-Wlogical-op'; did you mean '-Wlong-long'?
vectest.c:10:10: fatal error: 'error.h' file not found
#include <error.h>
I googled these problems but could not find a satisfactory answer. Can anyone help me solve this? Or do I have to use a Linux machine instead?
Indeed; compilers for various platforms (even if it's the "same" compiler, such as GCC) may have different flags and behaviors. You may be able to get it to work - you could remove the -Wlogical-op flag from $warnflags in your Makefile, but if the error.h file is a system-supplied header file, you're probably in trouble. Therefore, I suggest that you download e.g. VirtualBox and run Linux on it.
See error(3) for what this header provides. It's not specific to Linux but to the GNU C library. What you COULD do is provide your own minimal implementation of these functions and write your own error.h.
You could even #define them to do nothing at all, but then you will probably lose some error reporting in the existing code. Maybe you could try to find a teacher who understands the problem and discuss the issue... it's probably better to learn standard C without using any platform-specific extensions.
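A minimal sketch of such a drop-in error.h (this covers only the error() function from error(3); glibc's version also prefixes the program name, flushes stdout, and provides error_at_line(), all omitted here):

/* error.h - minimal stand-in for the GNU error(3) interface */
#ifndef ERROR_H
#define ERROR_H

#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the formatted message to stderr, append strerror(errnum)
 * if errnum is nonzero, and exit if status is nonzero. */
static void error(int status, int errnum, const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);

    if (errnum)
        fprintf(stderr, ": %s", strerror(errnum));
    fputc('\n', stderr);

    if (status)
        exit(status);
}

#endif /* ERROR_H */

Dropping this file next to the sources is enough for the existing -I. flag in the Makefile to pick it up.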
I wanted to configure an autotooled project to invoke a non-standard linker (the gold linker), using the stock autotools of Linux Mint 16/Ubuntu 13.10.
I believed I would achieve this by:
libtoolize-ing the project
Running ./configure LD=/path/to/my/linker ... etc.
However this has been ineffective. libtoolize has been successful. After a standard ./configure; make I now see that libtool is doing the linking:
/bin/bash ./libtool --tag=CXX --mode=link g++ -g -O2 -o helloworld helloworld.o
But passing LD=/path/to/my/linker to configure makes no difference. Experimentally, I even ran:
./configure LD=/does/not/exist
expecting to provoke an error, but I didn't. The output contains:
checking if the linker (/does/not/exist -m elf_x86_64) is GNU ld... no
checking whether the g++ linker (/does/not/exist -m elf_x86_64) supports shared libraries... yes
And thereafter a make continues to link, successfully, invoking g++ exactly as before.
What is the right way to configure a non-standard linker?
But passing LD=/path/to/my/linker to configure makes no difference
This is because LD is almost never used, and should almost never be used, to link any user-space program. Correct links are performed by using the appropriate compiler driver (gcc, g++, etc.) instead.
What is the right way to configure a non-standard linker?
If you have /some/path/ld and you want gcc to use that ld, pass -B/some/path flag to gcc.
It then follows that you likely want:
./configure CC='gcc -B/some/path' CXX='g++ -B/some/path' ...
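To verify which ld the driver will actually pick up, you can ask GCC directly (-print-prog-name reports the program the driver would invoke):
$ gcc -B/some/path -print-prog-name=ld
/some/path/ld
For the gold linker specifically, reasonably recent GCC versions also accept -fuse-ld=gold, which avoids the -B indirection entirely.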
I landed on this via a Google search, though my scenario is a bit different from yours; there was no libtool involved. An old open source program's Makefile was hard-coding ld to create an object file with a symbol from binary data.
This is what I ended up doing to work around the lack of $(LD) being recognized when passed to configure:
https://github.com/turboencabulator/tuxnes/commit/bab2747b175ee7f2fc3d9afb28d69d82db054b5e
Basically I added to configure.ac:
AC_CHECK_TOOL([LD], [ld])
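When cross-compiling, AC_CHECK_TOOL first searches for the host-prefixed tool (for example arm-linux-gnueabi-ld when configured with --host=arm-linux-gnueabi) and sets LD accordingly, so the Makefile can use $(LD) instead of a hard-coded ld. A sketch of the kind of rule that benefits (the file names here are hypothetical; ld -r -b binary is the classic way to wrap binary data in an object file with a symbol):
data.o: data.bin
	$(LD) -r -b binary -o $@ $<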
Leaving this answer here in case someone else lands here via a Google search.
I want to link an existing shared library (FlashRuntimeExtensions.so) to my C code while compiling my own shared library. But whatever I try, I always get the same error: that the file is in the wrong format. Does anybody have an idea how to solve this?
Here is my compile command:
$ g++ -Wall ane.c FlashRuntimeExtensions.so -o aneObject
FlashRuntimeExtensions.so: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
Your command line tries to generate x86 code and link it to ARM code using the native g++ available in your distribution.
This will not work. Use the Android NDK available here: http://developer.android.com/tools/sdk/ndk/index.html
The NDK includes a set of cross-toolchains (compilers, linkers, etc..) that can generate native ARM binaries on Linux, OS X, and Windows (with Cygwin) platforms.
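Once an NDK toolchain is on your PATH, the invocation would look something like this sketch (the arm-linux-androideabi-gcc prefix and the output name are assumptions, and newer NDKs ship clang-based wrappers instead; -shared and -fPIC are needed because you are building a shared library):
$ arm-linux-androideabi-gcc -Wall -shared -fPIC ane.c FlashRuntimeExtensions.so -o libane.so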
In general, a .so is linked using -l.
For example, for pthread we use -lpthread:
$ gcc sample.c -o myoutput -lpthread
But as per chill's statement, what you are doing in your command is also correct.
I suggest you to refer the following link.
C++ Linker Error SDL Image - could not read symbols
It should be an architecture mismatch. I faced this problem once and solved it by building the libs for the same target platform, which is the obvious fix. If you are using Linux or a Unix-like OS, you can check the architecture with the file command; if you are using Windows, you can use Dependency Walker. You need to make sure that all the libs match the target architecture.
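For example, running file on both the library and one of your own object files makes a mismatch visible immediately (typical output; details vary):
$ file FlashRuntimeExtensions.so ane.o
FlashRuntimeExtensions.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked
ane.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped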