My app uses the libraries Boost and FFmpeg (avcodec, avformat, ...). I want to package my app so it runs on Ubuntu 14.04 or higher. I don't want to use a static build.
I have checked some computers with Ubuntu installed. Here are the results:
Ubuntu 14.04:
ls -l libavcodec*
libavcodec.a
libavcodec.so -> libavcodec.so.54.35.1
libavcodec.so.54.35.1
ls -l libboost_thread*
libboost_thread.a
libboost_thread.so -> libboost_thread.so.1.54.0
libboost_thread.so.1.54.0
Ubuntu 16.04:
ls -l libavcodec*
libavcodec.a -> libavcodec-ffmpeg.a
libavcodec-ffmpeg.a
libavcodec-ffmpeg.so -> libavcodec-ffmpeg.so.56.60.100
libavcodec-ffmpeg.so.56 -> libavcodec-ffmpeg.so.56.60.100
libavcodec-ffmpeg.so.56.60.100
libavcodec.so -> libavcodec-ffmpeg.so
ls -l libboost_thread*
libboost_thread.a
libboost_thread.so -> libboost_thread.so.1.58.0
libboost_thread.so.1.58.0
My app
objdump -p ControlStation | grep NEEDED
NEEDED libavcodec-ffmpeg.so.56
NEEDED libavformat-ffmpeg.so.56
NEEDED libavutil-ffmpeg.so.54
NEEDED libswscale-ffmpeg.so.3
NEEDED libboost_system.so.1.58.0
NEEDED libboost_thread.so.1.58.0
....
I compile on Ubuntu 16.04, so my app links against libavcodec-ffmpeg.so.56 and libboost_thread.so.1.58.0. Therefore, it cannot run on the other computers.
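On a computer that only has the stock Ubuntu 14.04 packages, the mismatch shows up as unresolved dependencies; a quick way to confirm it on the target machine is a check like:
# any dependency printed as "not found" cannot be satisfied on that system
ldd ./ControlStation | grep "not found"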
Normally, a shared library is laid out as:
libName.so ---> libName.so.MAJOR
libName.so.MAJOR --> real library file
But FFmpeg's libraries are named differently across versions, and the Boost packages don't provide a libboost_thread.so.1 symlink.
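The name recorded in NEEDED is simply the SONAME of the library that was present at link time, which can be checked with objdump as well (a sketch; the path is the usual Ubuntu 16.04 multiarch location and may differ):
objdump -p /usr/lib/x86_64-linux-gnu/libavcodec-ffmpeg.so.56.60.100 | grep SONAME
# expected to print the name seen in NEEDED above: SONAME  libavcodec-ffmpeg.so.56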
Could anyone help me with this problem? Thank you!
Related
Hello, I am trying to create a custom application that reads and writes files. I can do this with the stdio.h library in C code, and I can build an executable. But when I flash the image built with my application onto my microcontroller, it says there are no input files (as if Yocto doesn't do file handling).
Do I need to do any kernel configuration? (If yes, how should I do it?)
inherit autotools pkgconfig
inherit meson
PRIORITY = "optional"
SECTION = "examples"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "file://appxyz/"
S = "${WORKDIR}"
B = "${WORKDIR}/build_dir"
MESON_SOURCEPATH = "${S}/appxyz"
do_install() {
    install -d ${D}${bindir}
    install -d ${D}${libdir}
    install -m 0755 ${B}/appxyz ${D}${bindir}
}
FILES_${PN} += "${bindir}/ ${bindir}/appxyz"
This is the recipe for my application; appxyz is the executable. How can I change it so that the application can read its files?
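For reference, if the issue is only that the input file is not present in the image, no kernel configuration is needed for plain stdio file I/O; the recipe just has to install the data file as well. A minimal sketch (the file name app_input.txt and the install path are assumptions for illustration):
SRC_URI += "file://app_input.txt"

do_install_append() {
    # install the data file where the application expects it at runtime (path is an example)
    install -d ${D}${datadir}/appxyz
    install -m 0644 ${WORKDIR}/app_input.txt ${D}${datadir}/appxyz/
}

FILES_${PN} += "${datadir}/appxyz"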
I am trying to install PnetCDF from source. I used the 1.12.2 version for installation. MPICH, Zlib, Szip, HDF5 and NetCDF-4 have been installed from source successfully (HDF5 and NetCDF-4 have been built with parallel I/O support).
MPICH is installed as follows (v3.4.2):
./configure --prefix=${DIR}/mpich --with-device=ch4:ofi CFLAGS="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64" --enable-fortran
make
make check
make install
export PATH=${DIR}/mpich/bin:${PATH}
Here DIR is the root directory that all the libraries are installed under (each in its own subdirectory).
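(For reference, judging from the nc-config output further down, the environment is presumably set up along these lines; the exact path is site-specific.)
export DIR=/media/case/work/hwrf/libraries
export NETCDF=${DIR}/netcdf   # used by the NetCDF-related exports below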
Zlib (v1.2.11):
./configure --prefix=${DIR}/grib2
make
make check
make install
export CPPFLAGS="-I${DIR}/grib2/include"
export LDFLAGS="-L${DIR}/grib2/lib"
export LD_LIBRARY_PATH="${DIR}/grib2/lib:$LD_LIBRARY_PATH"
Szip (v2.1.1):
./configure --prefix=${DIR}/szip
CC=mpicc
make
make check
make install
export LD_LIBRARY_PATH="${DIR}/szip/lib:$LD_LIBRARY_PATH"
export CPPFLAGS="-I${DIR}/szip/include ${CPPFLAGS}"
export LDFLAGS="-L${DIR}/szip/lib ${LDFLAGS}"
HDF5 (v1.8.22):
export HDF5=${DIR}/hdf5
./configure --prefix=${HDF5} --with-szlib=${DIR}/szip --with-zlib=${DIR}/grib2 --enable-parallel CC=mpicc --enable-fortran FC=mpifort
make
make check
make install
export LD_LIBRARY_PATH="${HDF5}/lib:$LD_LIBRARY_PATH"
export CPPFLAGS="-I${HDF5}/include ${CPPFLAGS}"
export LDFLAGS="-L${HDF5}/lib ${LDFLAGS}"
export PATH="${HDF5}/bin:${PATH}"
NetCDF4-C (v4.8.0):
./configure --prefix=${DIR}/netcdf --disable-dap --enable-parallel-tests --enable-large-file-tests --disable-extreme-numbers CC=mpicc
make
make check
make install
export PATH="${NETCDF}/bin:${PATH}"
export LD_LIBRARY_PATH="${NETCDF}/lib:${LD_LIBRARY_PATH}"
export CPPFLAGS="-I${NETCDF}/include ${CPPFLAGS}"
export LDFLAGS="-L${NETCDF}/lib ${LDFLAGS}"
export LIBS="$(nc-config --libs) ${LIBS}"
NetCDF4-Fortran (v4.5.3):
./configure --prefix=${NETCDF} --enable-large-file-tests --enable-parallel-tests CC=${DIR}/mpich/bin/mpicc FC=mpifort F77=mpif77
make
make check
make install
The HDF5 configuration shown by h5pcc -showconfig is given below (only the part I think is relevant is included):
Features:
---------
Parallel HDF5: yes
High-level library: yes
Build HDF5 Tests: yes
Build HDF5 Tools: yes
Threadsafety: no
Default API mapping: v18
With deprecated public symbols: yes
I/O filters (external): deflate(zlib),szip(encoder)
MPE:
Direct VFD: no
(Read-Only) S3 VFD: no
(Read-Only) HDFS VFD: no
dmalloc: no
Clear file buffers before write: yes
Using memory checker: no
Function stack tracing: no
Strict file format checks: no
Optimization instrumentation: no
And the output of nc-config --all:
--cc -> /media/case/work/hwrf/libraries/mpich/bin/mpicc
--cflags -> -I/media/case/work/hwrf/libraries/netcdf/include -I/media/case/work/hwrf/libraries/hdf5/include -I/media/case/work/hwrf/libraries/szip/include -I/media/case/work/hwrf/libraries/grib2/include
--libs -> -L/media/case/work/hwrf/libraries/netcdf/lib -lnetcdf
--static -> -lhdf5_hl -lhdf5 -lm -ldl -lz -lcurl
--has-c++ -> no
--cxx ->
--has-c++4 -> no
--cxx4 ->
--has-fortran -> yes
--fc -> mpifort
--fflags -> -I/media/case/work/hwrf/libraries/netcdf/include -I/media/case/work/hwrf/libraries/netcdf/include
--flibs -> -L/media/case/work/hwrf/libraries/netcdf/lib -lnetcdff
--has-f90 ->
--has-f03 -> yes
--has-dap -> no
--has-dap2 -> no
--has-dap4 -> no
--has-nc2 -> yes
--has-nc4 -> yes
--has-hdf5 -> yes
--has-hdf4 -> no
--has-logging -> no
--has-pnetcdf -> no
--has-szlib -> yes
--has-cdf5 -> yes
--has-parallel4 -> yes
--has-parallel -> yes
--has-nczarr -> no
--prefix -> /media/case/work/hwrf/libraries/netcdf
--includedir -> /media/case/work/hwrf/libraries/netcdf/include
--libdir -> /media/case/work/hwrf/libraries/netcdf/lib
--version -> netCDF 4.8.0
So far, so good: everything installed without any failure, and all make check tests passed.
The trouble arises when I try to compile PnetCDF from source. I tried PnetCDF version 1.11.2 for installation (in fact, I tried several older versions as well, more on that later). The installation is attempted as follows:
./configure --prefix=${DIR}/pnetcdf --enable-large-single-req --enable-netcdf4 --enable-shared=yes --with-netcdf4="${DIR}/netcdf" MPICC=mpicc MPIF77=mpif77 MPIF90=mpif90
make
make tests
make check
make ptest
make ptests
But, it fails at make ptest. The failed test is tst_rec_vars.c and the error message is:
Error at line 78 of tst_rec_vars.c: expect dim X len 4 but got 3
Error at line 78 of tst_rec_vars.c: expect dim X len 4 but got 3 fail with 2 mismatches
make[2]: *** [Makefile:1354: ptest4] Error 1
make[2]: Leaving directory '/media/case/work/hwrf/libraries/source/pnetcdf-1.12.2/test/nc4'
make[1]: *** [Makefile:780: ptests] Error 1
make[1]: Leaving directory '/media/case/work/hwrf/libraries/source/pnetcdf-1.12.2/test'
make: *** [Makefile:965: ptests] Error 1
Here's the test that is failing (GitHub link): tst_rec_vars.c
It seems that the test checks the length of a dimension in a NetCDF file. I assume the file it is checking is tst_rec_vars.nc. The output of ncdump -h on this file is given below:
netcdf tst_rec_vars {
dimensions:
X = UNLIMITED ; // (4 currently)
variables:
int U(X) ;
int V(X) ;
}
That is, the NetCDF file's X dimension does have a length of 4, although the test program believes it is only 3. But if I run the test manually as ./tst_rec_vars, it passes:
*** TESTING C tst_rec_vars for record variables to NetCDF4 file ------ pass
Could it be due to parallel processing/threading? The test in question is run under a section titled test/nc4: Parallel testing on 4 MPI processes. I assume the NetCDF file was checked before it was completely written, which could explain why ncdump once showed an X dimension of length 1:
>>>ncdump -h tst_rec_vars.nc
netcdf tst_rec_vars {
dimensions:
X = UNLIMITED ; // (1 currently)
variables:
int U(X) ;
int V(X) ;
}
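One way to narrow this down, assuming mpiexec from the MPICH build above is on the PATH, is to launch the test by hand from the PnetCDF build tree under the same 4-process MPI run that ptest uses and inspect the file afterwards:
cd test/nc4
mpiexec -n 4 ./tst_rec_vars
ncdump -h tst_rec_vars.nc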
What exactly is going on here? How do I fix it?
Edit:
I'm trying to install it on CentOS 7. Since I was encountering the above-mentioned issue, I also tried to install it on a Debian 11 system, but I get the exact same error on both systems.
The goal is to compile the latest stable GSL to WebAssembly and make it available as a Node.js module.
I tried the following procedure inspired by this section of the emscripten manual:
git clone git://git.savannah.gnu.org/gsl.git
cd gsl
git checkout tags/release-2-6
autoreconf -i
emconfigure ./configure
emmake make
Unfortunately, I get multiple wasm-ld: error: duplicate symbol errors.
But compiling the GSL (make) works perfectly fine.
I am using emsdk version 2.0.16 on Ubuntu 18.04.
Does anyone know how to fix this problem?
Help will be much appreciated.
I finally found a solution, inspired by this gist. The problem had to do with dynamic linking and shared libraries.
Anyway, the following code successfully compiles the GSL to a Node.js module:
git clone git://git.savannah.gnu.org/gsl.git
cd gsl
git checkout tags/release-2-6
autoreconf -i
emconfigure ./configure
# Note the flag indicating STATIC linking:
# -------------------------======---------
emmake make LDFLAGS=-all-static
emcc -g -O2 -o .libs/gsl.js -s MODULARIZE -s EXPORTED_RUNTIME_METHODS=\[ccall\] -s LINKABLE=1 -s EXPORT_ALL=1 ./.libs/libgsl.a -lm
The above creates the Node.js module ./.libs/gsl.js, which can be used as shown in the following example script:
// test_gsl.js
var factory = require('./.libs/gsl.js');

factory().then((instance) => {
    // Compute the value of the Bessel function for x = 5.0:
    var besselRes = instance._gsl_sf_bessel_J0(5.0);

    // Calculate the hypergeometric cumulative probability distribution for:
    //  4: Number of successes (white balls among the taken)
    //  7: Number of white balls in the urn
    // 19: Number of black balls in the urn
    // 13: Number of balls taken
    var hg = instance._gsl_cdf_hypergeometric_P(4, 7, 19, 13);

    // Do the same using `ccall`
    var hg_ccal = instance.ccall("gsl_cdf_hypergeometric_P",
                                 "number",
                                 ["number", "number", "number", "number"],
                                 [4, 7, 19, 13]);

    console.log(`besselRes is: ${besselRes}`);
    console.log(`gsl_cdf_hypergeometric_P(4,7,19,13): ${hg}`);
    console.log(`ccall("gsl_cdf_hypergeometric_P",4,7,19,13): ${hg_ccal}`);
});
Please note that all GSL functions are prefixed with an underscore (_) in the generated Node.js module.
The above script (node ./test_gsl.js) generates this output:
besselRes is: -0.17759677131433826
gsl_cdf_hypergeometric_P(4,7,19,13): 0.8108695652173905
ccall("gsl_cdf_hypergeometric_P",4,7,19,13): 0.8108695652173905
I sincerely hope this post helps someone.
Have a good one and cheers!
I'm trying to compile the same library on two separate x86 machines.
Both use the same toolchain (exactly the same set of files) but have different glibc versions.
When I run the command LD_DEBUG=libs /lib64/ld-linux-x86-64.so.2 --list ./libl2ps.so, I notice the following discrepancy between the two Linux loaders:
Machine 1 (with Glibc 2.12):
19943: find library=libm.so.6 [0]; searching
19943: search path=/ebs/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64:...:/ebs/frperies/repo/gnb/uplane/build_bbp/l2_ps/build/. (RPATH from file ./libl2ps.so)
19943: trying file=/ebs/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm.so.6
19943:
19943: find library=libgcc_s.so.1 [0]; searching
...
In this case the Linux loader selects libm.so.6 from the toolchain path, based on the RPATH of libl2ps.so.
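For completeness, the RPATH that the loader is searching can be dumped directly from the library itself:
readelf -d ./libl2ps.so | grep -E 'RPATH|RUNPATH'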
Machine 2 (with Glibc 2.17):
10699: find library=libm.so.6 [0]; searching
10699: search path=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64:/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/lib64:/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib:/home/frperies/repo/gnb/uplane/build_bbp/l2_ps/build/. (RPATH from file ./libl2ps.so)
10699: trying file=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm.so.6
10699: trying file=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/lib64/libm.so.6
10699: trying file=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib/libm.so.6
10699: trying file=/home/frperies/repo/gnb/uplane/build_bbp/l2_ps/build/./libm.so.6
10699: search cache=/etc/ld.so.cache
10699: trying file=/lib64/libm.so.6
As on Machine 1, the loader tries to pick libm.so.6 from the toolchain path given in the RPATH of libl2ps.so, but skips it for some reason and tries the other paths. Finally it selects libm.so.6 from the system path /lib64/.
The RPATHs of the two libl2ps.so files are exactly the same. The two libm.so.6 files are also exactly the same on both machines (checked with md5sum).
I don't understand this difference in behavior between the two Linux loaders.
Do you see any reason that would explain this discrepancy?
Thank you very much for your answers.
Update:
Thank you yugr for your answer.
The output of readelf -h differs only in the "Entry point address" and "Start of section headers" fields; there are no other differences, so I don't think it will help.
Regarding dlopen()/dlerror(): I've built a little executable with the following statement:
dlopen("/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so", RTLD_LAZY);
On machine 1 it works as expected:
C++ dlopen demo
Opening libm-2.28.so...
Closing library...
On machine 2 it fails and dlerror() gives the following output:
Cannot open library: /home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so: cannot open shared object file: No such file or directory
but the file libm-2.28.so really does exist on my file system:
$ ls -l /home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so
-rwxr-xr-x 1 frperies linseeusers_lte_espoo 1682944 Oct 5 13:50 /home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so
This is very weird. What could lead to this situation?
Thanks
Update 2:
It is true that I hadn't pointed out that machine 1 is a RHEL 6.8 distro while machine 2 is a RHEL 7.4 distro. I (naively?) didn't think it really mattered...
On machine 1:
$ cat /proc/sys/kernel/osrelease
4.4.115-1.NSN.el6.x86_64
$ uname -a
Linux sq24-3 4.4.115-1.NSN.el6.x86_64 #1 SMP Mon Feb 12 12:35:46 CET 2018 x86_64 x86_64 x86_64 GNU/Linux
$ readelf -n libl2ps.so
Notes at offset 0x00000270 with length 0x00000024:
Owner Data size Description
GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)
Build ID: b598468830fdf2f61eda25553b9a367c4d28cdc9
On machine 2:
$ cat /proc/sys/kernel/osrelease
3.10.0-693.el7.x86_64
$ uname -a
Linux localhost.localdomain 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
$ readelf -n libl2ps.so
Displaying notes found at file offset 0x00000270 with length 0x00000024:
Owner Data size Description
GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)
Build ID: 5829181bc0502233748149369108915ea7b10e8f
Does that help?
Thanks
Update 3:
$ readelf -n libm.so.6
Notes at offset 0x00000238 with length 0x00000024:
Owner Data size Description
GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)
Build ID: 0d84c7247dd76008c096719043e5592735a1c4bd
Notes at offset 0x0000025c with length 0x00000020:
Owner Data size Description
GNU 0x00000010 NT_GNU_ABI_TAG (ABI version tag)
OS: Linux, ABI: 4.4.0
So, how should I interpret this ABI version number of 4.4.0?
Thanks
Thank you yugr and Employed Russian for your answers! I will give it a try by upgrading the kernel version on machine 2.
Thanks and regards
The error message that you see is the infamously confusing ENOENT errno. I see two instances of it in dl-load.c:
checking OS compatibility
loading non-setuid to setuid process
I suspect the first one fails in your case, which would mean that the OS kernel is incompatible between the two machines. The ld.so manpage indeed says:
Each shared object can inform the dynamic linker of the minimum kernel ABI version that it requires. (This requirement is encoded in an ELF note section that is viewable via readelf -n as a section labeled NT_GNU_ABI_TAG.) At run time, the dynamic linker determines the ABI version of the running kernel and will reject loading shared objects that specify minimum ABI versions that exceed that ABI version.
NT_GNU_ABI_TAG is 4.4.0, which means that you are running a program expecting a minimum 4.4 kernel on a 3.10 kernel. Theoretically a newer glibc should run on older kernels as well, but in your case glibc was probably built with an explicit --enable-kernel flag, which prevents its use on kernels before 4.4 (see e.g. this explanation of --enable-kernel).
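This interpretation can be checked on machine 2 by comparing the running kernel with the ABI tag notes, reusing the commands from your updates (the toolchain sysroot path is the one from the question, abbreviated here):
# kernel the machine actually runs (3.10 on machine 2)
uname -r
# minimum kernel ABI demanded by the toolchain libm (4.4.0, hence rejected)
readelf -n <toolchain-sysroot>/usr/lib64/libm.so.6 | grep -A 2 ABI_TAG
# minimum kernel ABI of the system libm that does get loaded (presumably <= 3.10)
readelf -n /lib64/libm.so.6 | grep -A 2 ABI_TAG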
As a workaround, you may try to fool glibc by overriding the kernel version on machine 2 via
export LD_ASSUME_KERNEL=4.4.0
but it may not work if libm makes 4.4-specific syscalls that are not really present on 3.10.
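Whether the override changes anything can be checked quickly by combining it with the LD_DEBUG command from the question:
LD_ASSUME_KERNEL=4.4.0 LD_DEBUG=libs /lib64/ld-linux-x86-64.so.2 --list ./libl2ps.so 2>&1 | grep libm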
How do I find the current version of the Yocto kernel that I am using to build the components? There is a version for Poky, but I want to know the Yocto kernel version.
How to find out Yocto version?
Check out this file; it gives you full details about the Yocto version:
vim $POKY-DIR/meta-poky/conf/distro/poky.conf
You will get info like:
DISTRO = "poky"
DISTRO_NAME = "Poky (Yocto Project Reference Distro)"
DISTRO_VERSION = "2.7.2"
DISTRO_CODENAME = "warrior"
SDK_VENDOR = "-pokysdk"
SDK_VERSION = "${#d.getVar('DISTRO_VERSION').replace('snapshot-${DATE}', 'snapshot')}"
.....
.....
Now you will know the version you are actually using.
To find the kernel version you are using:
bitbake -e virtual/kernel | grep "^PV"
To find the Yocto kernel version you are using, just type:
bitbake -e virtual/kernel | grep "^PV"
And to find the name of the kernel recipe you are using, type bitbake -e virtual/kernel | grep "^PN"
And if there is any other kernel recipe whose version you want to know, type bitbake -e <kernel_name> | grep "^PV"
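For example, if the kernel provider in your build is the standard linux-yocto recipe (an assumption; substitute your actual kernel recipe name):
bitbake -e linux-yocto | grep "^PV"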
I hope this will be helpful.
An easier way to get this, together with some other Yocto-related information, without launching bitbake is to do the following:
cat $POKY-DIR/documentation/poky.ent | grep "DISTRO_REL_TAG"
For DISTRO info:
cat $POKY-DIR/documentation/poky.ent | grep "DISTRO*"
For POKY info:
cat $POKY-DIR/documentation/poky.ent | grep "POKY*"
For YOCTO info:
cat $POKY-DIR/documentation/poky.ent | grep "YOCTO*"
cd poky
grep "DISTRO_" meta-poky/conf/distro/poky.conf