How to check libc version? - c

This question is related to Why does pclose return prematurely?. I'd like to find out what version of libc is used for a cross-compiled executable. There are limitations, described below, that make the answers at Check glibc version for a particular gcc compiler not apply.
One proposed way to check the libc version is to use the gnu_get_libc_version() function declared in gnu/libc-version.h. My cross-toolchain does not include libc-version.h.
Another proposed solution is to use the -print-file-name gcc option. This answer in the linked question just flat-out didn't work for me:
$ /path/to/toolchains/ARM-cortex-m3-4.4/bin/arm-uclinuxeabi-gcc -print-file-name=libc.so
libc.so
$
$ /path/to/toolchains/ARM-cortex-m3-4.4/bin/arm-uclinuxeabi-gcc -print-file-name=foo.bar
foo.bar
$ # I really do not have a foo.bar file in existence
Another proposed solution is to just do ldd --version. My target platform doesn't have ldd:
$ ldd
sh: can't execute 'ldd': No such file or directory
Another proposed solution is to look at __GLIBC__ and __GLIBC_MINOR__ -- but these also appear to come from libc-version.h, which doesn't exist in my cross-toolchain, as described above.
My cross-toolchain seems to only provide libc.a, not libc.so.
I tried running that libc.a through /path/to/toolchains/ARM-cortex-m3-4.4/bin/arm-uclinuxeabi-nm and strings, grepping case-insensitively for "version" and "libc", but did not find anything that looked like an identifying version string.
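For reference, the checks were roughly along these lines (the exact grep patterns here are an assumption, not a verbatim transcript of what I ran):
$ /path/to/toolchains/ARM-cortex-m3-4.4/bin/arm-uclinuxeabi-nm libc.a | grep -i version
$ strings libc.a | grep -iE 'version|libc'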
The last thing I tried was strings /path/to/toolchains/ARM-cortex-m3-4.4/bin/arm-uclinuxeabi-gcc | grep GLIBC, which gave me:
GLIBC_2.3
GLIBC_2.2
GLIBC_2.1
GLIBC_2.0
EGLIBC configuration specifier, serves multilib purposes.
But that solution wasn't highly upvoted, and it also has a comment suggesting that it doesn't really give you the version. I don't really understand this answer or its responding comment, so I don't know what to make of its validity.
Question: given all the above, is there any definitive way to determine the libc version used when cross-compiling for this target platform?

You might be dealing with a variant of libc other than glibc. There are multiple different implementations of libc, such as musl or uclibc.
Here's a Bash script which can detect whether your compiler is using glibc or uclibc, and tells you the version if it detects either.
GCC_FEATURES=$(gcc -dM -E - <<< "#include <features.h>")

if grep -q __UCLIBC__ <<< "${GCC_FEATURES}"; then
    echo "uClibc"
    grep "#define __UCLIBC_MAJOR__" <<< "${GCC_FEATURES}"
    grep "#define __UCLIBC_MINOR__" <<< "${GCC_FEATURES}"
    grep "#define __UCLIBC_SUBLEVEL__" <<< "${GCC_FEATURES}"
elif grep -q __GLIBC__ <<< "${GCC_FEATURES}"; then
    echo "glibc"
    grep "#define __GLIBC__" <<< "${GCC_FEATURES}"
    grep "#define __GLIBC_MINOR__" <<< "${GCC_FEATURES}"
else
    echo "something else"
fi
(Source.)
If you're using musl, unfortunately this script will report "something else." There's no way to detect musl with a preprocessor macro, and this is intentional.
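For a cross-toolchain like the one in the question, the same check can be pointed at the cross compiler instead of the host gcc. Here's a minimal sketch, assuming the toolchain's arm-uclinuxeabi-gcc accepts the usual -dM -E preprocessor flags and ships a features.h:
CROSS_GCC=/path/to/toolchains/ARM-cortex-m3-4.4/bin/arm-uclinuxeabi-gcc
GCC_FEATURES=$("${CROSS_GCC}" -dM -E - <<< "#include <features.h>")
# Print whichever version macros are defined (uClibc or glibc).
grep -E "__UCLIBC_(MAJOR|MINOR|SUBLEVEL)__|__GLIBC(_MINOR)?__" <<< "${GCC_FEATURES}"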

Related

How can I generate a Syntastic config file using premake?

Syntastic is a source code linter plugin for the Vim editor.
It does various syntax and heuristic checks, using external tools. In the case of C and C++ code, this frequently involves running a compiler on the code.
In order to invoke the compiler, Syntastic reads a config file that contains command-line arguments that should be used to invoke the compiler.
Obviously, the "real" compilation in the project is handled by premake but this means there are potentially two sources of truth -- the compiler flags written into the Syntastic config file, and the compiler flags written by premake into the build scripts.
I'd like to resolve this by having premake generate the Syntastic config file as well.
This seems like a fairly straightforward task with various possible approaches -- generate a fake compiler, invent a pre-build task, etc. But I don't know enough about the innards of premake to know which of these approaches is the right one.
How can I get premake to generate my syntastic.config file?
I looked into a few different kinds of projects under premake5 -- there have been several added to the documentation recently. None of them quite got me there.
The problem is that what I want to do is either "always create the file" or "create the file when the premake5.lua file changes." That makes things difficult, but not totally impossible.
However, the generated file is not an input file for any of the projects, so that apparently pushes it over the edge from difficult to "not possible ATM".
I tried doing what premake calls Custom Build Commands, but this needs the generated file to be a piece of source code it can use to build with. If this isn't the case, apparently the dependency graph breaks and the makefile is generated without a dependency on the file. :(
To get this working, I opted to simply regenerate the file every time using a prebuild command:
prebuildcommands {
    "$(SILENT) "
        ..repo_dir.."etc/write-syntastic-config.sh $(ALL_CFLAGS)"
        .." > "
        ..repo_dir.."etc/syntastic.config"
}
The key value in this is $(ALL_CFLAGS), the variable used by the Makefile generator to hold the stuff I want.
Syntastic's config file, and its handling of that file, is a bit touchy. It doesn't accept -L and -l options, or perhaps gcc doesn't accept them in whatever mode it is invoked in. Regardless, some filtering is required before the args can be used.
Also, syntastic processes the lines looking specifically for -I, which it then handles in a special way: the include paths are treated as either absolute, or relative to the config file.
Finally, syntastic doesn't appear to know about -isystem, another include-directory option for gcc, so that has to be handled differently.
If anyone cares, here's the script I'm using:
etc/write-syntastic-config.sh
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

PENDING=""
PENDING_DIR=false

printf '# Generated by %s\n' "$(basename "$0")"
CFG_FILE_DIR="$(dirname "$0")"

for ARG in "$@"
do
    #printf >&2 "arg=>%s<\\n" "$ARG"
    case "$ARG" in
        -I| \
        -L)
            PENDING="$ARG"
            PENDING_DIR=true
            ;;
        -isystem)
            # -isystem is like -I, except syntastic doesn't know it
            printf '%s\n' "$ARG"
            PENDING_DIR=true
            ;;
        -I*| \
        -L*)
            PENDING_DIR=true
            PENDING="${ARG:0:2}"
            ARG="${ARG/#-?/}"
            ;;& #<-- Resume matching cases (i.e., "goto next case")
        *)
            if $PENDING_DIR
            then
                ARG="$(realpath -m \
                    --relative-to "$CFG_FILE_DIR" \
                    "$ARG")"
            fi
            case "$PENDING" in
                -L) : ignore it ;;
                "") printf '%s\n' "$ARG" ;;
                *)  printf '%s %s\n' "$PENDING" "$ARG" ;;
            esac
            PENDING=""
            PENDING_DIR=false
            ;;
    esac
done
exit 0
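For illustration, given flags such as -DFOO -Isrc/include -isystem vendor/include -Lbuild/lib -std=c99, the script would emit something along these lines (the flags and relative paths are hypothetical, made up for this example):
# Generated by write-syntastic-config.sh
-DFOO
-I ../src/include
-isystem
../vendor/include
-std=c99
The -L option and its directory are dropped, and include directories are rewritten relative to the config file's location.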

How to trace specific functions/files in C?

I already know that GCC's -finstrument-functions option can hook functions, and that -finstrument-functions-exclude-file(functions)-list can exclude certain files/functions from being traced.
But now I have a lot of files to compile and only some of them need to be traced. I wonder if I can select specific functions/files to be traced, with something like a -finstrument-functions-include-file(functions)-list option?
Thanks a lot!
GCC does not support this out-of-the-box (it's more a task for your build system). One common hack to achieve what you want is to write a shell wrapper which replaces GCC and adds flags where needed:
$ cat path/to/fake/gcc
#!/bin/sh
FLAGS=
if echo "$*" | grep -q 'myfile1.c'; then
    FLAGS=-finstrument-functions
fi
exec /usr/bin/gcc "$@" $FLAGS
$ export PATH="path/to/fake:$PATH"
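The wrapper also has to be executable before the PATH lookup will use it. A usage sketch (the make invocation is only an example; any build that invokes gcc by name will go through the wrapper):
$ chmod +x path/to/fake/gcc
$ which gcc          # should now resolve to path/to/fake/gcc
$ make CC=gcc        # only command lines mentioning myfile1.c get -finstrument-functions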
If you use CMake to build your project, you may benefit from adding compile options at a specific level. Use add_compile_options() for directory-wide settings, target_compile_options() for target-specific settings, and set_source_files_properties() for file-specific settings.
In your case
set_source_files_properties(myfile1.cc
    PROPERTIES COMPILE_FLAGS -finstrument-functions)
Recent GCC compilers can be extended by GCC plugins.
But now I have a lot of files to be compiled and only some of them need to be traced.
You should consider writing your own GCC plugin to do that job. See also this draft report.
You may configure your build automation tool (e.g. GNU make or ninja) to help you.
Finally, some of your C code (e.g. #include-ed files) could be generated. Think of meta-programming approaches (e.g. with SWIG, ANTLR, Bison, GPP, or your own C code generator), perhaps using X-macros.

Finding path of Builtins and executables for commands in Linux

I am trying to implement the 'whereis' command in C, but I have only been able to implement it partially. Whenever I try 'whereis' in a Linux shell, say for example whereis ls, I get the following results:
$ whereis ls
/bin/ls
/usr/share/man/man1p/ls.1p.gz
/usr/share/man/man1/ls.1.gz
I am able to get the first path using the PATH env. variable, but I have no clue how to find the other two paths. Any pointers on how to find those paths?
On Linux (but not on all systems, e.g. Mac OS), whereis searches in $MANPATH (or some other default places) for matching files, which for ls are something like this:
$MANPATH/man(.+)/ls\.\1(\.gz)?
If you really need to know how whereis works, you can simply look at its source....
man whereis (Ubuntu 11.04) mentions the following paths:
/{bin,sbin,etc}
/usr/{lib,bin,old,new,local,games,include,etc,src,man,sbin,X386,TeX,g++-include}
/usr/local/{X386,TeX,X11,include,lib,man,etc,bin,games,emacs}
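Putting those pieces together, here's a minimal shell sketch of the kind of search whereis performs; the directory lists are abbreviated and the MANPATH handling assumes a colon-separated value:
#!/bin/bash
# Sketch only: look for a command in a few of the default binary directories,
# then look for matching man pages under MANPATH (or a common default).
CMD=ls
for d in /bin /sbin /usr/bin /usr/sbin /usr/local/bin; do
    [ -x "$d/$CMD" ] && echo "$d/$CMD"
done
IFS=: read -ra MANDIRS <<< "${MANPATH:-/usr/share/man:/usr/local/share/man}"
for d in "${MANDIRS[@]}"; do
    find "$d" -name "$CMD.*" 2>/dev/null
done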
Another option generally available is which. It will return the fully-qualified path and executable name for the executable. For example:
$ which ls
/usr/bin/ls
It may help you in your whereis endeavour, and it is also useful for portability in scripts, to locate an executable that may live in different places on different distributions:
my_ls=$(which ls 2>/dev/null)
[ -x "$my_ls" ] || {
    echo "ls not found"
    exit 1
}

How can I find the header files of the C programming language in Linux?

When I write C programs in Linux, and then compile them using gcc, I am always curious about where those header files are. For example, where stdio.h is. More generally, where is stdbool.h?
What I want to know is not only where they are, but also how to find those locations, for example using a shell command or from within a C program.
gcc -H ... will print the full path of every include file as a side-effect of regular compilation. Use -fsyntax-only in addition to get it not to create any output (it will still tell you if your program has errors). Example (Linux, gcc-4.7):
$ cat > test.c
#include <stdbool.h>
#include <stdio.h>
^D
$ gcc -H -fsyntax-only test.c
. /usr/lib/gcc/x86_64-linux-gnu/4.7/include/stdbool.h
. /usr/include/stdio.h
.. /usr/include/features.h
... /usr/include/x86_64-linux-gnu/bits/predefs.h
... /usr/include/x86_64-linux-gnu/sys/cdefs.h
.... /usr/include/x86_64-linux-gnu/bits/wordsize.h
... /usr/include/x86_64-linux-gnu/gnu/stubs.h
.... /usr/include/x86_64-linux-gnu/bits/wordsize.h
.... /usr/include/x86_64-linux-gnu/gnu/stubs-64.h
.. /usr/lib/gcc/x86_64-linux-gnu/4.7/include/stddef.h
.. /usr/include/x86_64-linux-gnu/bits/types.h
... /usr/include/x86_64-linux-gnu/bits/wordsize.h
... /usr/include/x86_64-linux-gnu/bits/typesizes.h
.. /usr/include/libio.h
... /usr/include/_G_config.h
.... /usr/lib/gcc/x86_64-linux-gnu/4.7/include/stddef.h
.... /usr/include/wchar.h
... /usr/lib/gcc/x86_64-linux-gnu/4.7/include/stdarg.h
.. /usr/include/x86_64-linux-gnu/bits/stdio_lim.h
.. /usr/include/x86_64-linux-gnu/bits/sys_errlist.h
The dots at the beginning of each line count how deeply nested the #include is.
If you use gcc, you can check a specific file with something like:
echo '#include <stdbool.h>' | cpp -H -o /dev/null 2>&1 | head -n1
-H asks the preprocessor to print all included files recursively. head -n1 keeps just the first line of that output, ignoring any files included by the named header itself (though stdbool.h in particular probably doesn't include any).
On my computer, for example, the above outputs:
. /usr/lib/gcc/x86_64-linux-gnu/4.6/include/stdbool.h
locate stdio.h
or
mlocate stdio.h
but locate relies on a database, so if you have never updated it, run:
sudo updatedb
You can also query gcc to find out which default directories are scanned by gcc itself:
gcc -print-search-dirs
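gcc -print-search-dirs reports the compiler's install, program, and library directories. To see specifically where #include <...> headers are searched, you can also ask for the preprocessor's verbose output, for example:
$ echo | gcc -x c -E -v - 2>&1 | sed -n '/#include <...> search starts here:/,/End of search list./p'
This prints the system include directories in the order they are searched.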
During preprocessing, all preprocessor directives are resolved: macros are expanded, comments are removed, header file contents are included, and so on.
You can check this using cpp, the C preprocessor. For example, on the command line:
cpp Filename.c
displays the preprocessed output.
One approach, if you know the name of the include file, would be to use find:
cd /
find . -name "stdio.h"
find . -name "std*.h"
That'll take a while as it goes through every directory.
Use gcc -v and you can check the include path.
Usually, the include files are in /usr/include or /usr/local/include depending on the library installation.
Most standard headers are stored in /usr/include. It looks like stdbool.h is stored somewhere else, and depends on which compiler you are using. For example, g++ stores it in /usr/include/c++/4.7.2/tr1/stdbool.h whereas clang stores it at /usr/lib/clang/3.1/include/stdbool.h.
I think the generic path is:
/usr/lib/gcc/$(ls /usr/lib/gcc/)/$(gcc -v 2>&1 | tail -1 | awk '{print $3}')/include/stdbool.h
When I was looking for stdio.h (on Fedora 25) I used "whereis stdio.h". For me, it was in /usr/include/stdio.h and /usr/share/man/man3/stdio.3.gz. When you are looking for a file, use whereis or locate.
Use vim to open your source file, put the cursor on stdio.h, and in normal mode the command 'gf' will make vim open the stdio.h file for you.
'Ctrl + g' will then make vim display the absolute path of stdio.h.

Determine if C library is installed on Unix

As a follow-up question to my last one, is there any simple way to tell if a given C library is installed on a given machine (not programmatically, just as a one-off sort of thing)? I need to use libuuid, but I'm not sure if it's installed on the machines in question. The only two ways I can think of are:
1) Try to compile the code there (more work than I'd like to do)
2) Try something like "man libuuid", although it seems like this wouldn't always be reliable if for some reason the manpages didn't get installed.
Is there some better other way?
Have you considered using autoconf? It's designed to check and see whether the build environment is set up correctly.
The simplest way is to invoke ld with the -l option. This will effectively test the existence of the library, searching the standard library locations automatically:
$ ld -luuid
ld: warning: cannot find entry symbol _start; not setting start address
$ echo $?
0
$ ld -luuidblah
ld: cannot find -luuidblah
$ echo $?
1
# so...
$ ld -luuid 2>/dev/null && echo "libuuid exists" || echo "libuuid not found"
EDIT
As dreamlax pointed out, this does not work on all Unix variants. I don't know whether it will work on all of them (I've tested Linux and OpenBSD), but you can try this instead:
$ echo "int main(){}" | gcc -o /dev/null -x c - -luuid 2>/dev/null
$ echo $?
0
$ echo "int main(){}" | gcc -o /dev/null -x c - -luuidblah 2>/dev/null
$ echo $?
1
Here's what I did using autoconf, which I'm showing here as a solid example for whoever else might come by next:
I created the file configure.ac which contained the following:
AC_INIT(package, 1.1, email)
AC_CHECK_LIB(uuid, uuid_generate_random, [echo "libuuid exists"], [echo "libuuid missing"])
I then ran the following commands in order (same folder I made configure.ac):
autoconf
./configure
At the end of the configure, it spat back whether or not it had found uuid_generate_random in the uuid library. Seemed to work perfectly (although unfortunately, two of the OSes were missing the library, but that's a whole other problem).
For anybody who may find this after the fact, the AC_INIT arguments here are throwaways and you can copy them wholesale. The arguments to AC_CHECK_LIB are: library name, the name of a function in that library, what to do on success, what to do on failure.
Even though Mehrdad's answer wasn't quite as helpful as I would have liked (i.e. to not have spent time trolling the docs) it seems to be the correct one and I'll be accepting it. mhawke: I really liked your answer, but I wasn't quite sure how to test to make sure it worked. It seemed to on SunOS, but always said no on the other two (AIX, HPUX) and I couldn't seem to come up with a library off the top of my head I could guarantee it would find.
Thanks for the help guys.
The autotools, as mentioned, check for symbols within libraries, and the way they do it is fairly simple. As you mention in 1), autoconf, and the configure script it produces, basically creates a dummy C program and attempts to link it with the library in question. If the link succeeds, the library works; if it fails, it obviously won't. Autoconf looks for specific symbols/function names in the library, though.