gcc for ARM refuses to include Newlib standard directories - c

I am trying to cross-compile with arm-none-eabi-gcc 9.2.x and ran into the following problem:
undefined symbol 'PRIu64'
(message shortened to the necessary minimum by me), which was caused by the Newlib header inttypes.h doing a:
#include <stdint.h>
which motivated gcc to include its onboard stdint.h from
/usr/lib/gcc/arm-none-eabi/9.2.1/include
instead of the Newlib one in
/usr/include/newlib
thereby breaking the compilation with the above error.
Of course I first tried to prefix the include path search with the usual
arm-none-eabi-gcc-9.2.1 -I/usr/include/newlib ...
but to my great surprise gcc spat it back at me (seen via -xc -E -v) with:
ignoring duplicate directory "/usr/include/newlib"
as it is a non-system directory that duplicates a system directory
Only a
arm-none-eabi-gcc-9.2.1 -isystem /usr/include/newlib ...
convinced it to include the Newlib directory in its search.
Is this due to a broken installation? And how dare gcc ignore a path I am explicitly supplying?
Do the ARM people ship their gcc with both Newlib and a set of vanilla gcc system headers, or where does this misconfiguration come from?

Indeed, Newlib provides <stdint.h>, and gcc provides its own as well. So when <inttypes.h> includes <stdint.h>, it picks up gcc's copy instead of Newlib's. That would not be a big deal if <stdint.h> did not define some macros used internally by <inttypes.h>.
The best options are to fix Newlib, change your compiler, or patch your system headers.
If that is not possible, you can include <sys/types.h> before <inttypes.h>: <sys/types.h> includes <_stdint.h>, which defines the necessary macros.
It seems the problem is specific to the arm-none-eabi-gcc packages provided by Debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=953844

Related

Checking whether a library exists via the preprocessor

There are two headers, zconf.h and unistd.h, which are used (among other things) to get the PID of the process. I generally test my code on macOS and Ubuntu 18.04, where the compiler offers zconf.h in lieu of unistd.h if I forget to add the include, and if the code then works, it's OK. However, some time ago I needed to test the code on another machine running (AFAIR) Ubuntu 10 or 12, and its compiler complained that there is no zconf.h. I wonder whether there is a way to check whether a machine has zconf.h and, if not, use unistd.h instead. Can it be done with the preprocessor, like:
#ifdef ITS_IF_CONDITION
#include <zconf.h>
#else
#include <unistd.h>
#endif
Newer versions of the GCC, Clang, and MSVC compilers implement the __has_include feature. Although it is a C++17 feature, I believe all three support it in plain C too.
But the traditional (and probably more portable) way is to check the existence of include files in a config script before the build process. Both autoconf and cmake have ways to achieve this.
#ifdef __has_include
#if __has_include(<zconf.h>)
#include <zconf.h>
#else
#include <unistd.h>
#endif
#else
#include <unistd.h>
#endif

Why is stddef.h not in /usr/include?

I have compiled the GNU C library (glibc) and installed it in $GLIBC_INST.
Now I am trying to compile a very simple program (using only one include: #include <stdio.h>):
gcc -nostdinc -I$GLIBC_INST/include foo.c
The compilation (the preprocessor?) tells me that it cannot find stddef.h.
And indeed, there is none in $GLIBC_INST/include (nor is there one in /usr/include). However, I found a stddef.h in /usr/lib/gcc/x86_64-unknown-linux-gnu/5.3.0/include.
Why is that file not under /usr/include? I thought it belonged to the standard c library and should be installed in $GLIBC_INST/include.
How can I compile my foo.c with the newly installed standard library when it doesn't seem to come with a stddef.h?
Edit: Clarification
I feel that the title of this question is not optimal. As has been pointed out by some answers, there is not a requirement for stddef.h to be in /usr/include (or $GLIBC_INST/include, for that matter). I do understand that.
But I am wondering how I can proceed when I want to use $GLIBC_INST. It seems obvious to me (although I might be wrong here) that I need to invoke gcc with -nostdinc so that the system-installed header files are not used.
This entails that I use -I$GLIBC_INST/include. This is clear to me.
Yet, what remains unclear to me is: when I also add -I/usr/lib/gcc/x86..../include, how can I be sure that I do have in fact the newest header files for the freshly compiled glibc?
That's because the files under /usr/include are common headers provided by the C library (for example, glibc), while the files under /usr/lib/gcc are specific to that particular compiler. It is common for each compiler to ship its own implementation of stddef.h, but they will all use the same stdio.h when linking against the installed C library.
When you say #include <stddef.h> it does not require that /usr/include/stddef.h exists as a file on disk at all. All that is required of an implementation is that #include <stddef.h> works, and that it gives you the features that header is meant to give you.
In your case, the implementation put some of its files in another search path. That's pretty typical.
Why is that file not under /usr/include?
Because there's absolutely no requirement for standard headers to be located at /usr/include/.
The implementation could place them anywhere. The only guarantee is
that when you do #include <stddef.h>, the compiler/preprocessor correctly locates and includes it. Since you disable that with the -nostdinc option of gcc, you are on your own (and must supply the location of that header correctly yourself).

Custom glibc in non-standard path on machine with uclibc and gcc compiled against uclibc

I have a machine with uClibc, and I've managed to get glibc working on it using a simple wrapper I made.
It can compile simple programs like hello world, and almost any other C program.
But it fails to compile most GNU (and other) programs with the following error when they include limits.h:
In file included from /usr/glibc/include/limits.h:123:0,
from test.c:1:
/usr/lib/gcc/mips-openwrt-linux-uclibc/4.8.3/include/limits.h:125:26: error: no include path in which to search for limits.h
# include_next <limits.h>
What do I need to do to resolve this problem?
In case someone needs it: I found out how to get it to work. You need to remove gcc's limits.h, rename gsyslimits.h to limits.h, and edit glibc's limits.h to remove the macros that detect whether gcc's limits.h is in use. If anyone needs them, I can post both complete limits.h files.
Sorry for my English. I'm Russian

GCC linaro compiler throws error "unknown type name size_t"

I am using the GCC Linaro compiler to compile my code. It throws the error unknown type name size_t from libio.h, which is included from stdio.h. In my code I am only including stdio.h.
Can anyone please tell me how to resolve this error?
As per C99 §7.17, size_t is not a built-in type but is defined in <stddef.h>.
Including the <stddef.h> header should fix your problem.
For what it's worth, I had this exact same problem with a Qt project, where I was using a Linaro compiler (on both x86 Windows and x86 Linux) to build for ARM Linux. Using the exact same code and .pro file, I had no problems building on Windows, but I had a litany of errors building on the Linux box, beginning with the unknown type name 'size_t' in libio.h, which traced back to a #include <stdio.h>. I looked in the stdio.h (in the sysroot for the target hardware, not on the host machine), and a few lines down was #include <stddef.h> (well before #include <libio.h>), so stddef.h was definitely getting included. However, upon further inspection, stddef.h was completely empty, with a file size of 1 byte. This was true for stddef.h both in my sysroot and on my host machine. I have no idea why these files were empty.
Anyway, turns out I had an extraneous INCLUDEPATH += /usr/include/linux in my .pro file. On my Linux build machine, this added -I/usr/include/linux to the Makefile generated by qmake. On my Windows build machine, this added -isystem /usr/include/linux to the Makefile generated by qmake. Once I commented this out, these lines were removed from the Makefiles and it built right up on both build machines. -isystem /usr/include/linux apparently never caused any trouble on the Windows build machine, so there was no harm in removing INCLUDEPATH += /usr/include/linux.
I don't really know why this fixed my problem, but I suspect it was some kind of conflict between header files. Perhaps it was mixing host header files with sysroot header files, or creating a circular dependency somehow. GCC documentation says that anything included with the -I option will take precedence over a system header file. My best advice for this problem is to take a hard look at exactly which header files are being included and where they are coming from.
Both stdio.h and stdlib.h provide the type size_t. They provide it because the functions declared in these headers either take a size_t parameter or return one. size_t itself is a typedef for an unsigned integer type, and it is also the type of the result of the sizeof operator.
And since the sizeof operator is built into the C programming language itself, not provided by some library, how can size_t be an unknown type name?

gcc 4.8.1: combining c code with c++11 code

I have not made much effort to discover the cause, but gcc 4.8.1 is giving me a lot of trouble compiling old sources that combine C and C++ plus some new C++11 stuff.
I've managed to isolate the problem in this piece of code:
# include <argp.h>
# include <algorithm>
which compiles fine with g++ -std=c++0x -c -o test-temp.o test-temp.C using version 4.6.3 on Ubuntu 12.04.
By contrast, with version 4.8.1, the same command line throws a lot of errors:
In file included from /home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/x86intrin.h:30:0,
from /home/lrleon/GCC/include/c++/4.8.1/bits/opt_random.h:33,
from /home/lrleon/GCC/include/c++/4.8.1/random:51,
from /home/lrleon/GCC/include/c++/4.8.1/bits/stl_algo.h:65,
from /home/lrleon/GCC/include/c++/4.8.1/algorithm:62,
from test-temp.C:4:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h: In function ‘__m64 _mm_cvtsi32_si64(int)’:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h:61:54: error: can’t convert between vector values of different size
return (__m64) __builtin_ia32_vec_init_v2si (__i, 0);
^
... and much more.
The same happens if I execute
g++ -std=c++11 -c -o test-temp.o test-temp.C ; again with version 4.8.1.
But, if I swap the header lines, that is
# include <algorithm>
# include <argp.h>
then all compiles fine.
Can someone enlighten me as to what is happening?
I ran into the same problem. As it was really annoying, I tracked it down to <argp.h>.
This is the code (in the system header argp.h) which triggers the error on Ubuntu 14.04 / gcc 4.8.2:
/* This feature is available in gcc versions 2.5 and later. */
# if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 5) || __STRICT_ANSI__
# define __attribute__(Spec) /* empty */
# endif
This is presumably there to keep the header compatible with old gcc and with strict ANSI mode. The problem is that --std=c++11 sets the __STRICT_ANSI__ macro.
I commented out the #define __attribute__(Spec) and the compilation worked fine!
As it is not practical to edit a system header, a workaround is to use g++ --std=gnu++11 instead of g++ --std=c++11, since that does not define __STRICT_ANSI__. It worked in my case.
It seems to be a bug in gcc.
This is a known bug, apparently some headers are missing extern "C" declarations at the right places:
I also just came across this issue with GCC 4.7.2 on Windows. It appears that all the intrin.h headers are missing the extern "C" part. Since the functions are always inlined, and thus the symbols never show up anywhere, this has not been a problem before. But now that another header declares those functions a second time, something must be done.
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56038
Two things come to mind:
1) A missing extern "C" in headers, which is not that rare.
2) Something wrong with data alignment. Perhaps you are using an STL container to store SSE types, which does not guarantee their alignment. In that case you should implement a custom allocator that performs aligned allocation. I would have expected that to compile fine and segfault at runtime instead, but who knows what compilers can detect these days :)
Here's something you might want to read on that topic: About memory alignment ; About custom allocators
P.S. A piece of your code would be nice.
