gcc 4.8.1: combining C code with C++11 code

I have not made much effort to discover the cause, but gcc 4.8.1 is giving me a lot of trouble compiling old sources that combine C and C++, plus some new C++11 features.
I've managed to isolate the problem in this piece of code:
# include <argp.h>
# include <algorithm>
which compiles fine with g++ -std=c++0x -c -o test-temp.o test-temp.C under version 4.6.3, Ubuntu 12.04.
By contrast, with version 4.8.1, the same command line throws a lot of errors:
In file included from /home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/x86intrin.h:30:0,
from /home/lrleon/GCC/include/c++/4.8.1/bits/opt_random.h:33,
from /home/lrleon/GCC/include/c++/4.8.1/random:51,
from /home/lrleon/GCC/include/c++/4.8.1/bits/stl_algo.h:65,
from /home/lrleon/GCC/include/c++/4.8.1/algorithm:62,
from test-temp.C:4:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h: In function ‘__m64 _mm_cvtsi32_si64(int)’:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h:61:54: error: can’t convert between vector values of different size
return (__m64) __builtin_ia32_vec_init_v2si (__i, 0);
^
... and much more.
The same happens if I execute
g++ -std=c++11 -c -o test-temp.o test-temp.C ; again, version 4.8.1
But, if I swap the header lines, that is
# include <algorithm>
# include <argp.h>
then all compiles fine.
Can someone enlighten me as to what is happening?

I ran into the same problem. As it is really annoying, I hacked it down to <argp.h>.
This is the code (in the standard glibc header argp.h) which triggers the error on Ubuntu 14.04 / gcc 4.8.2:
/* This feature is available in gcc versions 2.5 and later. */
# if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 5) || __STRICT_ANSI__
# define __attribute__(Spec) /* empty */
# endif
This is probably meant to keep the header compatible both with old gcc versions AND with strict ANSI C. The problem is that --std=c++11 sets the __STRICT_ANSI__ macro.
I commented out the #define __attribute__(Spec) line and the compilation worked fine!
As it is not practical to edit a system header, a workaround is to use g++ --std=gnu++11 instead of g++ --std=c++11, as it does not define __STRICT_ANSI__. It worked in my case.
It seems to be a bug in gcc.

This is a known bug, apparently some headers are missing extern "C" declarations at the right places:
I also just came across this issue with GCC 4.7.2 on Windows. It appears that all the intrin.h headers are missing the extern "C" part. Since the functions are always inline and thus the symbols never show up anywhere this has not been a problem before. But now that another header declares those functions a second time something must be done.
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56038

Two things come to mind:
1) It's a missing extern "C" in headers, which is not that rare.
2) Something is wrong with data alignment. Perhaps you are using an STL container to store SSE types, which doesn't guarantee the alignment those types require. In that case you should implement a custom allocator that uses aligned allocation (e.g. aligned_alloc). I would have expected that case to compile fine and segfault at runtime instead, but who knows what compilers can detect now :)
Here's something you might want to read on that topic: About memory alignment ; About custom allocators
p.s. a piece of your code would be nice

Related

GCC how to stop false positive warning implicit-function-declaration for functions in ROM?

I want to get rid of all implicit-function-declaration warnings in my codebase. But there is a problem because some functions are
programmed into the microcontroller ROM at the factory and during linking a linker script provides only the function address. These functions are called by code in the SDK.
During compilation gcc of course emits the warning implicit-function-declaration. How can I get rid of this warning?
To be clear, I understand why the warning is there and what it means. But in this particular case the developers of the SDK guarantee that the code will work with the implicit rules (i.e. an implicit function takes only ints and returns an int). So this warning is a false positive.
This is gnu-C-99 only, no c++.
Ideas:
Guess the argument types, write a prototype in a header and include that?
Tell gcc to treat such functions as false positive with some gcc attribute?
You can either create a prototype function in a header, or suppress the warnings with the following:
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wimplicit-function-declaration"
/* line where GCC complains about implicit function declaration */
#pragma GCC diagnostic pop
Write a small program that generates a header file romfunctions.h from the linker script, with a line like this
int rom_function();
for each symbol defined by the ROM. Run this program from your Makefiles. Change all of the files that use these functions to include romfunctions.h. This way, if the linker script changes, you don't have to update the header file by hand.
Because most of my programming expertise was acquired by self-study, I intentionally have become somewhat anal about resolving non-fatal warnings, specifically to avoid picking up bad coding habits. But, this has revealed to me that such bad coding habits are quite common, even from formally trained programmers. In particular, for someone like me who is also anal about NOT using MS Windows, my self-study of so-called platform-independent code such as OpenGL and Vulkan has revealed a WORLD of bad coding habits, particularly as I examine code written assuming the student was using Visual Studio and a Windows C/C++ compiler.
Recently, I encountered NUMEROUS non-fatal warnings as I designed an Ubuntu Qt Console implementation of an online example of how to use SPIR-V shaders with OpenGL. I finally threw in the towel and added the following lines to my qmake .PRO file to get rid of the non-fatal warnings (after, first, studying each one and convincing myself it could be safely ignored):
QMAKE_CFLAGS += -Wno-implicit-function-declaration \
                -Wno-address-of-packed-member
[Completely rewritten due to comments]
You are compiling the vendor SDK together with your own code. This is not typically what you want to do.
What you should do is build their SDK files with gcc -c -Wno-implicit-function-declaration, build your own files with gcc -c, and then link the results, e.g. gcc -o output all-your-c-files all-their-o-files.
C does not require that declarations be prototypes, so you can get rid of the problem (which arguably should be a hard error, not a warning, since implicit declarations have not been valid C since C99) by using a non-prototype declaration, which requires only knowing the return type. For example:
int foo();
Since "implicit declarations" were historically treated as returning int, you can simply use int for all of them.
If you are calling a standard library function such as printf, include its header, e.g.
#include <stdio.h>

How to use C only features in latest compilers?

I am using Code::Blocks 13.12 and it uses MinGW (GCC 4.7 & 4.8 series).
It supports call by reference (func1(int &a)) even though I am selecting a C project and not a CPP project. If I am not mistaken, there is no concept of call by reference in C; everything is call by value, even when pointers are used.
My question is: how do I use C-only features? Are there any settings for this? I saw that in the toolchain it is using mingw32-gcc.exe for C compilation.
How do I know which language standard (like C11, C99, etc.) it is really using?
Name your files with an extension of .c, and definitely not .cc or .cpp.
Compile with gcc as the command line, not g++.
And if in doubt, use the -std= command line parameter to force the flavor of C you want (e.g. -std=c90, -std=c99, or even -std=c11). There's also -ansi.
Also, a cheap and dirty way to validate if your code is getting compiled as C and not C++ is to add this block of code within your source code. If it's C++, then the compiler will generate an error.
#ifdef __cplusplus
int compile_time_assert[-1];
#endif

Compiling issue: undefined reference to pthread_cleanup_push_defer_np() and pthread_cleanup_pop_restore_np()

I am currently writing a C program with threads and I make use of pthread_cleanup_push_defer_np() and pthread_cleanup_pop_restore_np(). Provided that:
I have included pthread.h;
I am compiling with -pthread option;
I am on Ubuntu 14.04 and using gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1);
when compiling I get back a nasty "undefined reference" error for the above-mentioned functions and I can't figure out why. I have taken a look into pthread.h and it seems those two functions are commented out, so I am wondering whether I need to enable them in some way or use some other option. I've read the manual pages and googled it but I can't find a solution, so I would appreciate a little help. Here is a snippet:
void *start_up(void *arg)
{
    char *timestamp;
    // ... code ...
    timestamp = get_current_timestamp("humread");
    pthread_cleanup_push_defer_np(free, timestamp);
    // ... some other code ...
    pthread_cleanup_pop_restore_np(1);
    // ... more code ...
}
I compile with
gcc -pthread -o server *.c
The manual page for pthread_cleanup_push_defer_np and pthread_cleanup_pop_restore_np says these two (non-portable) functions are GNU extensions and are enabled by defining _GNU_SOURCE:
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
pthread_cleanup_push_defer_np(), pthread_cleanup_pop_restore_np():
_GNU_SOURCE
This results in a linker error (as opposed to a compile-time error) because your compiler is pre-C99 (or you are compiling in pre-C99 mode) and assumes these functions return int by default.
The rule that functions return int if no prototype is present was removed in C99. Enabling more compiler switches can give you better diagnostics.

Gcc and g++ different on write()

#include <sys/syscall.h>
#define BUFSIZE 1024
main()
{
    char buf[BUFSIZE];
    int n;
    while ((n = read(0, buf, BUFSIZE)) > 0)
        write(1, buf, n);
    return 0;
}
When I compile this by using gcc, it is fine.
But using g++ I got:
inandout.c:7:32: error: ‘read’ was not declared in this scope
while((n=read(0,buf,BUFSIZE))>0)
^
inandout.c:8:22: error: ‘write’ was not declared in this scope
write(1,buf,n);
^
Why is that?
This is because gcc is a C compiler, g++ is a C++ compiler, and C and C++ are different languages.
If you want to compile that source code as a C++ program, you must change it to be valid C++. For example, there are no implicit function declarations in C++, so you must include unistd.h for the read() and write() declarations. You also don't need the syscall.h header.
Also, it is only that simple because you have a simple code snippet. Porting C code to C++ can be a nightmare, as there are around 50 differences, and in some cases code compiles in both languages but behaves differently.
P.S.: And instead of defining weird BUFSIZE yourself, consider using standard BUFSIZ :)
You just need to add #include <unistd.h>.
C defaults functions that do not have a prototype to functions that return an int - but you should have got warnings for that (did you use -Wall?).
C++ doesn't allow that, you need to include the correct header file, unistd.h, which you should also do in C.
I upgraded to gcc 4.8.5. Starting with version 4.7, the compiler's standard C++ headers stopped including unistd.h in a number of places. This is why older gcc versions worked without including unistd.h.
https://gcc.gnu.org/gcc-4.7/porting_to.html
"C++ language issues
Header dependency changes
Many of the standard C++ library include files have been edited to no longer include unistd.h to remove namespace pollution. "
In my case I got "::write has not been declared" when I included stdio.h, but my previous gcc version 4.4 compiled fine. This is a useful command to see which paths are searched by the preprocessor: g++ -H test.cpp

Check whether function is declared with C preprocessor?

Is it possible to tell the C preprocessor to check whether a function (not a macro) is declared? I tried the following, but it doesn't appear to work:
#include <stdio.h>

int main(void)
{
#if defined(printf)
    printf("You support printf!\n");
#else
    puts("Either you don't support printf, or this test doesn't work.");
#endif
    return 0;
}
No. The preprocessor runs before the C compiler, and it is the compiler that processes function declarations. The preprocessor is only there for text processing.
However, most header files have include guard macros like _STDIO_H_ that you can test for at the preprocessing stage. That solution is not portable, though, as include guard macro names are not standardized.
If you look at tools like autoconf you will see that they go through many tests to determine what a computer has or doesn't have in order to compile properly, and then they set the appropriate #defines.
You may want to look at that model, and at that tool if you are on some flavor of Unix, as what you want to do isn't possible with the preprocessor alone, as others have undoubtedly pointed out.
Strictly speaking no, the preprocessor can't do it on its own. However, you can give it a little help by creating appropriate #defines automatically.
Normally as was mentioned above, you'd use autotools if on a unix type system. However, you can also create the same effect using a makefile. I recently had cause to detect the "posix_fallocate" function being defined in headers, because I was using uClibc which seemed to omit it in earlier versions. This works in gnu make, but you can probably get a similar thing to work in other versions:
NOFALLOC := $(shell echo "\#include <fcntl.h>\nint main() { posix_fallocate(0,0,0);}" | $(CC) -o /dev/null -Werror -xc - >/dev/null 2>/dev/null && echo 0 || echo 1)
ifeq "$(NOFALLOC)" "1"
DFLAGS += -DNO_POSIX_FALLOCATE
endif
The preprocessor is a simple program and knows next to nothing about the underlying language. It cannot tell if a function has been declared. Even if it could, the function may be defined in another library and the symbol is resolved during linking, so the preprocessor could not help in that regard.
Since the preprocessor is not aware of the C/C++ language (it really only does text replacement), I would guess that this is not possible. Why do you want to do this? Maybe there is another way.
