#include <sys/syscall.h>
#define BUFSIZE 1024
main()
{
char buf[BUFSIZE];
int n;
while((n=read(0,buf,BUFSIZE))>0)
write(1,buf,n);
return 0;
}
When I compile this with gcc, it is fine.
But with g++ I get:
inandout.c:7:32: error: ‘read’ was not declared in this scope
while((n=read(0,buf,BUFSIZE))>0)
^
inandout.c:8:22: error: ‘write’ was not declared in this scope
write(1,buf,n);
^
Why is that?
This is because gcc is a C compiler, g++ is a C++ compiler, and C and C++ are different languages.
If you want to compile that source code as a C++ program, you must change it to be valid C++. For example, there are no implicit function declarations in C++, so you must include unistd.h for the read() and write() declarations. You also don't need the syscall.h header.
Also, it is only that simple because you have a simple code snippet. Porting C code to C++ can be a nightmare: there are roughly 50 incompatibilities between the languages, and in some cases code compiles in both but behaves differently.
P.S.: And instead of defining your own BUFSIZE, consider using the standard BUFSIZ :)
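For reference, here is a sketch of the same program adjusted so that it compiles cleanly under both gcc and g++ (it only relies on the standard headers and BUFSIZ):
#include <stdio.h>      /* BUFSIZ */
#include <unistd.h>     /* read(), write() */
int main(void)
{
    char buf[BUFSIZ];
    ssize_t n;
    while ((n = read(0, buf, BUFSIZ)) > 0)
        write(1, buf, n);
    return 0;
}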
You just need to add #include <unistd.h>.
C (before C99) treats a call to a function with no prototype as a call to a function that returns an int - but you should have gotten warnings for that (did you use -Wall?).
C++ doesn't allow that; you need to include the correct header file, unistd.h, which you should also do in C.
I upgraded to gcc 4.8.5. Starting with version 4.7, the standard C++ library headers stopped including unistd.h in a number of places. This is why older gcc versions compiled this without an explicit include of unistd.h.
https://gcc.gnu.org/gcc-4.7/porting_to.html
"C++ language issues
Header dependency changes
Many of the standard C++ library include files have been edited to no longer include unistd.h to remove namespace pollution. "
In my case I got "::write has not been declared" when I included stdio.h, but my previous gcc version, 4.4, compiled it fine. This is a useful command to see which headers the preprocessor actually pulls in: g++ -H test.cpp
I am doing an assignment which states: "The skeleton code given uses getopt. If you compile the code with -std=c99, there will be compilation error. To fix the error, include -D_POSIX_C_SOURCE=200809 when you compile the code."
I am very new to this. Ordinarily I compile a C program with GCC (program name) and then I type ./a.out.
What am I required to do here?
The sentence “To fix the error, include -D_POSIX_C_SOURCE=200809 when you compile the code” means to include the characters -D_POSIX_C_SOURCE=200809 in the command you use to compile the program.
For example, if you normally use gcc -o foo foo.c, change it to gcc -o foo -D_POSIX_C_SOURCE=200809 foo.c.
This is a command line argument that tells the compiler to define a preprocessor macro named _POSIX_C_SOURCE to be replaced by 200809. This preprocessor macro is used by various header files to adapt to different versions of POSIX (by using #if statements to test the macro). For example, if you specify _POSIX_C_SOURCE to be 200809 or leave it undefined, the headers will not declare routines that were only added to POSIX after the 2008-09 version of POSIX. Among other things, this avoids causing conflicts with programs written before then that might have happened to use names of those routines for other purposes (since they would have had no way of knowing what names POSIX header would define in the future).
You can also define the macro in your source code, before any headers that use it are included, with:
#define _POSIX_C_SOURCE 200809
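For instance, a minimal sketch (the -v and -o options here are made up for illustration, not taken from the assignment's skeleton code):
#define _POSIX_C_SOURCE 200809L   /* must come before any #include */
#include <stdio.h>
#include <unistd.h>               /* declares getopt(), optarg, optind under POSIX.1-2008 */
int main(int argc, char *argv[])
{
    int opt;
    while ((opt = getopt(argc, argv, "vo:")) != -1) {
        switch (opt) {
        case 'v': puts("verbose mode"); break;
        case 'o': printf("output file: %s\n", optarg); break;
        default:  fprintf(stderr, "usage: %s [-v] [-o file]\n", argv[0]); return 1;
        }
    }
    return 0;
}
Compiled with gcc -std=c99 -o foo foo.c, this builds without the missing-declaration error because the macro is already visible when unistd.h is read.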
The getopt function is a relatively recent addition to the Single UNIX Specification/POSIX standard. While Linux doesn't formally comply with POSIX, it does roughly use the standard as a reference point. However, for compatibility it mostly exposes the XSH (System Interfaces) of POSIX '03 or earlier by default. If you want more recent additions exposed (note: @JonathanLeffler points out that getopt has actually been there for quite some time, but Linux doesn't expose it by default anyway), you can tell the GNU libc (the C library commonly used on GNU/Linux systems) to also provide the functionality that is hidden behind feature test macros. Look up the man pages man -s 7 feature_test_macros and man -s 3 getopt for more. Basically, in the respective headers there's some code similar to the following:
#if _POSIX_C_SOURCE >= 200809L
/* declaration of getopt() and other newer functions */
#endif
If you then include that header and do not define the feature test macro to a value at least as recent as the POSIX version you need (2008-09), the C preprocessor will throw away those declarations, and your code will fail to compile.
Using -DFOO=bar on the command line has the same effect as writing #define FOO bar at the top of each file you compile. By the way, -std=c99 also interacts with these macros: it puts the compiler into strict ISO mode, which is exactly why the extra POSIX declarations disappear unless you request them explicitly.
In the end, your command line should look more or less like this:
$ c99 -D_POSIX_C_SOURCE=200809L -o foo foo.c
This will compile and link foo.c into the output file foo, adhering to the C99 standard and using features from POSIX '08.
I am currently writing a C program with threads and I make use of pthread_cleanup_push_defer_np() and pthread_cleanup_pop_restore_np(). Provided that:
I have included pthread.h;
I am compiling with -pthread option;
I am on Ubuntu 14.04 and using gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1);
when compiling I get a nasty "undefined reference" error for the two functions mentioned above, and I can't figure out why. I have taken a look into pthread.h and it seems those two functions are commented out, so I am wondering whether I need to enable them in some way or use some other kind of option. I've read the manual pages and googled it, but I can't find a solution, so I would appreciate a little help. Here is a snippet:
void *start_up(void *arg)
{
    char *timestamp;
    // ... code ...
    timestamp = get_current_timestamp("humread");
    pthread_cleanup_push_defer_np(free, timestamp);
    // ... some other code ...
    pthread_cleanup_pop_restore_np(1);
    // ... more code ...
}
I compile with
gcc -pthread -o server *.c
The manual pages for pthread_cleanup_push_defer_np and pthread_cleanup_pop_restore_np say these two (non-portable) functions are GNU extensions and are enabled by defining _GNU_SOURCE:
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
pthread_cleanup_push_defer_np(), pthread_cleanup_pop_restore_np():
_GNU_SOURCE
This results in a linker error (as opposed to a compile-time error) because your compiler is pre-C99 (or you are compiling in pre-C99 mode) and assumes, by default, that these functions return int.
The rule that functions return int if no prototype is present was removed in C99. Enabling more compiler switches (such as -Wall) will give you better diagnostics.
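A minimal sketch of the fix, assuming all you need is to free a buffer even if the thread is cancelled (the strdup'd string below stands in for get_current_timestamp(), which is not shown in the question):
#define _GNU_SOURCE              /* must come before any #include to expose the _np macros */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
static void *start_up(void *arg)
{
    char *timestamp = strdup("placeholder timestamp");   /* stand-in for get_current_timestamp() */
    (void)arg;
    pthread_cleanup_push_defer_np(free, timestamp);
    puts(timestamp);                                      /* ... work that might get cancelled ... */
    pthread_cleanup_pop_restore_np(1);                    /* 1 = run the handler, freeing timestamp */
    return NULL;
}
int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, start_up, NULL);
    pthread_join(t, NULL);
    return 0;
}
Built as before with gcc -pthread, the undefined references go away because pthread.h now exposes the macros.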
I have not made much effort to discover the cause, but gcc 4.8.1 is giving me a lot of trouble compiling old sources that combine C and C++ plus some new C++11 stuff.
I've managed to isolate the problem in this piece of code:
# include <argp.h>
# include <algorithm>
which compiles fine with g++ -std=c++0x -c -o test-temp.o test-temp.C version 4.6.3, ubuntu 12.04
By contrast, with version 4.8.1, the same command line throws a lot of errors:
In file included from /home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/x86intrin.h:30:0,
from /home/lrleon/GCC/include/c++/4.8.1/bits/opt_random.h:33,
from /home/lrleon/GCC/include/c++/4.8.1/random:51,
from /home/lrleon/GCC/include/c++/4.8.1/bits/stl_algo.h:65,
from /home/lrleon/GCC/include/c++/4.8.1/algorithm:62,
from test-temp.C:4:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h: In function ‘__m64 _mm_cvtsi32_si64(int)’:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h:61:54: error: can’t convert between vector values of different size
return (__m64) __builtin_ia32_vec_init_v2si (__i, 0);
^
... and much more.
The same happens if I execute g++ -std=c++11 -c -o test-temp.o test-temp.C (again, with version 4.8.1).
But, if I swap the header lines, that is
# include <algorithm>
# include <argp.h>
then all compiles fine.
Can someone enlighten me as to what is happening?
I ran into the same problem. As it was really annoying, I tracked it down to <argp.h>.
This is the code (in the standard glibc header argp.h) which triggers the error on Ubuntu 14.04 / gcc 4.8.2:
/* This feature is available in gcc versions 2.5 and later. */
# if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 5) || __STRICT_ANSI__
# define __attribute__(Spec) /* empty */
# endif
This is probably there to keep the header compatible with old gcc versions AND with strict ANSI compilers. The problem is that -std=c++11 defines the __STRICT_ANSI__ macro.
I commented out the #define __attribute__(Spec) line and the compilation worked fine!
As it is not practical to edit a system header, a workaround is to use g++ -std=gnu++11 instead of g++ -std=c++11, since gnu++11 does not define __STRICT_ANSI__. It worked in my case.
It seems to be a bug in gcc.
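If it helps, a small sketch of the two workarounds mentioned above, reordering the includes or building with gnu++11 so that __STRICT_ANSI__ stays undefined:
// test-temp.C
#include <algorithm>   // C++ standard headers first ...
#include <argp.h>      // ... then argp.h, so its empty __attribute__ can no longer affect the intrinsics headers
int main() { return 0; }
// Either of these now compiles:
//   g++ -std=c++11  -c -o test-temp.o test-temp.C   (works because the includes are reordered)
//   g++ -std=gnu++11 -c -o test-temp.o test-temp.C  (works even with the original include order)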
This is a known bug, apparently some headers are missing extern "C" declarations at the right places:
I also just came across this issue with GCC 4.7.2 on Windows. It appears that all the intrin.h headers are missing the extern "C" part. Since the functions are always inline and thus the symbols never show up anywhere this has not been a problem before. But now that another header declares those functions a second time something must be done.
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56038
Two things come to mind:
1) It's a missing extern "C" in headers, which is not that rare.
2) Something is wrong with data alignment. Perhaps you are using an STL container to store SSE types, which doesn't guarantee alignment for them. In that case you should implement a custom allocator that uses an aligned allocation routine (e.g. posix_memalign). But I would expect that case to compile fine and give you a segfault at runtime instead - who knows what compilers can detect these days :)
Here's something you might want to read on that topic: About memory alignment; About custom allocators
P.S.: a piece of your code would be nice.
I have these two files:
// first.c
int main(void) {
    putint(3);
}
and
// second.c
#include <stdio.h>
void putint(int n) {
printf("%d",n);
getchar();
}
When I run gcc 4.6.1 under Win XP:
gcc first.c second.c -o program.exe
It has no problem and writes 3 to stdout. It doesn't need a putint declaration in first.c. How is this possible? Is this standard behavior?
I have tested this on MSVC 2008 Express, and it runs only with the declaration, as expected.
// first.c
void putint(int);
int main(void) {
    putint(3);
}
Solved, thanks for hints, these options helped to show the warning:
-Wimplicit
-std=c99 (MinGW 4.6 still uses gnu90 by default)
This is a legacy "feature" of C that should not be used as of several decades ago. You should use a compiler with settings that will warn you if you do something like this. Gcc has several switches that you should specify when using it & one of them will give you a warning for this.
Edit: I haven't been using gcc myself, but switches that you should check out are -pedantic, -Wall, -Wextra, and -std.
The compiler that accepts this is assuming, per the old language definition, that since you didn't see fit to tell it otherwise, the function a) returns an int and b) expects an int argument, since that's what you pass it (or something that promotes to int).
As @veer correctly points out, this should generally work in your particular case. In other cases, however, differences between the implicit assumptions for a function without a prototype and the function's actual signature would make things go boom.
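A hypothetical sketch of how it can go boom (the function name and files are made up): the call is compiled assuming an int return, but the definition actually returns a double, which is undefined behaviour.
/* first.c */
#include <stdio.h>
int main(void)
{
    double d = half(5);   /* implicitly declared as int half(int) under C89/gnu89 */
    printf("%f\n", d);    /* the caller reads an int where a double was returned: garbage */
    return 0;
}
/* second.c */
double half(int n)
{
    return n / 2.0;
}
With gcc first.c second.c in gnu89 mode this compiles with at most a warning but typically prints nonsense, exactly the kind of mismatch a prototype would have caught.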
This isn't just MinGW; all standard versions of gcc behave this way. As noted, this is legal in C89; gcc defaults to 'gnu89' (not gnu99), which also accepts the code without warning. If you switch to c99 or gnu99 (or later, such as c11) you'll get a warning by default, but it will still compile.
As noted by others, this is standard behavior for conforming C compilers. Naming your files .c is what (partially) puts the compiler into C mode, where you get fun things like "built-in functions" (printf() etc.) and all sorts of legacy C behavior.
I'd like to add something I experienced recently, though. MS expressly dropped support for C past C90, and their C90 support is poor, to say the least. I'm not entirely sure a standard ANSI C90 codebase would compile under newer versions of Visual Studio, because it is basically the C++ compiler with lots of stuff disabled (whereas GCC has an actual C compiler). They did this to promote C++. If you want to use real C, you can't really do it in any edition of MS Visual Studio, unless you are willing to declare all your variables at the start of functions, and so on.
Recently I've come across a problem in my project. I normally compile it with gcc 4, but after trying to compile it with gcc 3, I noticed a different treatment of inline functions. To illustrate this I've created a simple example:
main.c:
#include "header.h"
#include <stdio.h>
int main()
{
printf("f() %i\n", f());
return 0;
}
file.c:
#include "header.h"
int some_function()
{
    return f();
}
header.h:
inline int f()
{
    return 2;
}
When I compile the code in gcc-3.4.6 with:
gcc main.c file.c -std=c99 -O2
I get a linker error (multiple definition of f), and the same happens if I remove the -O2 flag. I know the compiler does not have to inline anything if it doesn't want to, so I assumed it placed f in the object files of both main.c and file.c instead of inlining it, hence the multiple definition error. Obviously I could fix this by making f static, accepting, in the worst case, a few copies of f in the binary.
But I tried compiling this code in gcc-4.3.5 with:
gcc main.c file.c -std=c99 -O2
And everything worked fine, so I assumed the newer gcc inlined f in both cases and there was no function f in the binary at all (checked in gdb and I was right).
However, when I removed the -O2 flag, I got two undefined references to int f().
And here I really don't understand what is happening. It seems like gcc assumed f would be inlined, so it didn't add it to the object file, but later (because there was no -O2) it decided to generate calls to the function instead of inlining it, and that's where the linker errors came from.
Now comes the question: how should I define and declare simple and small functions, which I want inline, so that they can be used throughout the project without the fear of problems in various compilers? And is making all of them static the right thing to do? Or maybe gcc-4 is broken and I should never have multiple definitions of inline functions in a few translation units unless they're static?
Yes, the behavior changed from gcc 4.3 onwards. The gcc inline documentation (http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Inline.html) details this.
Short story: plain inline only tells gcc (in the old semantics, anyway) to inline calls to f() within the same file scope. However, it does not tell gcc that all callers are in that file scope, so gcc also keeps a linkable version of f() around, which explains your duplicate symbol error above. Gcc 4.3 changed this behavior to be compatible with C99.
And, to answer your specific question:
how should I define and declare simple and small functions, which I want inlined, so that they can be used throughout the project without the fear of problems in various compilers?
If you want portability across gcc versions, use static inline.
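A minimal sketch of that approach, applied to the header from the question:
/* header.h */
#ifndef HEADER_H
#define HEADER_H
/* static inline: every translation unit gets its own internal-linkage copy,
   so there can be neither "multiple definition" nor "undefined reference"
   errors, with or without -O2, in gcc 3 and gcc 4 alike. */
static inline int f(void)
{
    return 2;
}
#endif /* HEADER_H */
Both main.c and file.c can then include header.h and call f() without any special flags.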