I'm trying to convert an integer to a character to write to a file, using this line:
fputc(itoa(size, tempBuffer, 10), saveFile);
and I receive this warning and message:
warning: implicit declaration of 'itoa'
undefined reference to '_itoa'
I've already included stdlib.h, and am compiling with:
gcc -Wall -pedantic -ansi
Any help would be appreciated, thank you.
itoa is not part of the standard. I suspect either -ansi is preventing you from using it, or it's not available at all.
I would suggest using sprintf()
If you go with the C99 standard, you can use snprintf(), which is of course safer.
char buffer[12];
int i = 20;
snprintf(buffer, 12, "%d", i);
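Putting it together for the original goal (a sketch, reusing size and saveFile from the question, which are assumed to be an int and an open, writable FILE* respectively):
char buffer[12]; /* 12 chars cover a 32-bit int, its sign, and the terminator */
int written = snprintf(buffer, sizeof buffer, "%d", size);
if (written > 0 && written < (int)sizeof buffer)
    fputs(buffer, saveFile); /* write the digits as text instead of a single fputc */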
This here tells you that during the compilation phase itoa is unknown:
warning: implicit declaration of 'itoa'
so if this function is present on your system, you are missing a header file that declares it. The compiler then assumes that it is a function that takes an unspecified number of arguments and returns an int.
This message from the loader phase
undefined reference to '_itoa'
tells you that the loader also cannot find such a function in any of the libraries it knows about.
So you should perhaps follow Brian's advice to replace itoa by a standard function.
According to C How to Program (Deitel):
Standard library functions like printf and scanf are not part of the C programming language. For example, the compiler cannot find a spelling error in printf or scanf. When the compiler compiles a printf statement, it merely provides space in the object program for a “call” to the library function. But the compiler does not know where the library functions are—the linker does. When the linker runs, it locates the library functions and inserts the proper calls to these library functions in the object program. Now the object program is complete and ready to be executed. For this reason, the linked program is called an executable. If the function name is misspelled, it is the linker which will spot the error, because it will not be able to match the name in the C program with the name of any known function in the libraries.
These statements leave me doubtful because of the existence of header files. These files are included during the preprocessing phase, before the compilation phase, and, as I have read, they are used by the compiler.
So if I write print instead of printf, how can the compiler not see that there is no function declared with that name and throw an error?
If it is as the book says, why can I declare functions in header files if the compiler doesn't look at them?
So if I write print instead of printf, how can the compiler not see that there is no function declared with that name and throw an error?
You are right. If you made a typo in any function name, any modern compiler should complain about it. For example, gcc complains for the following code:
$ cat test.c
int main(void)
{
    unknown();
    return 0;
}
$ gcc -c -Wall -Wextra -std=c11 -pedantic-errors test.c
test.c: In function ‘main’:
test.c:3:5: error: implicit declaration of function ‘unknown’ [-Wimplicit-function-declaration]
unknown();
^
However, in the pre-C99 era of the C language, any function whose declaration isn't seen by the compiler is assumed to return an int. So, if you are compiling in pre-C99 mode, a compiler isn't required to warn about it.
Fortunately, this implicit int rule was removed from the C language since C99 and a compiler is required to issue a diagnostic for it in modern C (>= C99).
But if you provide only a declaration or prototype for the function:
$ cat test.c
int unknown(void); /* function prototype */
int main(void)
{
    unknown();
    return 0;
}
$ gcc -c -Wall -Wextra -std=c11 test.c
$
(Note: I have used the -c flag to compile only, without linking; if you don't use -c, compiling and linking are done in a single step and the error would still come from the linker.)
There's no issue despite the fact that you do not have a definition for unknown() anywhere. This is because the compiler assumes unknown() has been defined elsewhere; only when the linker tries to resolve the symbol unknown will it complain if it can't find the definition of unknown().
Typically, the header file(s) only provide the necessary declarations or prototypes (I have provided a prototype for unknown directly in the file itself in the above example -- it might as well be done via a header file) and usually not the actual definition. Hence, the author is correct in the sense that the linker is the one that spots the error.
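As an illustration of that split (the file names here are made up), the prototype would typically live in a header and the definition in a separate translation unit that the linker later resolves:
/* unknown.h -- declaration only; the compiler checks calls against this */
int unknown(void);

/* unknown.c -- the definition the linker has to find at link time */
#include "unknown.h"
int unknown(void) { return 42; }

/* main.c -- compiles fine on its own; linking fails if unknown.o is missing */
#include "unknown.h"
int main(void) { return unknown(); }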
So if I write print instead of printf, how can the compiler not see that there is no function declared with that name and throw an error?
The compiler can see that there is no declaration in scope for the identifier designating the function. Most will emit a warning under those circumstances, and some will emit an error, or can be configured to do so.
But that's not the same thing as the compiler detecting that the function doesn't exist. It's the compiler detecting that the function name has not been declared. The compiler will exhibit the same behavior if you spell the function name correctly but do not include a prior declaration for it.
Furthermore, C90 and pre-standardization C permitted calls to functions without any prior declaration. Such calls do not conform to C99 or later, but most compilers still do accept them (usually with a warning) for compatibility purposes.
If it is as the book says, why can I declare functions in header files if the compiler doesn't look at them?
The compiler does see them, and does use the declarations. Moreover, it relies on the prototype, if the declaration provides one, to perform the appropriate argument and return value conversions when you call the function. Also, if you use functions whose argument types are altered by the default argument promotions, then your calls to such functions are non-conforming if no prototype is in scope at the point of the call, and undefined behavior results.
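A small illustration of why the prototype matters (half() is a hypothetical function, not from the question): with the prototype in scope the compiler converts the int argument to double; without it, an unconverted int would be passed and the behavior would be undefined.
double half(double x); /* prototype: argument conversions apply at the call */

int main(void)
{
    double h = half(2); /* 2 is converted to 2.0 because the prototype is visible */
    return (int)h;
}

double half(double x) { return x / 2.0; }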
For some bizarre reason, when I try to use the function get_current_dir_name with MinGW GCC compiler,
I get this result on linkage:
undefined reference to `get_current_dir_name'
collect2.exe: error: ld returned 1 exit status
But, I get this only when using the function like this
printf("%i", get_current_dir_name());
or this
printf("%s", get_current_dir_name());
When I try to do
printf(get_current_dir_name());
I get this, which makes no sense, because the function returns a char *, according to docs:
tester.c: In function 'main':
tester.c:16:2: warning: passing argument 1 of 'printf' makes pointer from integer without a cast [enabled by default]
printf(get_current_dir_name());
^
In file included from tester.c:1:0:
c:\mingw\include\stdio.h:294:37: note: expected 'const char *' but argument is of type 'int'
_CRTIMP int __cdecl __MINGW_NOTHROW printf (const char*, ...);
Google seems to really dislike talking about C, because I can find how to get the working directory in almost any existing language except C. The only things that pop up are some docs which describe 3 functions: getcwd, getwd, and get_current_dir_name. I really want to use get_current_dir_name because of its cleanness.
How do I deal with this? Is this a minGW bug? Or am I missing something?
You apparently failed to include any header that contains a declaration of get_current_dir_name(). Thus, the compiler will assume a return value of int, which is not a valid first argument for printf() (you should increase the warning levels so you'll get an error instead of just a warning).
Furthermore, linking fails, so you also do not link against a library that implements the function, which is expected: get_current_dir_name() is a GNU extension and not part of the C standard library.
On Windows, you need to use the equivalent functionality provided by the Windows API, i.e. GetCurrentDirectory(), declared in windows.h.
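A minimal sketch of that approach (assuming a desktop Windows target; GetCurrentDirectoryA is the ANSI variant, which keeps the buffer a plain char array):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char buf[MAX_PATH];
    DWORD len = GetCurrentDirectoryA(MAX_PATH, buf); /* returns 0 on failure */
    if (len > 0 && len < MAX_PATH)
        printf("%s\n", buf);
    return 0;
}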
My compiler (gcc) throws warnings (not errors!) on the line which declares fp:
int fd = open("filename.dat", O_RDONLY);
FILE* fp = fdopen(fd, "r"); // get a file pointer fp from the file descriptor fd
These are the warnings:
main.c: In function ‘main’:
main.c:606: warning: implicit declaration of function ‘fdopen’
main.c:606: warning: initialization makes pointer from integer without a cast
I do not understand these warnings, since the return value of fdopen is a FILE*. What is the mistake I am making here?
EDIT: I am including stdio.h (and I am also on Linux).
Short answer: use -std=gnu99 when compiling; the strict standard mode is non-POSIX and does not declare fdopen.
warning: implicit declaration of function ‘fdopen’
This means you forgot to include the header file in which the declaration of fdopen() resides. Then an implicit declaration by the compiler occurs, and that means the return value of the unknown function will be assumed to be int -- thus the second warning. You have to write
#include <stdio.h>
Edit: if you properly include stdio.h, then fdopen() might not be available on the system you're targeting. Are you on Windows? This function is POSIX-only.
Edit 2: Sorry, I really should have noticed this. -std=c99 means strict ISO C99, and standard C doesn't impose the concept of file descriptors (in order to support non-POSIX systems), so it provides fopen() only. fdopen() works with file descriptors, so it's POSIX-only and not part of standard C99. If you use the -std=gnu99 switch for GCC, it lifts the strict-standard restrictions and lets in the POSIX and GNU-only extensions, essentially fixing your problem.
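For example (the warning flags are just a suggestion):
gcc -std=gnu99 -Wall -Wextra main.c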
#define _XOPEN_SOURCE 600
#include <stdio.h>
This conforms perfectly with strict C99:
gcc -std=c99 -pedantic -Wall -Wextra -Werror
You are not including #include <stdio.h>. In C, the compiler therefore "guesses" the declaration of the function you're trying to call (taking the parameters you've passed and using int as the return value). Usually you don't want such guesses, so the compiler warns you.
Solution: Add proper #includes.
The fdopen function is not part of the C standard and is not available as part of the standard headers if you compile in standard C mode. So you either need to use -std=gnu99 instead of -std=c99 to compile your source or declare the function yourself.
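If you do declare it yourself, the POSIX prototype is the following -- though letting stdio.h provide it via a feature-test macro, as other answers here show, is the cleaner route:
FILE *fdopen(int fildes, const char *mode);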
There's a good explanation of the compiler's diagnostic in #H2CO3's answer, so let's only look at the why of things: if you're using glibc (and you probably are), certain POSIX functions may require specific feature test macros to show up.
In particular, you may need to put the following line:
#define _POSIX_SOURCE
// or #define _XOPEN_SOURCE
before
#include <stdio.h>
Certain compilers (such as gcc) also have command line options to the same effect (all the gnu* standards options in gcc).
I am a little puzzled. I have a project that I compile with
CFLAGS=-g -O2 -Wall -Wextra -Isrc/main -pthread -rdynamic -DNDEBUG $(OPTFLAGS) -D_FILE_OFFSET_BITS=64 -D_XOPEN_SOURCE=700
Now I want to use mkdtemp and therefore include unistd.h
char *path = mkdtemp(strdup("/tmp/test-XXXXXX"));
On Mac OS X, the compilation gives some warnings
warning: implicit declaration of function ‘mkdtemp’
warning: initialization makes pointer from integer without a cast
but it compiles through. While mkdtemp does return a non-NULL path, accessing it results in an EXC_BAD_ACCESS.
Question 1: The template is strdup()ed and the result is non-NULL. How on earth can this result in an EXC_BAD_ACCESS?
Now further down the rabbit hole. Let's get rid of the warnings. Checking unistd.h, I find the declaration hidden by the preprocessor:
#if !defined(_POSIX_C_SOURCE) || defined(_DARWIN_C_SOURCE)
...
char *mkdtemp(char *);
...
#endif
Adding -D_DARWIN_C_SOURCE to the build makes all the problems go away but leaves me with a platform-specific build. The 10.6 man page just says
Standard C Library (libc, -lc)
#include <unistd.h>
Removing the _XOPEN_SOURCE from the build makes it work on OS X, but then it fails to compile under Linux with
warning: ‘struct FTW’ declared inside parameter list
warning: its scope is only this definition or declaration, which is probably not what you want
In function ‘tmp_remove’:
warning: implicit declaration of function ‘nftw’
error: ‘FTW_DEPTH’ undeclared (first use in this function)
error: (Each undeclared identifier is reported only once
error: for each function it appears in.)
error: ‘FTW_PHYS’ undeclared (first use in this function)
Question 2: So how would you fix this?
The only fix I have found is to #undef _POSIX_C_SOURCE right before the unistd.h include ...but that feels like an ugly hack.
You've asked two questions here, and I'm just going to answer the first:
Question 1: The template is strdup()ed and the result is non-NULL. How on earth can this result in an EXC_BAD_ACCESS?
As the warnings above tell you:
warning: implicit declaration of function ‘mkdtemp’
This means it couldn't find the declaration for mkdtemp. Under the old (pre-C99) C rules, that's allowed, but the compiler then assumes the function returns an int.
warning: initialization makes pointer from integer without a cast
You've told the compiler "I've got a function that returns int, and I want to store the value in a char*". It's warning you that this is a bad idea. You can still do it, and therefore it compiles.
But think about what happens at runtime. The actual code you link to returns a 64-bit char*. Then your code treats that as a 32-bit int that it has to cast to a 64-bit char*. How likely is that to work?
This is why you don't ignore warnings.
And now for the second question:
Question 2: So how would you fix this?
Your problem is that you're explicitly passing -D_XOPEN_SOURCE=700, but you're using a function, mkdtemp, that isn't defined in the standard you're demanding. That means your code shouldn't work. The fact that it does work on Linux doesn't mean your code is correct or portable, just that you happened to get lucky on one platform.
So, there are two rather obvious ways to fix this:
If you want to use _XOPEN_SOURCE=700, rewrite your code to only use functions that are in that standard.
If you've only added _XOPEN_SOURCE=700 as a hack that you don't really understand because it seemed to fix some other problem on Linux, find the right way to fix that problem on Linux.
It may turn out that there's a bug on one platform or another so there just is no right way to fix it. Or, more likely, you're using a combination of non-standard functions that can be squeezed in on different platforms with a different set of flags on each. In that case, your Makefile (or whatever drives the build) will have to pass different flags to the compiler on different platforms. This is pretty typical for cross-platform projects; just be glad you only have one flag to worry about, and aren't building 3000 lines worth of autoconf.
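As one illustration of that per-platform switch (a sketch, not necessarily the cleanest fix), the same decision can also be pushed into the source instead of the Makefile; __APPLE__ is predefined by the compilers on OS X:
/* Keep _XOPEN_SOURCE for Linux, and additionally request the Darwin
   extensions on Apple platforms so mkdtemp() stays visible. */
#if defined(__APPLE__) && !defined(_DARWIN_C_SOURCE)
#define _DARWIN_C_SOURCE
#endif
#include <unistd.h>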
I read in the C99 Standard that it:
-removed implicit function declaration,
-removed implicit int.
But when I try to compile this code with the gcc compiler in C99 mode using -pedantic
main(void){
    f(3);
    return 0;
}
int f(int a){
    ....
}
I expect 2 errors, but I just receive 2 warnings:
-warning: return type defaults to ‘int’
-warning: implicit declaration of function ‘f’.
Shouldn't they be errors in C99?
http://gcc.gnu.org/c99status.html
For both of these items, the status there says "done".
Thanks.
The C standard requires a diagnostic for any translation unit containing a violation of a syntax rule or constraint. It does not require such diagnostics to be fatal; the compiler is free to continue processing the source file. The behavior of the resulting executable, if any, is undefined. The standard makes no distinction between warnings and fatal errors.
(The only thing that requires a compiler to reject a source file is the #error directive.)
Conclusion: when compiling C, take warnings very seriously.
I don't believe the compiler is required to produce a fatal error. Use -Werror if you're concerned...
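For example, with gcc you can promote just that diagnostic to a hard error (the file name is a placeholder):
gcc -std=c99 -pedantic -Werror=implicit-function-declaration file.c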
Two points: first, it may (usually does) take a specific set of flags to get a compiler to conform with the standard.
Second, all that's required by the standard is that the implementation issue a "diagnostic" in the case of an error -- but it's up to the implementation to define what is or isn't a diagnostic. It's free to say a "warning" is a diagnostic if it wants to. When a diagnostic is issued, it may quit compiling, or it can compile the code anyway.
Bottom line: what it's doing is probably enough to conform, for whatever that's worth.