This program goes against everything I have been taught and learned in C. How does this compile? Why does this not need to be int main? Why no return 0? Don't you need an initial declaration of sub() above main? That bugs the crap out of me. I like keeping my functions above main.
#include <stdio.h>

main()
{
    sub ();
    sub ();
}

sub()
{
    static int y = 5;
    printf(" y is %d \n",y);
    y++;
}
The gcc version is:
gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
This seems to be an old version, but not too crazy old.
https://www.gnu.org/software/gcc/releases.html
How do I check if this is c90 or c89?
This code uses an obsolete feature of early C called implicit int. Its only remaining use is in code-golf competitions. Indeed, even variables may be declared this way; the variable y might just as easily have been declared:
static y = 5;
A function may be called without having been prototyped. The function is assumed to receive exactly the number of arguments passed, subject to the "usual promotions". Any type smaller than an int is promoted to int, and floats are promoted to double.
So the functions behave as if they were prototyped as:
int main(void);
int sub(void);
To return any type other than int, the return type must be specified.
You can specify the standard you wish to use when compiling.
gcc -ansi
gcc -std=c99
And add -pedantic to make gcc believe that you really mean it.
Oddly enough, this code does not conform strictly to any standard. C99 disallows implicit int, but permits eliding return 0; from main. C90 or "ansi" C allows implicit int, but requires a return. So, a return should definitely be there.
Btw, C89 is exactly the same thing as C90. It took a while for both hemispheres of the world to agree. Timezones and meridians and such. It's the same standard.
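For reference, here is a sketch of the same program rewritten so that it is valid under both C90 and C99: explicit int return types, a declaration of sub above main (so the definition can stay below it), and an explicit return from main.
#include <stdio.h>

int sub(void);

int main(void)
{
    sub();
    sub();
    return 0;
}

int sub(void)
{
    static int y = 5;
    printf(" y is %d \n", y);
    y++;
    return 0;
}
This compiles cleanly with gcc -ansi -pedantic as well as gcc -std=c99 -pedantic.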
It does not compile. If you don't tell gcc to be a strictly conforming C compiler but just call it with gcc test.c, then it will remain a completely non-standard compiler which allows a whole lot of weird things. It does not conform to any known C standard when left with its default setting.
gcc -std=c11 -pedantic-errors gives:
test.c:3:1: error: return type defaults to 'int'
main()
^
test.c: In function 'main':
test.c:5:4: error: implicit declaration of function 'sub' [-Wimplicit-function-declaration]
sub ();
^
test.c: At top level:
test.c:9:1: error: return type defaults to 'int'
sub()
^
test.c: In function 'sub':
test.c:14:1: warning: control reaches end of non-void function [-Wreturn-type]
}
@Lundin, please note:
main()
is not equivalent to
int main(void)
because void means there are no parameters, whereas () means the parameters are unspecified, so a call may pass any number of arguments.
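To make the distinction concrete, here is a small sketch (the function names are made up for illustration) of how a compiler treats the two forms differently under C17 and earlier; C23 changes the meaning of an empty parameter list:
/* () : no information about the parameters, so the compiler checks nothing */
void unchecked();
/* (void) : explicitly takes no arguments */
void checked(void);

void unchecked() { }
void checked(void) { }

int main(void)
{
    unchecked(1, 2.0);  /* accepted by the compiler, although passing arguments
                           to a function defined with none is undefined behaviour */
    checked();          /* checked(1); would be rejected outright */
    return 0;
}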
Related
While fiddling with old, strange compatibility behaviors of C, I ended up with this piece of code:
#include <stdio.h>
int f();
int m() {
    return f();
}
int f(int a) {
    return a;
}
int main() {
    f(2);
    printf("%i\n", m());
}
I'm sure that the call to f() in m() is undefined behavior, as f() should take exactly one argument, but:
on x86, both GCC 9.1 and clang 8.0.1 show no warning (not even with -Wextra, -Weverything, or the like) except when using GCC with -O3. The output is then 2 without -O3 and 0 with it. On Windows, MSVC doesn't print any error and the program outputs just random numbers.
on ARM (Raspberry Pi 3), with GCC 6.3.0 and clang 3.8.1, I observe the same behavior for warnings; with -O3 the output is still 0, but normal compilation leads to 2 with GCC and... 66688 with clang.
When the warning is present, it's pretty much what you would expect (amusingly, a does not even appear in the printed line):
foo.c: In function ‘m’:
foo.c:4:9: warning: ‘a’ is used uninitialized in this function [-Wuninitialized]
return f();
^~~
foo.c: In function ‘main’:
foo.c:11:2: warning: ‘a’ is used uninitialized in this function [-Wuninitialized]
printf("%i\n", m());
My guess is that -O3 leads GCC to inline the calls, thus making it understand that a problem occurs; and that the leftovers on the stack or in the registers are used as if they were the argument to the call. But how can it still compile? Is this really the (un)expected behavior?
The specific rule violated is C 2018 6.5.2.2 (Function calls) 6:
If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions. If the number of arguments does not equal the number of parameters, the behavior is undefined.…
Since this is not a constraint, the compiler is not required to produce a diagnostic—the behavior is entirely undefined by the C standard, meaning the standard imposes no requirements at all.
Since the standard imposes no requirements, both ignoring the issue (or failing to recognize it) and diagnosing a problem are permissible.
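As a sketch of how the original snippet changes once f has a real prototype: the mismatched call becomes a constraint violation that the compiler must diagnose, instead of silent undefined behaviour.
#include <stdio.h>

int f(int a);   /* full prototype visible before any call */

int m(void) {
    return f(2);    /* writing f() here would no longer compile */
}

int f(int a) {
    return a;
}

int main(void) {
    printf("%i\n", m());
    return 0;
}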
I know I shouldn't define any function like that. But I like to try the limits, so I wrote some code like that:
#include <stdio.h>

void func(a)
{
    printf("%d\n", a);
}

int main()
{
    func();
    func();
    func();
    return 0;
}
I really wasn't looking for any outputs, but the one I saw was very odd.
The output is
1
0
0
Why? Where did those 1s and 0s come from?
(I'm using the Cygwin & Eclipse duo, btw.)
Your program is invalid.
gcc 8.2 complains:
$ gcc -Wall -Wextra -pedantic -std=c17 t.c
test.c: In function ‘func’:
test.c:3:6: warning: type of ‘a’ defaults to ‘int’ [-Wimplicit-int]
void func(a)
Since C99, all functions require their arguments to have valid types (there used to be an "implicit int" rule: function argument and return types are assumed to be int if not specified). But your program is not valid in C89 either, because you don't actually pass any argument. So what you see is the result of undefined behaviour.
Prior to the 1989 ANSI C standard, the only way to define a function in C did not specify the number and types of any parameters. (This old version of the language is called "K&R C", since it's described in the first edition of The C Programming Language by Kernighan and Ritchie.) You're using an old-style function definition:
void func(a)
{
    printf("%d\n", a);
}
Here a is the name of the parameter. It is implicitly of type int, but that information is not used in determining the correctness of a call to func.
C89 added prototypes, which allow you to specify the types of parameters in a way that is checked when the compiler sees a call to the function:
void func(int a)
{
    printf("%d\n", a);
}
So in the 1989 version of the C language, your program is legal (in the sense that there are no errors that need to be diagnosed at compile time), but its behavior is undefined. func will probably grab whatever value happens to be in the location in which it expects the caller to place the value of the argument. Since no argument was passed, that value will be garbage. But the program could in principle do literally anything.
The 1999 version of the language dropped the "implicit int" rule, so a is no longer assumed to be of type int even with the old-style definition.
Even the latest 2011 version of C hasn't dropped old-style function definitions and declarations, but they've been officially obsolescent since 1989.
Bottom line: Always use prototypes. Old-style function definitions and declarations are still legal, but there is never (or very rarely) any good reason to use them. Unfortunately your compiler won't necessarily warn you if you use them accidentally.
(You're also using an old-style definition for main. int main() is better written as int main(void). This is less problematic than it is for other functions, since main is not normally called, but it's still a good habit.)
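For completeness, a sketch of what the corrected program might look like (the argument values 1, 2, 3 are arbitrary, chosen only to show that every call must now supply one):
#include <stdio.h>

/* With a prototyped definition, a call to func without an argument is a
   compile-time error, so the garbage values can no longer appear. */
void func(int a)
{
    printf("%d\n", a);
}

int main(void)
{
    func(1);    /* func(); would now be rejected by the compiler */
    func(2);
    func(3);
    return 0;
}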
When you don't specify a type for a parameter, the compiler takes int as the default type.
warning: type of 'a' defaults to 'int' [-Wimplicit-int]
void func(a)
Hence it prints whatever indeterminate value the int a happens to contain.
int fn();

void whatever()
{
    (void) fn();
}
Is there any reason for casting an unused return value to void, or am I right in thinking it's a complete waste of time?
David's answer pretty much covers the motivation for this: to explicitly show other "developers" that you know this function returns something but you're explicitly ignoring it.
This is a way to ensure that, where necessary, error codes are always handled.
I think for C++ this is probably the only place where I prefer to use C-style casts too, since using the full static_cast notation just feels like overkill here. Finally, if you're reviewing or writing a coding standard, it's also a good idea to explicitly state that calls to overloaded operators (not using function call notation) should be exempt from this:
class A {};
A operator+(A const &, A const &);

int main () {
    A a;
    a + a;                  // Not a problem
    (void)operator+(a, a);  // Using function call notation - so add the cast.
}
At work we use that to acknowledge that the function has a return value but the developer has asserted that it is safe to ignore it. Since you tagged the question as C++ you should be using static_cast:
static_cast<void>(fn());
As far as the compiler goes, casting the return value to void has little meaning.
The true reason for doing this dates back to a tool used on C code, called lint.
It analyzes code looking for possible problems and issuing warnings and suggestions. If a function returned a value which was then not checked, lint would warn in case this was accidental. To silence lint on this warning, you cast the call to (void).
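In C the idiom looks like this small sketch; printf is used here only because it is a familiar function whose return value (the number of characters written) is routinely ignored:
#include <stdio.h>

int main(void)
{
    /* the cast documents that the return value is discarded on purpose */
    (void)printf("hello, world\n");
    return 0;
}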
Casting to void is used to suppress compiler warnings for unused variables and unsaved return values or expressions.
The Standard (2003) says in §5.2.9/4:
Any expression can be explicitly converted to type “cv void.” The expression value is discarded.
So you can write :
//suppressing unused variable warnings
static_cast<void>(unusedVar);
static_cast<const void>(unusedVar);
static_cast<volatile void>(unusedVar);
//suppressing return value warnings
static_cast<void>(fn());
static_cast<const void>(fn());
static_cast<volatile void>(fn());
//suppressing unsaved expressions
static_cast<void>(a + b * 10);
static_cast<const void>(x && y || z);
static_cast<volatile void>(m | n + fn());
All forms are valid. I usually make it shorter as:
//suppressing expressions
(void)(unusedVar);
(void)(fn());
(void)(x && y || z);
It's also okay.
Since c++17 we have the [[maybe_unused]] attribute which can be used instead of the void cast.
Casting to void is costless. It is only information for the compiler about how to treat the expression.
For the functionality of your program, casting to void is meaningless. I would also argue that you should not use it to signal something to the person reading the code, as suggested in the answer by David. If you want to communicate something about your intentions, it is better to use a comment. Adding a cast like this only looks strange and raises questions about the possible reason. Just my opinion...
As of C++11 you can also do:
std::ignore = fn();
This should achieve the same result on functions marked with [[nodiscard]].
C++17 [[nodiscard]]
C++17 standardized the "return value ignored business" with an attribute.
Therefore, I hope that compliant implementations will always warn only when nodiscard is given, and never warn otherwise.
Example:
main.cpp
[[nodiscard]] int f() {
    return 1;
}

int main() {
    f();
}
compile:
g++ -std=c++17 -ggdb3 -O0 -Wall -Wextra -pedantic -o main.out main.cpp
outcome:
main.cpp: In function ‘int main()’:
main.cpp:6:6: warning: ignoring return value of ‘int f()’, declared with attribute nodiscard [-Wunused-result]
6 | f();
| ~^~
main.cpp:1:19: note: declared here
1 | [[nodiscard]] int f() {
|
The following all avoid the warning:
(void)f();
[[maybe_unused]] int i = f();
I wasn't able to use maybe_unused directly on the f() call:
[[maybe_unused]] f();
gives:
main.cpp: In function ‘int main()’:
main.cpp:6:5: warning: attributes at the beginning of statement are ignored [-Wattributes]
6 | [[maybe_unused]] f();
| ^~~~~~~~~~~~~~~~
The (void) cast working does not appear to be mandatory but is "encouraged" in the standard: How can I intentionally discard a [[nodiscard]] return value?
Also as seen from the warning message, one "solution" to the warning is to add -Wno-unused-result:
g++ -std=c++17 -ggdb3 -O0 -Wall -Wextra -pedantic -Wno-unused-result -o main.out main.cpp
although I wouldn't of course recommend ignoring warnings globally like this.
C++20 also allows you to add a reason to the nodiscard as in [[nodiscard("reason")]] as mentioned at: https://en.cppreference.com/w/cpp/language/attributes/nodiscard
GCC warn_unused_result attribute
Before the standardization of [[nodiscard]], and in C before attributes were finally standardized, GCC implemented the same functionality with the warn_unused_result attribute:
int f() __attribute__ ((warn_unused_result));

int f() {
    return 1;
}

int main() {
    f();
}
which gives:
main.cpp: In function ‘int main()’:
main.cpp:8:6: warning: ignoring return value of ‘int f()’, declared with attribute warn_unused_result [-Wunused-result]
8 | f();
| ~^~
It should be noted that since ANSI C does not standardize this, it does not specify which C standard library functions carry the attribute, and implementations have made their own decisions about what should or should not be marked with warn_unused_result. That is why, in general, you have to use the (void) cast on the return value of any standard library call if you want to fully avoid warnings on every implementation.
Tested in GCC 9.2.1, Ubuntu 19.10.
Also, when verifying that your code complies with MISRA (or other) standards, static-analysis tools such as LDRA will not let you ignore the value returned by a non-void function unless you explicitly cast the returned value to (void).
I was testing some code and I don't understand why it prints "The higher value is 0".
This is the main function:
int main() {
    double a, b, c;
    a = 576;
    b = 955;
    c = higher(a, b);
    printf("The higher value is %g\n", c);
    return 0;
}
and in another .c I have this function:
double higher(double a, double b){
    if (a > b)
        return a;
    return b;
}
NOTE: if I put higher() in main.c it works correctly, but this way it tells me the higher value is 0. It also works if I cast the return in higher() like this:
return (int)b;
Is there a difference if a function that returns a double is in the same .c as main() or in a different one?
Compile with a C99 or C11 compiler and read the warnings. You are using the function without a prototype.
Without a prototype, pre-C99 C assumes a function returns int by default.
C99 and later require at least a declaration.
Even without additional warnings enabled:
$ cat test.c
int main()
{
    int i = f();
    return 0;
}

int f(void)
{
    return 1;
}
$ gcc -std=c11 test.c
test.c: In function ‘main’:
test.c:13:2: warning: implicit declaration of function ‘f’ [-Wimplicit-function-declaration]
int i = f();
Note that gcc will not warn when compiling with -std=c90, but will if warnings are enabled with -Wall.
So, as higher() is expected to return an int, the value is converted to double by the assignment (the type of c is not changed).
And now for the funny part: undefined behaviour (UB, memorize this phrase!) due to different signature for call and implementation of the function.
What actually happens depends on the procedure call standard (PCS) and the application binary interface (ABI) - check Wikipedia. Briefly: higher itself returns a double. That is likely passed in a floating-point CPU register to the caller. The caller, OTOH, expects the return value (due to the missing prototype) in an integer CPU register (which happens to hold 0 by chance).
So, as the two sides miscommunicate, you get the wrong result. Note that this is a bit speculative and depends on the PCS/ABI. All to remember is that this is UB, so anything can happen, even demons flying out of your nose.
Why use prototypes:
Well, you already noticed: the compiler has no idea whether you call a function correctly. Even worse, it does not know which argument types are used and which result type is returned. This is particularly a problem, as C automatically converts some types (which you encountered here).
As classical K&R (pre-standard) C did not have prototypes, scalar arguments to unknown functions were assumed to be int or double at the call site, and the result type defaulted to int. (That was a long time ago and I might be missing some parts; I started some coding with K&R, messed up types (exactly your problem here, but without a clean solution), etc., threw it in a corner and happily programmed in Modula-2 until some years later I tried ANSI C.)
If you compile code now, you should at least conform to (and compile for) C99; better yet, use the current standard (C11).
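A sketch of the usual fix, assuming the helper stays in its own .c file: put the prototype in a header and include it from both translation units (the file names here are only for illustration).
/* higher.h (hypothetical header) */
#ifndef HIGHER_H
#define HIGHER_H
double higher(double a, double b);
#endif

/* higher.c */
#include "higher.h"

double higher(double a, double b)
{
    if (a > b)
        return a;
    return b;
}

/* main.c */
#include <stdio.h>
#include "higher.h"

int main(void)
{
    double c = higher(576, 955);
    printf("The higher value is %g\n", c);  /* now prints 955 */
    return 0;
}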
I have the following code, which executes successfully, but how is that possible when the function printit does not give the types of the variables passed to it? And why is the float value printed correctly in the output while the character value is not? I even tried typecasting it to char.
main( )
{
    float a = 15.5 ;
    char ch = 'C' ;
    printit ( a, ch ) ;
}

printit ( a, ch )
{
    printf ( "\n%f %c", a, ch ) ;
}
Its execution gives the result:
15.500000 ╠
In pre-standard C, that was a permitted, though even then not encouraged, way of writing code. There were no prototypes; in the absence of a type specifier, variables were of type int.
If the types were not int, you might write:
print_it_2(a, b, c)
struct dohickey *c;
double b;
{
    printf("%d %p %f\n", a, b, c);
}
Note that the variables are listed in one order and defined in another, and a is still an int because it isn't listed at all. The return type of the function was assumed to be int too. However, especially if the function didn't return a value after all (this was before the return type void was supported), people would omit the return type from the declaration and not return a value from the function.
This is no longer good style. If the compiler doesn't complain about it, it should. I compile code (especially for answers on SO) with the extra options:
gcc -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes \
-Wold-style-definition -Werror ll3.c -o ll3
The missing prototypes and strict prototypes and old style definition are about expunging the last vestiges of such old, pre-standard code. I don't always (usually) show those three options in my answers, but they're there when I compile the code. It means you'll see functions made static (to avoid missing prototypes) and int main(void) instead of int main() (to avoid strict prototypes), and old style definition would shout about the functions discussed in the question.
The second part of the question is 'why did the float value get printed correctly and the character not'.
One of the ways in which non-prototype functions vary is that automatic promotions apply. Any type smaller than int (so char or short) is promoted to int, and float is promoted to double (simplifying slightly). So, in the call printit(a, ch), the values actually passed are a double and an int. Inside the function, the code thinks that a is an int (for concreteness, a 4 byte int), and ditto for ch. These two 4 byte quantities are copied onto the stack (again, simplifying slightly), and passed to printf(). Now, printf() is a variadic function (takes a variable number of arguments), and the arguments passed as part of the ellipsis ... are automatically promoted too. So inside printf(), it is told there are 8 bytes on the stack containing a double value, and sure enough, the calling code passed 8 bytes (albeit as two 4-byte units), so the double is magically preserved. Then printf() is told that there's an extra int on the stack, but the int was not put there by the calling code, so it reads some other indeterminate value and prints that character.
This sort of mess is what prototypes prevent. That's why you should never code in this style any more (and that has been true for all of this millennium and even for the latter half of the last decade of the previous millennium).
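As a sketch, here is the prototyped version of the question's program; with the parameter types declared, the arguments are converted at the call site to exactly what printit expects, so both values arrive where the callee looks for them:
#include <stdio.h>

void printit(float a, char ch)
{
    printf("\n%f %c", a, ch);
}

int main(void)
{
    float a = 15.5f;
    char ch = 'C';
    printit(a, ch);
    return 0;
}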
Peter Schneider asks in a comment:
Can you point to the relevant standard which allows or forbids undeclared parameters? If it is indeed forbidden, why does gcc not conform to the standard when asked to?
Taking that code and compiling with GCC 4.8.2 on Mac OS X 10.9.2, I get:
$ gcc -std=c11 -c xyz.c
xyz.c:1:1: warning: return type defaults to ‘int’ [enabled by default]
print_it_2(a, b, c)
^
xyz.c: In function ‘print_it_2’:
xyz.c:1:1: warning: type of ‘a’ defaults to ‘int’ [enabled by default]
xyz.c:5:5: warning: implicit declaration of function ‘printf’ [-Wimplicit-function-declaration]
printf("%d %p %f\n", a, b, c);
^
xyz.c:5:5: warning: incompatible implicit declaration of built-in function ‘printf’ [enabled by default]
$
The same warnings are given with -std=c99. However:
$ gcc -std=c89 -c xyz.c
xyz.c: In function ‘print_it_2’:
xyz.c:5:5: warning: incompatible implicit declaration of built-in function ‘printf’ [enabled by default]
printf("%d %p %f\n", a, b, c);
^
$
And by providing #include <stdio.h>, even that would be quelled.
Section 6.9.1 of ISO/IEC 9899:2011 covers function definitions (and it's the same section number in C99). However, both of these rule out the 'implicit int' behaviour that C89 had to permit (because an awful lot of pre-existing code hadn't heard of the rules).
However, even C11 allows for the old style definition (with complete types):
function-definition:
declaration-specifiers declarator declaration-list(opt) compound-statement
declaration-list:
declaration
declaration-list declaration
The optional 'declaration-list' is the list of declarations between the function declarator and the compound statement that is the function body.
C11 also says:
§6 If the declarator includes an identifier list, each declaration in the declaration list shall have at least one declarator, those declarators shall declare only identifiers from the identifier list, and every identifier in the identifier list shall be declared.
I can't immediately find my copy of 'The Annotated C Standard', a book which contains useful information (the 1989/1990 C standard) on the left hand page and frequently less-than-useful information on the right hand page of each spread. This would allow me to quote the standard, but it is AWOL at the moment.
C89 would have been less stringently worded in this paragraph §6.
As to why gcc doesn't reject the sloppily written old-style code, the answer is still 'backwards compatibility'. There is still old code out there, and gcc allows it to compile when it can, unless its hands are carefully tied behind its back by the use of -Werror.