Ambiguity of return values - c

In the C language book (by K&R), in the section on low-level I/O, I came across two functions, read() and close(), both of which have an integer return type. But I have seen them being used without anyone caring to assign the return value to an integer variable. Yet when I create a user-defined function with an integer return type and use it without assigning the result to an integer variable, it causes a compiler warning. Why this inconsistency?

Compilers traditionally don't warn when you ignore the result of a library function call. Functions like printf, scanf and memcpy do return something, yet someone back in the dark ages of K&R decided that silently skipping the check of their results was acceptable, and it became the de facto standard. To this day, though, ignoring the result remains bad practice in many cases (as with scanf).
Some compilers do warn if you don't check the result of your own application functions, because that's very often a bug. If you deliberately don't want to check the result, you should write (void) func(); to silence such warnings.
(Side note: read and close aren't standard C, but Unix API. Still they are library functions.)
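As a minimal sketch of both habits, assuming a POSIX system (read and close come from <unistd.h>):

#include <stdio.h>
#include <unistd.h>   /* POSIX: read(), close() */

int main(void)
{
    char buf[64];
    ssize_t n = read(0, buf, sizeof buf);   /* worth checking: read() can fail */
    if (n < 0) {
        perror("read");
        return 1;
    }
    (void) printf("read %zd bytes\n", n);   /* result deliberately discarded */
    (void) close(0);                        /* ditto; riskier for files you wrote to */
    return 0;
}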

Why does C let you ignore returned values from functions?

Let's say I have the following C code:
int return_one()
{
    return 1;
}

int main(void)
{
    return_one();
}
Within the main function, I call the function return_one() and ignore the return value. The compiler has no issue with me ignoring this value.
What is the logic as to why this is okay? Was it an arbitrary design choice from the C creators? Or is there a practical reason for not requiring the calling function to use the return value?
I think the main reason is the usual one — history.
Before the C standard, there was no void type, and hence no way to say 'this function returns nothing'. Functions returned an int unless you specified that they returned some other type, and usually the return type was simply omitted, leaving the implicit int. So functions for which the return value was immaterial just didn't return a value, even though they were nominally returning int. You got undefined behavior if the calling code tried to use a value that the called function never returned.
All this meant that it was commonplace to ignore the return value of functions — especially functions that nominally returned an int but actually didn't return any value. There wasn't a better way of dealing with it. Nowadays, with void return types, there are better ways to deal with it. Nevertheless, it remains true that the return value is often of limited interest. How often do you check the return value of printf() or one of its friends? How often do you use the return value of strcpy() et al?
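For reference, here is what those usually-ignored return values actually are, in a small self-contained example:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char dst[16];
    int n = printf("hello\n");        /* number of characters written (6), or negative on error */
    char *p = strcpy(dst, "hello");   /* strcpy simply returns its first argument */
    printf("printf wrote %d chars; strcpy returned \"%s\"\n", n, p);
    return 0;
}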
C90 had to allow old code to run still, so it allowed the old, pre-standard behaviour. C99 tightened the rules — functions were no longer implicitly of type int and had to be declared (with an explicit return type, possibly void) before they could be used.
Because people don't care about a lot of return values, so why force them to use them?
A ton of standard C functions return values that are essentially never looked at. Think about all the C code you've ever read. Have you ever seen anyone do anything with the return value from the printf family of functions? They all have one, but you'd never know it from looking at real-world code. Would the code be improved if every single call to printf had to be prefixed with an explicit "I don't care about the return value" bit of syntax (e.g. (void)), because 99.99% of the time you don't actually care how many bytes you printed, even though printf computes and returns that number anyway? Basically, you're allowed to ignore the return value because there was no need to force you to use it, and most of the time it isn't needed.
tl;dr: You can write code such that the return values are always relevant - in that case, ignoring them would be a bug. However, some (particularly older) code does not work that way, and there, ignoring return values may be reasonable.
What is the logic as to why this is okay? Was it an arbitrary design
choice from the C creators?
It is not just a choice by the C creators; many other languages also allow this. As a matter of fact, I cannot think of a language where ignoring the return value is considered an error.
The practical motivation for this is mainly that the return value of a function is often used for reporting errors, or reporting details of what the function did. Sometimes, you are not interested in these details, so you ignore them. A common example of this is the C function printf, which returns the number of characters printed. While this may sometimes be useful, it is usually not, so the return value of printf is usually ignored.
This is arguably bad design (either in your code, or in the function that returns something no one wants), but it is established practice, so C (like other languages) supports it.
Or is there a practical reason for not requiring the calling function to use the return value?
No, there is no practical reason for ignoring return values - unless you do not want them.
While the above is the historical practice, many people now consider ignoring return values a problem (or a symptom of a deeper problem):
If the return value is for error reporting, ignoring it will usually work, but will cause problems if there really is an error. This is now usually considered a bad idea.
Generally, if you can ignore the return value, the function must be causing side effects (otherwise calling it would be pointless). Many people think it is better to have functions do only one thing - either cause a side effect (and return nothing), or return a value (and have no side effects). The latter is often called a pure function (with the additional condition that the output depends only on the input). This separation makes it easier to understand software - and if you use it, ignoring a return value is necessarily a mistake, because functions that return a result do nothing else, so if you ignore the result, calling them is pointless.
So in other words: if you follow certain conventions, there should be no situation where you want to ignore return values, and in that case ignoring them is usually a mistake. Without these conventions, there may be good reasons for ignoring them, so there is no general rule.
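A small sketch of that separation (square and log_value are made-up names for illustration):

#include <stdio.h>

/* pure: the result depends only on the input and there are no side effects,
   so ignoring the result would make the call pointless */
static int square(int x)
{
    return x * x;
}

/* side effect only: nothing to return, so nothing to ignore */
static void log_value(int x)
{
    fprintf(stderr, "value: %d\n", x);
}

int main(void)
{
    int y = square(7);   /* the result is the whole point of the call */
    log_value(y);
    return 0;
}

Under this convention, a bare statement like square(7); is plainly a bug, while log_value has nothing that could be ignored.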

Do I really need to include string.h?

What will happen if I don't include the header files when running a C program? I know that I get warnings, but the program runs perfectly.
I know that the header files contain function declarations. Therefore when I don't include them, how does the compiler figure it out? Does it check all the header files?
I know that I get warnings, but the program runs perfectly.
That is an unfortunate legacy of pre-ANSI C: the language did not require function prototypes, and for compatibility's sake standard C tolerated prototype-less functions for a long time (usually, a warning can be enabled to flag functions called without a prototype).
When you call a function with no prototype in scope, the compiler makes assumptions about the function being called:
The function's return type is assumed to be int.
The function is assumed to take a fixed number of arguments (i.e. no ... varargs).
Each parameter is assumed to have the type of the corresponding argument after the default promotions (char and short promote to int, float promotes to double).
If the function being called actually fits these assumptions, your program will run correctly; otherwise, it's undefined behavior.
Before the 1989 ANSI C standard, there was no way to declare a function and indicate the types of its parameters. You just had to be very careful to make each call consistent with the called function, with no warning from the compiler if you got it wrong (like passing an int to sqrt()). In the absence of a visible declaration, any function you call was assumed to return int; this was the "implicit int" rule. A lot of standard functions do return int, so you could often get away with omitting a #include.
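For instance, the sqrt() mistake just mentioned looks like this (a sketch, assuming a pre-C99 or lenient compiler that accepts the implicit declaration):

/* no #include <math.h>, so sqrt is implicitly declared as: int sqrt(); */
#include <stdio.h>

int main(void)
{
    double d = sqrt(2);   /* undefined behavior: the real sqrt takes and returns a double */
    printf("%f\n", d);    /* typically prints garbage */
    return 0;
}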
The 1989 ANSI C standard (which is also, essentially, the 1990 ISO C standard) introduced prototypes, but didn't make them mandatory (and they still aren't). So if you call
int c = getchar();
it would actually work, because getchar() returns an int.
The 1999 ISO C standard dropped the implicit int rule, and made it illegal (actually a constraint violation) to call a function with no visible declaration. So if you call a standard function without the required #include, a C99-conforming compiler must issue a diagnostic (which can be just a warning). Non-prototype function declarations (ones that don't specify the types of the arguments) are still legal, but they're considered obsolescent.
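As a minimal illustration, a C99 compiler must diagnose the following, while a C90 compiler may accept it silently:

/* no #include <stdio.h>, and no declaration of puts anywhere */
int main(void)
{
    puts("hello");   /* C90: implicitly declared as int puts(); C99: constraint violation */
    return 0;
}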
(The 2011 ISO C standard didn't change much in this particular area.)
But there's still plenty of code out there that was written for C90 compilers, and most modern compilers still support the older standard.
So if you call a standard function without the required #include, what will probably happen is that (a) the compiler will warn you about the missing declaration, and (b) it will assume that the function returns int and takes whatever number and type(s) of arguments you actually passed it (also accounting for type promotion, such as short to int and float to double). If the call is correct, and if your compiler is lenient, then your code will probably work -- but you'll have one more thing to worry about if it fails, perhaps for some unrelated reason.
Variadic functions like printf are another matter. Even in C89/C90, calling printf with no visible prototype had undefined behavior. A compiler can use an entirely different calling convention for variadic functions, so printf("hello") and puts("hello") might generate quite different code. But again, for compatibility with old code, most compilers use a compatible calling convention, so for example the first "hello world" program in K&R1 will probably still compile and run.
You can also write your own declarations for standard functions; the compiler doesn't care whether it sees a declaration in a standard header or in your own source file. But there's no point in doing so. Declarations have changed subtly from one version of the standard to the next, and the headers that came with your implementation should be the correct ones.
So what will actually happen if you call a standard function without the corresponding #include?
In a typical working environment, it doesn't matter, because with any luck your program won't survive code review.
In principle, any compiler that conforms to C99 or later may reject your program with a fatal error message. (gcc will behave this way with -std=c99 -pedantic-errors) In practice, most compilers will merely print a warning. The call will probably work if the function returns int (or if you ignore the result) and if you get all the argument types correct. If the call is incorrect, the compiler may not be able to print good diagnostics. If the function doesn't return int, the compiler will probably assume that it does, and you'll get garbage results, or even crash your program.
So you can study this answer of mine, follow up by reading the various versions of the C standard, find out exactly which edition of the standard your compiler conforms to, and determine the circumstances in which you can safely omit a #include header -- with the risk that you'll mess something up next time you modify your program.
Or you can pay attention to your compiler's warnings (which you've enabled with whatever command-line options are available), read the documentation for each function you call, add the required #includes at the top of each source file, and not have to worry about any of this stuff.
First of all: just include them.
If you don't, the compiler will treat each undeclared function as if you had declared it like this:
int functionName();   /* returns int; parameter types are not checked */
So it will compile, and it will link if the functions are available. But you may well have problems at runtime.
There are a lot of things you can't do if you leave out headers:
You can't use any of the macros defined in the headers. This can be significant.
The compiler can't check that you are calling functions properly, since the headers declare their parameter types for it.
For compatibility with old programs, C compilers can compile code that calls functions which have not been declared, assuming the parameters and the return value are of type int. What can happen then? See for example this question: Troubling converting string to long long in C. I think it's a great illustration of the problems you can run into if you don't include the necessary headers and so don't declare the functions you use. The asker tried to use atoll without including stdlib.h, where atoll is declared:
char s[30] = { "115" };
long long t = atoll(s);   /* atoll was never declared */
printf("Value is: %lld\n", t);
Surprisingly, this printed 0, not 115 as expected! Why? Because the compiler didn't see the declaration of atoll, assumed its return value was an int, and so picked up only part of the value the function returned; in other words, the return value got truncated.
That's one of the reasons it is recommended to compile your code with -Wall (all warnings on).
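For completeness, the fix is just the missing include; with it, the same code behaves as documented:

#include <stdio.h>
#include <stdlib.h>   /* declares: long long atoll(const char *); */

int main(void)
{
    char s[30] = { "115" };
    long long t = atoll(s);
    printf("Value is: %lld\n", t);   /* now prints 115 */
    return 0;
}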

Why does the standard let functions that don't return compile?

f() does not return a value even though its signature says it should.
What is the reason for allowing this to compile?
Is there a reason the C standard does not require the compiler to make it fail?
I know that it is Undefined behavior and all, but why is it allowed in the first place?
Is there a historical reason?
double f(){}

int main()
{
    f();
    return 0;
}
Is there a reason the C standard does not require the compiler to make
it fail?
By making this undefined behavior, the C standard allows compilers to be less complicated. There are indeed cases, such as functions containing if statements, in which it is hard to say whether the function returns a value or not:
int f(int n)
{
    if (n > 0) return 1;
}
If I write f(5), it is easy for the compiler to see that the function returns a value.
If I write f(-5), it is also easy to detect an undefined return value.
But if the argument comes from user input, for example, how should the compiler be able to know whether the function returns a value? Since this could be either a valid or an invalid program, the C standard allows compilers to do what they want here. C is designed to be as simple as possible.
The compiler could certainly analyze all code paths of the function and reject the program if it cannot prove that the function always returns a meaningful value. I suppose the reason the standard does not mandate this is that in the old days compilers were much less sophisticated than the ones we work with today.
Of course using the return value of such a function is undefined behavior:
If the } that terminates a function is reached, and the value of the
function call is used by the caller, the behavior is undefined.
It's easy to tell that your function will reach the end without returning a value: it contains no return statement, and it doesn't call any code that could prevent it from reaching the end (like abort()).
In fact your program does not have undefined behavior in C99, since the missing return value isn't used by the caller. 6.9.1/12:
If the } that terminates a function is reached, and the value of the
function call is used by the caller, the behavior is undefined.
So your code has questionable style, but is well-defined.
The C++ standard changes the rule and remarks on that change in [diff.stat]. It says that the C version of the rule is to support code that was written back in the days when C didn't distinguish between int return and void return. So the reason your code has defined behavior in the first place is "legacy". That said, AFAIK C has always distinguished between double return and int return, so it could probably have made it UB for a function returning double to fall off the end, had it been done at the right time.
Leaving aside whether the return value is used, consider a trickier function:
double f() {
    if (g()) exit(1);   /* exit() takes a status argument; never falls off the end if g() is true */
}
This function also contains no return statement, but it doesn't reach the end of the function if in fact g always returns a true value or doesn't return at all. So this function should be accepted even if its return value is used, on the general C standard principle that you're expected to know what you're doing and mean what you say. If g is defined in a different translation unit then you probably know more about it than the compiler does.
Even if it weren't for the legacy reasons, I'm pretty sure the standard's authors simply couldn't be bothered to add text defining which non-return scenarios compilers are required to detect. It's left to quality of implementation: if it can be determined at compile time that your function cannot possibly avoid UB, then maybe the compiler will warn anyway, despite no diagnostic being required. For that matter, it will occasionally warn even when behavior is defined, on the general C implementer's principle that some things are so daft that no user could reasonably mean them.
Because the compiler cannot, in general, tell whether the function will return at runtime or not.

Confused over function call in pre-ANSI C syntax

I'm dealing with some pre-ANSI C syntax. I have the following function call in one conditional:
BPNN *net;
// Some more code
double val;
// Some more code, and then,
if (evaluate_performance(net, &val, 0)) {
But then the function evaluate_performance was defined as follows (below the function which has the above-mentioned conditional):
evaluate_performance(net, err)
BPNN *net;
double *err;
{
How come evaluate_performance was defined with two parameters but called with three arguments? What does the '0' mean?
And, by the way, I'm pretty sure that it isn't calling some other evaluate_performance defined elsewhere; I've grepped through all the files involved and I'm pretty sure that we are talking about the same evaluate_performance here.
Thanks!
If you call a function that doesn't have a declared prototype (as is the case here), then the compiler assumes that it takes an arbitrary number and types of arguments and returns an int. Furthermore, char and short arguments are promoted to ints, and floats are promoted to doubles (these are called the default argument promotions).
This is considered bad practice in new C code, for obvious reasons: if the function doesn't return int, badness could ensue; you prevent the compiler from checking that you're passing the correct number and types of arguments; and arguments might get promoted incorrectly.
C99 removed this feature from the language, but in practice many compilers still allow it even when operating in C99 mode, for legacy compatibility.
As for the extra argument, passing it is technically undefined behavior according to the C89 standard, but in practice it will typically just be ignored at runtime.
The code is incorrect, but in a way that a compiler is not required to diagnose. (A C99 compiler would complain about it.)
Old-style function definitions don't specify the number of arguments a function expects. A call to a function without a visible prototype is assumed to return int and to have the number and type(s) of arguments implied by the calls (with narrow integer types being promoted to int or unsigned int, and float being promoted to double). (C99 removed this; your code is invalid under the C99 standard.)
This applies even if the definition precedes the call (an old-style definition doesn't provide a prototype).
If such a function is called incorrectly, the behavior is undefined. In other words, it's entirely the programmer's responsibility to get the arguments right; the compiler won't diagnose errors.
This obviously isn't an ideal situation; it can lead to lots of undetected errors.
Which is exactly why ANSI added prototypes to the language.
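For comparison, here is a sketch of the prototyped equivalent (BPNN and the elided body come from the question's code):

int evaluate_performance(BPNN *net, double *err);   /* prototype: calls are now checked */

int evaluate_performance(BPNN *net, double *err)
{
    /* ... same body as before ... */
}

With the prototype visible, the three-argument call would be rejected at compile time.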
Why are you still dealing with old-style function definitions? Can you update the code to use prototypes?
Even standard C compilers are somewhat permissive when it comes to this. Try running the following:
#include <stdio.h>

int foo()   /* empty parameter list: argument count and types are not checked */
{
    printf("here");
}

int main()
{
    foo(3,4);
    return 0;
}
It will, to some people's surprise, output "here". The extra arguments are simply ignored. Of course, this depends on the compiler and calling convention.
Overloading doesn't exist in C, so having two conflicting declarations would not work in the same translation unit.
That must be quite an old compiler not to error out on this one, or it had simply not seen the declaration of the function yet!
Some compilers do not warn or error when calling an undeclared function; that's probably what you're running into. I suggest you look at the compiler's command-line flags to see whether there is one that enables these warnings, because you may find quite a few similar mistakes (passing too many arguments is likely to work just fine, but too few will make the function use "undefined" values...).
Note that it is possible to do this (pass extra arguments) when the function is declared with an ellipsis, as printf() is:
int printf(const char *format, ...);
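To see why extra arguments are usually harmless with this mechanism, here is a small sketch of a variadic function (sum_ints is a made-up name for illustration): the callee reads only the arguments it asks for and never touches the rest.

#include <stdarg.h>
#include <stdio.h>

/* sums 'count' ints passed after the first argument */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;
    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(2, 10, 20, 999));   /* prints 30; the 999 is never read */
    return 0;
}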
I would imagine that the function had three parameters at some point and the last one was removed because it was unused, while some parts of the code were not corrected as they ought to have been. I would remove that third argument from the call, in case the calling convention places arguments in an order that makes the wrong values reach the function.
