Check 'puts' return value, then what? - c

I'm starting to write code that checks functions' return values, but I'm not sure how to proceed after I catch an error.
For example, in fgets:
while (fgets(rta, 3, stdin) == NULL) {
    printf("An error occurred while reading your answer. Please try again.\n");
}
But for puts ?
It will return EOF on error, so I write:
if (puts("Message") == EOF) {
    /* error handling... */
}
The question is, what am I supposed to do if it fails? I would think of displaying a message in the console (this is a console app), but if puts fails, then my error message would also fail (because it would also use puts).
Should I use assert to display a message and end the app?
Thanks a lot.

Unless you have some desperate need to communicate with someone/something that is no longer available, you typically ignore the return value of such functions.
If you really need a record of your progress or failure, then open a log file and write all of your messages there (or use something like syslog).
Basically, if you're using stdio, then it is likely someone at some point will either hook your program to a pipe (e.g. | less) or do some other such thing that will lead to stdout going away.
Programs like lint will complain when you ignore the return value of printf and puts, but the truth is 99% of those return values are useless except in extreme cases.

Well, as they say: "If there's any doubt, there is no doubt."
It's very good of you to incorporate disciplined thinking like making sure all cases are accounted for.
But if you're unsure what to do with the return value, then clearly you don't need to do anything with it.
To communicate this decision in code, you may explicitly discard the return value by casting the expression to void.
(void)puts("");
This will hush-up any warning messages about ignoring the return value.


ignoring return value (C Programming) [duplicate]

I just started learning C programming a few days back and I'm trying out some problems from the Kattis (open.kattis.com) website. Along the way I ran into this warning, and I don't really understand what it means.
// two stones
#include <stdio.h>

int main()
{
    int n, x = 2;
    printf("The number of stones placed on the ground is ");
    scanf("%d", &n);
    if (n % x != 0)
    {
        printf("The winner of the game is Alice!\n");
    }
    else
    {
        printf("The winner of the game is Bob!\n");
    }
    return 0;
}
This warning appeared:
warning: ignoring return value of 'scanf', declared with attribute warn_unused_result [-Wunused-result]
regarding the line scanf("%d", &n);
Can anyone explain what's wrong with this and how to rectify this problem? Thanks.
scanf has a return value that indicates success:
C Standard; §7.19.6.4.3:
The scanf function returns the value of the macro EOF if an input failure occurs before
any conversion. Otherwise, the scanf function returns the number of input items
assigned, which can be fewer than provided for, or even zero, in the event of an early
matching failure.
If you have a format string in your call to scanf that has one format specifier, then you can check that scanf succeeded in receiving an input of that type from the stdin by comparing its return value to 1.
Your compiler is warning you about this not specifically because scanf returns a value, but because it's important to inspect the result of scanf. A standard-compliant implementation of printf, for example, will also return a value (§7.19.6.3.3), but it's not critical to the soundness of your program that you inspect it.
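For instance, with two conversion specifiers you would compare against 2 (a minimal sketch; the variable names are mine):
#include <stdio.h>

int main(void)
{
    int day, month;
    /* Two specifiers, so success means a return value of 2; anything
       less is a matching failure or an input failure (EOF). */
    if (scanf("%d %d", &day, &month) != 2) {
        fprintf(stderr, "Could not read two integers.\n");
        return 1;
    }
    printf("Read %d and %d\n", day, month);
    return 0;
}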
You are ignoring the return value of the scanf call.
That is what the compiler warns about and was told to treat as an error.
Please understand that there are many subtle mistakes you can make with scanf() while not caring about its success, which is indicated by the return value.
To hide the problem which the compiler kindly notifies you about, I recommend first trying the "obvious" straightforward approach:
int IreallyWantToIgnoreTheImportantInfo;
/* ... */
IreallyWantToIgnoreTheImportantInfo = scanf("%d",&n);
This will however only move the problem somewhere else and the valid reason about ignoring the scanf() return value will then probably (or maybe "hopefully") turn into a "variable set but never used" warning.
The proper way to really solve the problem here is to USE the return value.
You could e.g. make a loop, which attempts reading user input (giving an explanation and removing unscanned attempts) until the return value indicates success.
That would probably make much better code, at least much more robust.
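Such a loop might look like this (one possible sketch, not the only way):
#include <stdio.h>

int main(void)
{
    int n;

    printf("Please enter a number: ");
    /* loop until scanf reports exactly one successful conversion */
    while (scanf("%d", &n) != 1) {
        int c;
        /* discard the rest of the offending line so the same
           garbage is not re-parsed forever */
        while ((c = getchar()) != '\n')
            if (c == EOF)
                return 1; /* input closed: give up instead of spinning */
        printf("Invalid input. Please enter a number: ");
    }
    printf("Got %d\n", n);
    return 0;
}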
If you really really want to ignore, without instead ignoring a variable which contains the info, then try
(void) scanf("%d",&n); /* I really really do not care ... */
But please take this as helpfully as I mean it: that is not a good idea.
Can someone explain to me what's wrong with this and how to rectify this problem?
Many C functions return values to their callers. C does not require the caller to acknowledge or handle such return values, but usually, ignoring return values constitutes a program flaw. This is because ignoring the return value usually means one of these things is happening:
the function was called in order to obtain its return value, so failing to do anything with that value is an error in itself, or
the function was called primarily for its side effects, but its return value, which conveys information about the function's success in producing those side effects, was ignored. Occasionally the caller really doesn't care about the function's [degree of] success, but usually, ignoring the return value means the program is assuming complete success, such that it will malfunction if that was not actually achieved.
scanf() is ordinarily called for its side effects: reading formatted data from the standard input and recording it in the specified objects. Its return value indicates how many of the given input fields were successfully processed, and if the end of the stream or an I/O error was encountered before parsing any fields, then the return value indicates that, too, via a special return value.
If you do not verify that scanf read all the fields you expected it to, then you do not know whether it gave you any data to work with, nor can you be confident about the state of the input. For example, suppose that when you run your program, you enter "x" instead of a number. What do you think it will do?
You appear to be using GCC and GLIBC. These are instrumented to produce warnings by default when the return values of certain functions, including scanf, are ignored. This catches many of the more common cases of such flaws. To avoid such warnings, check the return value (appropriately). For example,
if (scanf("%d",&n) != 1) {
fputs("Invalid input -- aborting.\n", stderr);
exit(1);
}
What happens is that your compiler is configured to warn you if you don't check the value returned by scanf.
You have many ways to "fix" this:
you can configure your compiler to stop warning you. This is usually a bad idea, but since you're still learning C, it may be counterproductive to focus on error checking at this stage.
You can put the result of scanf in a variable... and do nothing with it. It will probably fool the compiler. As with the previous option, not a good idea...
You can actually do the error checking of scanf. It will probably be confusing since you're learning C, but at least it will be a very good habit to have: each time you call a function that may fail, check whether it succeeded or failed. To do that, you will have to read the scanf manual. Don't try to read it all, you will probably get a headache before the end. Just read the "Return Value" section; it's enough.
Good luck !
What your compiler is warning you about is this:
You are reading input (which you don't control) with scanf(). You tell scanf() that you expect an integer "%d", and scanf() is supposed to place the result of the conversion into the variable you supplied with &n.
Now, what happens when your user does not input an integer, but says "evil message" instead? Well, scanf() cannot convert that into an integer. Accordingly, your variable n will not be initialized. And the only way for your program to realize that something went wrong is to check what scanf() returns. If it returns that it has made 1 successful conversion, everything's OK. If it returns some other value, you know that some garbage was input into your program. (Usually you would want to bail out with a descriptive error message in case of an error, but details depend on the context.)
Failures to handle input errors are among the easiest to exploit security vulnerabilities, and the developers of your compiler know this. Thus, they think that it's generally a really bad idea to ignore the return value of scanf(), as it's the only way to catch input errors with scanf(). And they conveniently tell you about this. Try to follow their advice, and make sure that you actually handle the errors that may occur, or prove that they are safe to ignore.

Should I check every single parameter of a function to make sure the function works well?

I notice that the standard C library contains several string functions that don't check the input parameters (whether they're NULL), like strcmp:
int strcmp(const char *s1, const char *s2)
{
    for (; *s1 == *s2; s1++, s2++)
        if (*s1 == '\0')
            return 0;
    return ((*(unsigned char *)s1 < *(unsigned char *)s2) ? -1 : +1);
}
And many others do not do the same validation. Is this a good practice?
In other library, I saw they check every single parameter, like this:
int create_something(int interval, int mode, func_t cb, void *arg, int id)
{
    if (interval == 0) return err_code_1;
    if (valid(mode))   return err_code_2;
    if (cb == NULL)    return err_code_3;
    if (arg == NULL)   return err_code_4;
    if (id == 0)       return err_code_5;
    // ...
}
Which one is better? When you design an API, would you check all parameters to make it function well or just let it go crash?
I'd like to argue that not checking pointers for NULL in library functions that expect valid pointers is actually better practice than returning errors or silently ignoring them.
NULL is not the only invalid pointer. There are billions of other pointer values that are actually incorrect, why should we give preferential treatment to just one value?
Error returns are often ignored, misunderstood or mismanaged. Forgetting to check one error return could lead to a misbehaving program. I'd like to argue that a program that silently misbehaves is worse than a program that doesn't work at all. Incorrect results can be worse than no results.
Failing early and hard eases debugging. This is the biggest reason. An end user of a program doesn't want the program to crash, but as a programmer I'm the end user of a library and I actually want it to crash. Crashing makes it evident that there's a bug I need to fix and the faster we hit the bug and the closer the crash is to the source of the bug, the faster and easier I can find it and fix it. A NULL pointer dereference is one of the most trivial bugs to catch, debug and fix. It's much easier than trawling through gigabytes of logs to spot one line that says "create_something had a null pointer".
With error returns, what if the caller catches that error, returns an error itself (in your example that would be err_create_something_failed), and its caller returns another error (err_caller_of_create_something_failed)? Then you have an error return 3 functions away that might not even indicate what actually went wrong. And even if it manages to indicate what actually went wrong (by having a whole framework for error handling that records exactly where the error happened through the whole chain of callers), the only thing you can do with it is look up the error value in some table and from that conclude that there was a NULL pointer in create_something. It's a lot of pain when instead you could just have opened a debugger and seen exactly where the assumption was violated and what exact chain of function calls led to that problem.
In the same spirit you can use assert to validate other function arguments to cause early and easy to debug failures. Crash on the assert and you have the full correct call chain that leads to the problem. I just wouldn't use asserts to check pointers because it's pointless (at least on an operating system with memory management) and makes things slower while giving you the same behavior (minus the printed message).
You can use assert.h to check your parameters:
assert(pointer != NULL);
That will make the program fail in debug mode if pointer == NULL, but there will be no check at all in release builds (where NDEBUG is defined), so you can check everything you want with no performance hit.
Anyway, if a function requires parameters within a certain range, checking them inside the function is a waste of resources; it is the user of your API who should do the checks.
But it is up to you how you want to design the API. There is no correct way on that matter: if a function expects a number between 1 and 5 and the user passes a 6, you can perform a check or simply specify that the function will have undefined behaviour.
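A minimal sketch of that trade-off (the function is made up): the assert documents and checks the precondition in debug builds, and disappears when NDEBUG is defined.
#include <assert.h>
#include <stddef.h>

/* Documented precondition: s must be a valid NUL-terminated string.
   The assert fires in debug builds; with NDEBUG defined it compiles
   away, so release builds pay nothing for it. */
size_t count_spaces(const char *s)
{
    assert(s != NULL);
    size_t count = 0;
    while (*s)
        if (*s++ == ' ')
            count++;
    return count;
}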
There is no universally correct way to perform argument validation. In general, you should use assert when you can to validate arguments, but assert is usually disabled in non-debug builds and might not always be appropriate.
There are several things to consider that can vary from case to case, such as:
Do you expect your function to be called a lot? Is performance critical? If a caller will be invoking your function many, many times in a tight loop, then validating arguments can be expensive. This is especially bad for inline functions and if the runtime cost of the validation checks dwarfs the runtime cost of the rest of your function.
Are the checks easy for the caller to perform? If the checks are non-trivial, then it's less error-prone to do validation in the function itself than forcing the extra work on the callers. Note that in some cases, callers might not even be able to perform proper validation themselves (for example, if there's a possibility of a race condition in checking the argument's validity).
Is your function well documented? Does it clearly describe its preconditions, specifying what valid values for its arguments are? If so, then you usually should consider it the caller's responsibility to pass valid arguments.
Is your function self-documenting? Is it obvious to callers what valid arguments are?
Should passing a bad argument be a logic error or a runtime error? That is, should it be considered a programmer's mistake? Is it likely that the argument could come directly from user input? You should consider how you expect callers to use your function. If assertions are enabled, should a bad argument be fatal and terminate the program?
Who are your function's users? Is your function going to be used internally (where you might have some expectation for the competence of other programmers using it), or is it going to be exposed to the general public? If the latter, what failure mode will minimize the amount of technical support that you need to provide? (The stance I took with my dropt library is that I relied on assertions to validate arguments to internal functions and reported error codes for public functions.)
I notice that the standard C library contains several string functions that don't check the input parameters (whether they're NULL), like strcmp:
The string handling functions of the standard C library require "... pointer arguments on such a call shall still have valid values ..." C11dr §7.24.1 2
NULL is not a valid pointer to a string and there is no requirement on the function's part to check pointer validity, so no NULL check.
C's performance does come at a price.
When you design an API, would you check all parameters to make it function well or just let it go crash?
This enters design philosophy. Consider a simpler example. Should the API test the input parameters first? It depends on coding goals.
int add(int a, int b) {
    return a + b;
}

// return 1 on failure
int add_safe(int *sum, int a, int b) {
    if (a >= 0) {
        if (b > INT_MAX - a) return 1; // overflow
    } else {
        if (b < INT_MIN - a) return 1; // underflow
    }
    if (sum) { // NULL check
        *sum = a + b;
    }
    return 0;
}
When in doubt, create an API that does nominal checking. Use strong checking of inputs that may originate from a user (human) or another process. If you want heavy pedantic checking, C is not an efficient target language choice.
With many an API I have made, NULL is a valid input value as a pointer, and the code adjusts its functionality based on that, as above. A NULL check is made, but it is not an error check.
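For completeness, calling the checked variant might look like this (a sketch, assuming the add_safe above is in scope):
#include <limits.h>
#include <stdio.h>

int add_safe(int *sum, int a, int b); /* defined above */

int main(void)
{
    int sum;
    if (add_safe(&sum, INT_MAX, 1) != 0)
        fprintf(stderr, "overflow detected\n");
    else
        printf("sum = %d\n", sum);
    return 0;
}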

Is it recommended/advisable to use infinite loops in C programming?

Here is an example. For small programs that's fine to use, but what if we are developing a real-time project or application? Need some suggestions.
while (TRUE)
{
    int temp = 0;
    printf("How many no's would you like to enter : ");
    temp = scanf("%d", &n);
    if (temp == 1)
        break;
    else
    {
        printf("Invalid input. Try again.\n");
        fflush(stdin);
    }
}
The trouble with any loop, be it while(TRUE) or while(condition), is its tendency to harbor a sneaky infinite-loop condition - like this one.
OP's code depends on 2 results of scanf("%d", ...): 1 and not 1.
If user enters "123", all is good. scanf() returns 1, loop exits, we all leave work and have a pint.
If user enters "abc", scanf() returns 0, code does a fflush(stdin) to empty the stdin. (This is really UB, but let us pretend it works.) Code loops, prompts again, our drinks get warm, but hopefully we will eventually enter digits.
But let us imagine the user closed stdin - maybe code had re-directed input and scanf() eventually returns EOF. Code loops, but fflush(stdin) does not re-open stdin, and scanf() again returns EOF - and again and again - true infinite loop - code will not pause for input and just says ""Invalid input. Try again." which translates into "dumb guy, dumb guy, dumb guy ...". Looks like the crew will start their brews without us.
Moral of the story: loop when code is working as intended (User entered good data). Watch out for loops when functions loops on the unexpected.
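A sketch of a loop that honors that advice, distinguishing "bad input" (retry) from "no more input" (EOF - stop instead of spinning forever); the function name is mine:
#include <stdio.h>

int read_count(int *n)
{
    for (;;) {
        int rc = scanf("%d", n);
        if (rc == 1)
            return 0;    /* success */
        if (rc == EOF)
            return -1;   /* stdin closed: do NOT loop */
        scanf("%*s");    /* matching failure: discard the bad token */
        printf("Invalid input. Try again: ");
    }
}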
If you're asking about style, recommendations are going to be somewhat subjective.
If you're afraid that your loops may turn into true infinite loops when undesired, then you need to do something about that. For example:
redesign your code to be an explicit state machine with clearly documented and implemented conditions for when the state changes and when it remains the same (a toy sketch follows this list)
think through the various error conditions that may occur (and thus potentially cause hanging in a specific state indefinitely) and handle them the best you can, also think of those conditions that are fatal and can't be handled gracefully (do you crash and burn? do you log an error? do you show it on some dash board? do you eject the driver's seat with the driver? do you dial an emergency phone number if applicable? etc etc), incorporate that into the state machine
review the code with your peers (maybe even hire industry experts for this)
implement and use tests (unit/integration/scenario-based/scalability/stress/security/etc etc)
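As a toy illustration of the state-machine idea (assuming the same scanf-based input as the question; the state names are mine), every transition is explicit, including the exits, so no loop can hang unnoticed:
#include <stdio.h>

enum state { PROMPT, READ, DONE, FAIL };

int main(void)
{
    enum state s = PROMPT;
    int n = 0;

    while (s != DONE && s != FAIL) {
        switch (s) {
        case PROMPT:
            printf("Enter a number: ");
            s = READ;
            break;
        case READ: {
            int rc = scanf("%d", &n);
            if (rc == 1)
                s = DONE;
            else if (rc == EOF)
                s = FAIL;       /* explicit dead end, no hang */
            else {
                scanf("%*s");   /* drop the bad token */
                s = PROMPT;
            }
            break;
        }
        default:
            break;
        }
    }
    return s == DONE ? 0 : 1;
}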

Is it a bad practice to output error messages in a function with one input and one output [closed]

I was once told that functions with one input and one output (not exactly one) should not print messages when called. But I don't understand why. Is it for security or just for convention?
Let me give an example. How to deal with an attempt that access data in a sequential list with an incorrect index?
// 1. Give out the error message inside the function directly.
DataType GetData(seqList *L, int index)
{
    if (index < 0 || index >= L->length) {
        printf("Error: Access beyond bounds of list.\n");
        // exit(EXIT_FAILURE);
    }
    return L->data[index];
}
// 2. Return a value or use a global variable (like errno) that
//    indicates whether the function performs successfully.
StateType GetData(seqList *L, int index, int *data)
{
    if (index < 0 || index >= L->length) {
        return ERROR;
    }
    *data = L->data[index];
    return OK;
}
I think there are two things going on here:
Any visible and unexpected side-effect such as writing to streams is generally bad, and not just for functions with one input and one output. If I was using a list library, and it started silently writing error messages to the same output stream I was using for my regular output, I'd consider that a problem. However, if you are writing such a function for your own personal use, and you know ahead of time that the action you want taken is always to print a message and exit(), then it's fine. Just don't force this behavior on everyone else.
This is a specific case of the general problem of how to inform callers about errors. A lot of the time, a function cannot know the correct response to an error, because it doesn't have the context that the caller does. Take malloc(), for instance. The vast majority of the time, when malloc() fails, I just want to terminate, but once in a great while I might want to deliberately fill the memory by calling malloc() until it fails, and then proceed to do something else. In this case, I don't want the function to decide whether or not to terminate - I just want it to tell me it's failed, and then pass control back to me.
There are a number of different approaches to handling errors in library functions:
Terminate - fine if you're writing a program yourself, but bad for a general purpose library function. In general, for a library function, you'll want to let the caller decide what to do in the case of an error, so the function's role is limited to informing the caller of the error.
Return an error value - sometimes OK, but sometimes there is no feasible error value. atoi() is a good case in point - all the possible values it returns could be correct translations of the input string. It doesn't matter what you return on error, be it 0, -1 or anything else, there is no way to distinguish an error from a valid result, which is precisely why you get undefined behavior if it encounters one. It's also semantically questionable from a slightly purist point of view - for instance, a function which returns the square root of a number is one thing, but a function which sometimes returns the square root of a number, but which sometimes returns an error code rather than a square root is another thing. You can lose the self-documenting simplicity of a function when return values serve two completely separate purposes.
Leave the program in an error state, such as setting errno. You still have the fundamental problem that if there is no feasible return value, the function still can't tell you that an error has occurred. You could set errno to 0 in advance and check it afterwards every time, but this is a lot of work, and may just not be feasible when you start involving concurrency. (See the strtol sketch after this list.)
Call an error handling function - this basically just passes the buck, since the error function then also has to address the issues above, but at least you could provide your own. Also, as R. notes in the comments below, other than in very simple cases like "always terminate on any error", it can be asking too much of a single global error handling function to sensibly handle any error that might arise in a way that your program can then resume normal execution. Having numerous error handling functions and passing the appropriate ones individually to each library function is technically possible, but hardly an optimal solution. Using error handling functions in this way can also be difficult or even impossible to use correctly in the presence of concurrency.
Pass in an argument that gets modified by the function if it encounters an error. Technically feasible, but it's not really desirable to add an additional parameter for this purpose to every library function ever written.
Throw an exception - your language has to support them to do this, and they come along with all kinds of associated difficulties including unclear structure and program flow, more complex code, and the like. Some people - I'm not one of them - consider exceptions to be the moral equivalent of longjmp().
All the possible ways have their drawbacks and advantages, as of yet humanity has not discovered the perfect way of reporting errors from library functions.
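As a concrete instance of the errno approach from the list above, here is a sketch using strtol, which sets errno to ERANGE on overflow:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *input = "99999999999999999999"; /* deliberately too large */
    char *end;

    errno = 0;                        /* clear before the call */
    long v = strtol(input, &end, 10);

    if (errno == ERANGE)
        fprintf(stderr, "out of range (clamped to %ld)\n", v);
    else if (end == input)
        fprintf(stderr, "no digits found\n");
    else
        printf("parsed %ld\n", v);
    return 0;
}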
In general you should make sure you have a consistent and coherent error handling strategy, which means considering whether you want to pass an error up to a higher level or handle it at the level it initially occurs. This decision has nothing to do with how many inputs and outputs a function has.
In a deeply embedded system where a memory allocation failure occurs at a critical juncture, for example, there's no point passing that error back up (and indeed you may well not be able to) - all you can do might be enter a tight loop to let the watchdog reset you. In this case there's no point reserving invalid return values to indicate error, or indeed in even checking the return value at all if it doesn't make sense to do so. (Note I am not advocating just lazily not bothering to check return values, that is a different matter entirely).
On the other hand, in a lovely beautiful GUI app you probably want to fail as gracefully as possible and pass the error up to a level where it can be either worked around / retried / whatever is appropriate; or presented to the user as an error if nothing else can be done.
It is better to use perror() to display error messages rather than printf().
Syntax:
void perror(const char *s);
Also, error messages are supposed to be sent to the stderr stream rather than stdout.
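For example (a minimal sketch; the file name is made up):
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("missing.txt", "r");
    if (f == NULL) {
        /* writes "missing.txt: No such file or directory" (or similar)
           to stderr, based on the current value of errno */
        perror("missing.txt");
        return 1;
    }
    fclose(f);
    return 0;
}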
Yes, it's a bad practice; even worse is that you're sending the output to stdout rather than stderr. This could end up corrupting data by mixing error message in with output.
Personally, I believe very strongly that this kind of "error handling" is harmful. There is no way you can validate that the caller passed a valid value for L, so checking the validity of index is inconsistent. The documented interface contract for the function should simply be that L must be a valid pointer to an object of the correct type, and index a valid index (in whatever sense is meaningful to your code). If an invalid value for L or index is passed, this is a bug in your code, not a legitimate error that can occur at runtime. If you need help debugging it, the assert macro from assert.h is probably a good idea; it makes it easy to turn off the check when you no longer need it.
One possible exception to the above principle would be the case where the value of L is coming from other data structures in your program, but index is coming from some external input that's not under your control. You could then perform an external validation step before calling this function, but if you always need the validation, it makes sense to integrate it like you're doing. However, in that case you need to have a way to report the failure to the caller, rather than printing a useless and harmful message to stdout. So you need to either reserve one possible return value as an error sentinel, or have an extra argument that allows you to return both a result and an error status to the caller.
Return a reserved value that's invalid for a success condition. For example, NULL.
It is advisable not to print because:
It doesn't help the calling code reason about the error.
You're writing to a stream that might be used by higher level code for something else.
The error may be recoverable higher, so you might be just printing misleading error messages.
As others have said, consistency in how you deal with error conditions is also an important factor. But consider this:
If your code is used as a component of another application, one that does not follow your printing convention, then by printing you're not allowing the client code to remain faithful to its own strategy. Thus, using this strategy you impose your convention to all related code.
On the other hand, if you follow the "cleaner" solution of returning a reserved value and the client code wants to follow the printing convention, the client code can easily adapt to what you return and even print an error, by making simple wrappers around your functions. Thus, using this strategy you give the users of your code enough space to choose the strategy that best works for them and to be faithful to it.
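A sketch of such a wrapper, reusing the version-2 GetData from the question (seqList, StateType and ERROR are the question's types, assumed to be in scope; the wrapper name is mine):
#include <stdio.h>

StateType GetData(seqList *L, int index, int *data); /* from the question */

int GetDataOrComplain(seqList *L, int index, int *data)
{
    if (GetData(L, index, data) == ERROR) {
        fprintf(stderr, "no element at index %d\n", index);
        return -1;
    }
    return 0;
}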
It is always best if code deals with one thing only. It is easier to understand, it is easier to use, and is applicable in more instances.
Your GetData function that prints an error message isn't suitable for use in cases where there may not be a value, i.e. where the calling code wants to try to get a value and handle the error by using a default if it doesn't exist.
Since GetData doesn't know the context, it can't report a good error message. As an example, higher up the call stack we can report "hey, you forgot to give this user an age", whereas in GetData all it knows is that we couldn't get some value.
What about a multithreaded situation? GetData seems like something that might get called from multiple threads. With a random bit of I/O shoved in the middle, it will cause contention over who has access to the console if all the threads need to write at the same time.

What should I return for errors in my functions in C?

Currently I'm returning -1 in my custom functions in C if something wrong happens and 0 for success. For instance, working with a linked list and some function needs a non-empty list to work properly. If the list passed as argument is empty, I return -1 (error) and 0 if it's not empty and the function worked without a problem.
Should I, maybe, return 1 instead of -1?
Is this the standard way of doing things in C or do you recommend a different approach?
Return a non-zero value to indicate failure. This way you can write your function calls like so:
if (func_call())
{
    doErrorHandling();
}
This convention will allow you to use any non-zero value to indicate a specific error, which lets you use one variable in a uniform fashion. So the body of the if shown in the example above can then have a switch statement to process the specific errors.
You can do it differently -- but if you choose to do so, stick to a convention -- the Win32 API (and other APIs I have used) unfortunately mix and match conventions.
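A sketch of that pattern (the error names and the stand-in func_call are mine):
#include <stdio.h>

enum { OK = 0, ERR_EMPTY, ERR_NOMEM }; /* hypothetical error codes */

static int func_call(void)             /* stand-in for a real call */
{
    return ERR_EMPTY;
}

int main(void)
{
    int rc = func_call();
    if (rc) {                           /* any non-zero value is an error */
        switch (rc) {
        case ERR_EMPTY:
            fprintf(stderr, "list is empty\n");
            break;
        case ERR_NOMEM:
            fprintf(stderr, "out of memory\n");
            break;
        default:
            fprintf(stderr, "unknown error %d\n", rc);
        }
    }
    return rc;
}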
That sounds fine. -1 is used in I/O functions because a positive return value usually means success and is the number of bytes that were processed. If you have multiple ways a function can go wrong, then you can either return different integers or set a global error variable (errno is used by the standard library) to contain the error code.
In terms of style, I prefer not to return status codes as it means my functions can't (cleanly) return anything else. Instead I would check the input before calling the function. But this is subjective and depends on the context.
I suggest that you find a way to format a fully informative, human-readable string at the point of the error, where all information is still available, and design a way to get it through the rest of the program, to the user, his mobile phone, and your development team for analysis.
This seems to me like an important feature of any design, if you want to produce better software faster. Killing error information on the way is a major crime, and the C language is no excuse.
There are many schemes - but whatever you do, do it consistently!
If you don't have many failing conditions, just use 0 for error. That's because error tests are written in a simple way:
if (!list_insert(...)) { handle_error; }
Otherwise, below-zero answers are good to use along with normal answers >= 0. You can use this for functions like list length, which in normal conditions will never be negative. Or use it if you want many error codes (-1 - nonexistent, -2 - not found, -3 - ..., ...).
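A sketch of that convention (the list types and error codes are illustrative):
struct node { struct node *next; int value; };
struct list { struct node *head; };

/* ">= 0 is the answer, < 0 is an error code" - names are illustrative */
enum { ERR_NULL_LIST = -1, ERR_CORRUPT = -2 };

int list_length(const struct list *l)
{
    if (l == NULL)
        return ERR_NULL_LIST;

    int n = 0;
    for (const struct node *p = l->head; p != NULL; p = p->next)
        n++;
    return n; /* a genuine length is never negative */
}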
