Casting to void doesn't remove warn_unused_result error

In a test, I'm discarding anything from stderr since it clutters the output of the test case. I'm using the following code:
freopen("/dev/null", "w", stderr);
When compiling with -Wall -Werror, I get the error
error: ignoring return value of ‘freopen’, declared with attribute warn_unused_result
which is expected. However, the usual solution of casting to void doesn't seem to work. That is, changing the code to
(void) freopen("/dev/null", "w", stderr);
still produces the same warning. I don't care if this function fails since the worst case scenario is a bit of extra output. Any other way I can fix this?
EDIT: I know I could introduce an extra unnecessary variable. I would really like to know why casting to void doesn't work.
UPDATE:
I decided to go with this:
FILE *null = fopen("/dev/null", "w");
if (null) { fclose(stderr); stderr = null; }
After reading the freopen documentation more carefully, I see that if opening /dev/null fails, stderr will still be destroyed. This solves that problem.

A little heavy on the GCC extensions, but no externally visible variables:
#define ignore_result(x) ({ typeof(x) z = x; (void)sizeof z; })
ignore_result(freopen("/dev/null", "w", stderr));

Why not simply use the result, as the warning suggests you should?
if (freopen("/dev/null", "w", stderr) == 0)
...oops...lost stderr...hard to report errors...
Since the function is declared with the 'warn_unused_result' attribute, you will get the warning unless you use the return value. Since the function either returns null on failure or the file stream argument on success, you might think about assigning the result. However, you should not assign to stderr like that (see below), so this is a bad idea:
stderr = freopen("/dev/null", "w", stderr);
Theoretically, you should make that check; there are dire (and implausible) circumstances under which you could fail to open "/dev/null".
Footnote 229 in the C99 standard notes:
229) The primary use of the freopen function is to change the file associated with a standard text stream
(stderr, stdin, or stdout), as those identifiers need not be modifiable lvalues to which the value
returned by the fopen function may be assigned.
Therefore, the assignment is ill-advised. But testing the return value would deal with the compiler warning and might help prevent core dumps too. It is unlikely to improve your code coverage figures, though (the error path is not going to be taken very often; it will be hard to force coverage of the error handling).
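For instance, a minimal sketch of checking without assigning (if the call fails, stderr itself may no longer be usable, so the report goes to stdout):
if (freopen("/dev/null", "w", stderr) == NULL)
{
    /* stderr is gone either way; report on stdout instead */
    fprintf(stdout, "warning: failed to redirect stderr\n");
}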
Note that the POSIX description of freopen() has some moderately caustic comments about the design of freopen(), which was invented by the C standard committee (1989 version), presumably without input from POSIX.

int tossmeout = freopen("/dev/null", "w", stderr);
As the comments point out, freopen returns a FILE *, so try
FILE *tossmeout = freopen("/dev/null", "w", stderr);
and
(void *)freopen("/dev/null", "w", stderr);

If you really have to use the C language (not C++) then you may use this workaround:
static inline void ignore_result_helper(int __attribute__((unused)) dummy, ...)
{
}
#define IGNORE_RESULT(X) ignore_result_helper(0, (X))
For example
typedef struct A
{
    int x;
} A;

__attribute__((warn_unused_result)) A GetA()
{
    A const a = { 0 };  /* initialize; returning an indeterminate value is undefined */
    return a;
}

int main()
{
    IGNORE_RESULT(GetA());
    return 0;
}

Related

Checking stdin and stdout using external function

Does scope impact checking for errors while obtaining input from stdin or outputting to stdout? For example, if I have a code body built in the following way:
void streamCheck(){
    if (ferror(stdin)){
        fprintf(stderr, "stdin err");
        exit(1);
    }
    if (ferror(stdout)){
        fprintf(stderr, "stdout err");
        exit(2);
    }
}

int main(){
    int c = getchar();
    streamCheck();
    ...
    putchar(c);
    streamCheck();
}
are the return values of ferror(stdin) / ferror(stdout) impacted by the fact that I am checking them in a function rather than in the main? If there is a better way to do this also let me know I am quite new to C.
As long as you call ferror on a particular stream before calling any other function on that stream you should be fine.
It doesn't matter that ferror is being called from a different function than the one getchar or putchar was called from.
There is no problem in your function. ferror() checks the error indicator of the FILE * that is passed as argument. In other words, the error indicator is a property of the file object and is directly obtainable from the FILE * pointer. Therefore, no matter where you call ferror() from, it will be able to determine if an error happened with the FILE * that is passed as argument (that is, of course, if the argument is valid).
are the return values of ferror(stdin) / ferror(stdout) impacted by the fact that I am checking them in a function rather than in the main?
The return value of ferror() is characteristic of the current state of the stream that you provide as an argument. At any given time, the stdin provided by stdio.h refers to the same stream, with the same state, in every function, including main(). Therefore, you will obtain the same result by calling the ferror() function indirectly, via an intermediary function, as you would by calling it directly.
NEVERTHELESS, the approach you present in your example is poor. For the most part, C standard library functions indicate whether an error has occurred via their return values. In particular, getchar() returns a special value, represented by the macro EOF, if either the end of the file is encountered or an error occurs. This is typical of the stdio functions. You should consistently test functions' return values to recognize when exceptional conditions have occurred. For stream functions, you should call ferror() and / or feof() only after detecting such a condition, and only if you want to distinguish between the end-of-file case and the I/O error case (and the "neither" case for some functions). See also "Why is 'while ( !feof (file) )' always wrong?"
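For illustration, a minimal sketch of that pattern: test the read function's return value first, then use ferror()/feof() to classify why the loop stopped.
int c;
while ((c = getchar()) != EOF) {
    /* process c */
}
if (ferror(stdin)) {
    /* stopped because of an I/O error */
} else {
    /* stopped because end of input was reached */
}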
Personally, I probably would not write a generic function such as your streamCheck() at all, as error handling is generally situation specific. But if I did write one, then I'd certainly have it test just one stream that I specify to it. Something like this, for example:
void streamCheck(FILE *stream, const char *message, int exit_code) {
    if (ferror(stream)) {
        fputs(message, stderr);
        exit(exit_code);
    }
}

int main(void) {
    int c = getchar();
    if (c == EOF) {
        streamCheck(stdin, "stdin err\n", 1);
    }
    // ...
    if (putchar(c) == EOF) {
        streamCheck(stdout, "stdout err\n", 2);
    }
}

Calling yyrestart function in bison with no arguments causing sigsev on El Capitan

I'm curious about the function signature for yyrestart - namely in the lexer file I see that the signature is:
void yyrestart (FILE * input_file )
In my code I use yyrestart to flush the buffer, but I haven't been passing it any argument, it's just been empty:
yyrestart();
Which is currently working on every system we test on except for the latest version of OS X. Stepping through with GDB, it's clear on my RHEL machine that just calling with no argument sets the file pointer to NULL:
yyrestart (input_file=0x0) at reglexer.c:1489
Whereas on El Capitan it comes through as garbage, which is causing the mem error later in generated code:
yyrestart (input_file=0x100001d0d) at reglexer.c:1489
I can't for the life of me figure out where yyrestart() is defined. Is there some macro in yacc/flex that defines the behavior for calling yyrestart with no arguments? If not, how is this even compiling?
*********** EDIT to Clarify the Compiling Question ************
As a small snippet to see what I'm talking about - this is what I have in my .y file, which is executing the parser (this is a SLIGHT modification of this example):
int main() {
    FILE *myfile = fopen("infile.txt", "r");
    if (!myfile) {
        fprintf(stderr, "can't open infile.txt\n");
        return 1;
    }
    calcYYin = myfile;
    do {
        calcYYparse();
    } while (!feof(calcYYin));
    calcYYrestart();
    return 0;
}
I can build that repository with whatever I want passed in as arguments to calcYYrestart() on that line. Substituting
calcYYrestart('a', 1, 5, 'a string');
still lets me compile the entire program using make (but a get a segv with bad input). But looking through the generated parcalc.c file, I don't see anything that would allow me to call calcYYrestart with anything except for a file pointer. I only see this as the prototype:
void calcYYrestart (FILE * input_file );
Where's the magic happening with the compiler that lets me put whatever I want as arguments to that generated function?
You are expecting C to gently lead you through the maze, holding your hand, chiding you when you err and applauding your successes.
These may not be unreasonable expectations for a language, but C is not that language. C does what you tell it to do, nothing more, and when your instructions fall short of clarity, it simply lets you fall.
Although, in its defense, you can ask it to be a bit more verbose. If you specify -Wall on the command line (at least with gcc and clang), the compiler will provide you with some warnings. [See note 1.]
In this case, it probably would have warned you that calcYYrestart was not declared, which would make it your responsibility to get the arguments right. The function is declared and defined in the lexer, but here you are using it in the parser, which is a separate compilation unit. You really should declare it in the parser prologue, but nothing will enforce the correctness of that declaration. (C++ would fail to link in that case, but C does not record argument types in the formal function name.)
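For instance, a minimal sketch of such a declaration in the parser prologue (assuming the calcYY prefix this project uses):
%{
#include <stdio.h>
/* Must match the prototype flex generates for this scanner. */
void calcYYrestart(FILE *input_file);
extern FILE *calcYYin;
%}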
It's worth noting that there are many problems with the sample code you are basing your work on. I'd suggest looking for a better bison/flex tutorial, or at least reading through the sections in the flex manual about how input is handled.
Here, I've added some annotations to the original example, which shows the calc.y bison input file:
/* This is unnecessary, since `calcYYparse` is defined in this file.
extern int calcYYparse();
*/
extern FILE *calcYYin;

/* Command line arguments are always good */
int main(int argc, char** argv) {
    /* If there is an argument, use it. Otherwise, stick with stdin */
    /* There is no need for a local variable. We can just use yyin */
    if (argc > 1) {
        calcYYin = fopen(argv[1], "r");
        if (!calcYYin) {
            fprintf(stderr, "can't open infile.txt\n");
            return 1;
        }
    }
    /* calcYYin = myfile; */
    /* This loop is unnecessary, since yyparse parses input until it
     * reaches EOF, unless it hits an error. And if it hits an error, it
     * will call calcYYerror (below), which in turn calls exit(1), so it
     * never returns.
     */
    /* do { */
    calcYYparse();
    /* } while (!feof(calcYYin)); */
    return 0;
}

void calcYYerror(const char* s) {
    fprintf(stderr, "Error! %s\n", s);
    /* Valid arguments to `exit` are 0 and small positive integers. */
    exit(EXIT_FAILURE);
}
Of course, you probably don't want to just blow up the world if you hit a syntax error. The intention was probably to discard the rest of the line and then continue the parse. In that case, for obvious reasons, calcYYerror should not call exit().
By default, after yyerror is called, yyparse returns immediately (after cleaning up its local storage) with an error indication. If you want it to instead continue, then you need to use an error production, which would be the best solution.
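A sketch of such an error production (the rule names here are hypothetical): on an error, tokens are discarded until a newline is seen, and yyerrok then resumes normal parsing.
line: expr '\n'
    | error '\n'    { yyerrok; }
    ;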
You could also simply call yyparse again, as in the example. However, that leaves an unknown amount of the input file in the flex buffer. There is no reason to believe that the buffer contains exactly the rest of the line in error. Since flex scanners typically read their input in large chunks (except for interactive input), resetting the input file with yyrestart will discard a random amount of input, leaving the input file pointer at a random position in the file, which probably does not correspond with the beginning of a new line.
Even if that were not the case, as with unbuffered (interactive) input, it is entirely possible that the error was detected at the end of a line, in which case the new line will already have been consumed. So discarding to the end of the current line will result in discarding the line following the error.
Finally, the use of feof(input) to terminate input loops is a well-known antipattern, and should be avoided in favour of terminating when an EOF is encountered while reading input. In the case of flex-generated scanners, when EOF is detected, the current input is discarded, and then (if yywrap doesn't succeed in creating a new input), the END indication is returned to the parser. By then, yyin is no longer valid (because it was discarded), and calling feof on it is undefined behaviour.
Notes
You get even more warnings by also specifying -Wextra. And you can make the compiler a little stricter by telling it to use the latest standard, -std=c11, instead of the 1989 version augmented with various gcc extensions, mostly now outdated.
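For instance, with the file names from this question, the build line might look like this (a sketch; the output name is made up):
gcc -Wall -Wextra -std=c11 -o calc parcalc.c reglexer.c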

Why gcc does not give a warning or error when invalid mode is used in fopen()?

I am working through some practice questions on FILE IO in C. Below is one of the programs.
#include<stdio.h>
#include<stdlib.h>

int main()
{
    char fname[]="poem.txt";
    FILE *fp;
    char ch;

    fp = fopen ( fname, "tr");
    if (fp == NULL)
    {
        printf("Unable to open file...\n");
        exit(1);
    }
    while((ch =fgetc(fp)) != EOF)
    {
        printf("%c",ch);
    }
    printf("\n");
    return 0;
}
As you can see in the statement
fp = fopen ( fname, "tr");
The mode "tr" is not a valid mode (as I understand). I was expecting gcc to give an error (or a warning) while compiling the above program. However, gcc does not give any error (or warning) while compiling it.
However, as expected, when I run the program it exits printing "Unable to open file...", which means fopen() returned NULL because there was an error while opening the file.
-bash-4.1$ ./a.out
Unable to open file...
-bash-4.1$
(The file poem.txt exists, so this is because of the invalid mode given to fopen(). I checked by changing the mode to "r", and it works fine, displaying the content of "poem.txt".)
-bash-4.1$ ./a.out
THis is a poem.
-bash-4.1$
I was expecting gcc to give an error (or warning) message for the invalid mode.
Why does gcc not give any error (or warning) for this?
The compiler doesn't check what you do; it only checks the syntax.
However, at run time, if the code is written like so:
#include<stdio.h>
#include<stdlib.h>

int main()
{
    char fname[]="poem.txt";
    FILE *fp;
    int ch;   /* fgetc() returns an int, so EOF can be distinguished from a character */

    fp = fopen ( fname, "tr");
    if (fp == NULL)
    {
        perror( "fopen for poem.txt failed");
        exit( EXIT_FAILURE );
    }
    while((ch = fgetc(fp)) != EOF)
    {
        printf("%c",ch);
    }
    printf("\n");
    return 0;
}
then a proper error message is output:
...$ ./untitled
fopen for poem.txt failed: Invalid argument
This is Undefined Behavior:
Per Annex J.2 "Undefined Behavior", it is UDB if:
—The string pointed to by the mode argument in a call to the fopen function does not exactly match one of the specified character sequences (7.19.5.3).
Although Annex J is informative, looking at §7.19.5.3:
/3 The argument mode points to a string. If the string is one of the following, the file is open in the indicated mode. Otherwise, the behavior is undefined.
Basically, the compiler can blow you off here: a standard library function name (and behavior) can be used outside of the inclusion of a standard header (for example, with non-standard extensions or completely user-defined behavior). The Standard specifies what a conforming library implementation shall include and how it shall behave, but it does not require you to use that standard library, nor does it define behavior for what it explicitly marks as undefined territory. At that point, if your parameter types match, it's a legal function call.
A really good lint utility might help you here.
How is the compiler supposed to know what the valid arguments for a function are?
To do it you'd be building too much knowledge in the compiler - it would have to recognize functions and their parameters by name. What if you want to override the function? What if different modes are valid on different platforms?
In Windows programming, "tr" is not a valid mode, although "rt" is. The t means text and the r means read. (If you are using gcc and linking to MS's C runtime then you will be able to use this.)
However you still don't see t very often because it is the default and therefore redundant; the other option for this setting is b meaning binary. But MS do seem to explicitly use t in their examples to make it clear that translation is intended.
The behaviour of text mode and binary mode for a stream is implementation-defined, although the intent is that binary mode reads the characters exactly as they appear on disk, and text mode may perform translations relevant to text processing; most famously, converting \r\n in MS text files to \n in your program.
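A small sketch of the distinction (the file names are hypothetical):
FILE *ft = fopen("notes.txt", "r");   /* text mode: on Windows, "\r\n" is read as "\n" */
FILE *fb = fopen("image.dat", "rb");  /* binary mode: bytes come through exactly as stored */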

fclose() always returns EOF

I have a function that reads integers with a certain format from a file.
It works fine as desired, but whenever I try to close the file with fclose(), it always returns EOF.
Any suggestions why? I am a student and still learning.
I have put the code below. Please let me know if you need the "processing" code. Thanks :)
// Open the file
FILE *myFile = fopen(fileName, "r");
if(myFile == NULL){
    //Handle error
    fprintf(stderr, "Error opening file for read \n");
    exit(1);
}

while(myFile != EOF)
{
    // read and process the file
    // this part works.
}

// always returns EOF here. WHY?
if (fclose(myFile) == EOF) {
    // Handle the error!
    fprintf(stderr, "Error closing input file.\n");
    exit(1);
}
printf("Done reading the file.");
EDIT:
Thank you for all the replies. Sorry I cannot post the code as this is part of my homework. I was trying to get some help, I am not asking you guys to make the code for me. Posting code is illegal according to my Prof (since other students can see and probably copy, that's what he told me). I can only post the code after Sunday. For now, I will try to modify my code and avoid using fscanf. Thanks and my apology.
This:
while(myFile != EOF)
is actually illegal (a constraint violation). Any conforming C compiler is required to issue a diagnostic; gcc, by default, merely issues a warning (which does qualify as a "diagnostic").
If gcc gave you a warning, you should pay attention to it; gcc often issues warnings for things that IMHO should be treated as fatal errors. And if it didn't give you a warning, you're probably invoking it with options that disable warnings (which is odd, because it does produce that warning by default). A good set of options to use is -Wall -Wextra -std=c99 -pedantic (or adjust the -std=... option to enforce a different version of the standard if you like).
myFile is of pointer type, specifically FILE*. EOF is of type int, and typically expands to (-1). You cannot legally compare a pointer value to an integer value (except for the special case of a null pointer constant, but that doesn't apply here.)
Assuming the program isn't rejected, I'd expect that to result in an infinite loop, since myFile would almost certainly never be equal to EOF.
You could change it to
while (!feof(myFile))
but that can cause other problems. The correct way to detect end-of-file while reading from a file is to use the value returned by whatever function you're using to read the data. Consult the documentation for the function you're using to see what it returns when it encounters end-of-file or an error condition. The feof() function is useful for determining, after you've finished reading, whether you encountered end-of-file or an error condition.
Since there is nothing that you can do to a regular file open in read-only-mode that would cause a fclose to error out, you very probably have a bug in the code you didn't show which is stomping on the myFile structure.
Also, the test myFile != EOF will never be true, because you set it to the return of fopen, which will never give you EOF, and you already checked it for NULL. Did you mean something like:
while((c = fgetc(myFile)) != EOF) {
    // do stuff
}
What does errno say? Add this to your code:
#include <errno.h>
#include <string.h>
if (fclose(myFile) == EOF) {
    // Handle the error!
    fprintf(stderr, "Error closing input file. errno = %d, error = %s\n", errno, strerror(errno));
    exit(1);
}
Hope this helps.
Regards.
while(myFile != EOF)
{
    // read and process the file
    // this part works.
}
If this part is OK, then
fclose(myFile);
will definitely return EOF, because the while loop only terminates when myFile == EOF. That is a wrong comparison (a pointer against an int; do not ignore warnings). EOF is a macro defined in stdio.h, and according to glibc it is -1. If your loop terminated, that means your myFile pointer was changed to EOF somewhere; this is your mistake. Go through your code: you must be overwriting the FILE pointer myFile, which should not be modified, as it points to the file structure used for all file operations.
NOTE
myFile, which is a pointer to a file, should not be used as an lvalue in any assignment statement.
Change while(myFile != EOF){...} to while(!feof(myFile)){...}.
myFile is a pointer to a FILE struct (a memory address), not an "end of file" indicator.

C write() function not working

I am trying to write into a file, but it is not working. I can open the file, but when writing to it with the write function, the output goes to stdout instead, and the content of the file I opened remains unchanged.
#include<stdio.h>
#include<sys/file.h>
#include<sys/types.h>
#include<sys/stat.h>
#include<limits.h>
#include<fcntl.h>
#include<stdlib.h>
#include<sys/uio.h>
main() {
    char fn[30];
    int fd,i=0;
    int actualbytes,bytesstored;
    char buffer[100];

    printf("\nEnter the filename with path");
    scanf("%s",fn);

    if(fd=open(fn,O_WRONLY|O_CREAT,S_IWUSR|S_IWUSR)<0)
    {
        perror("open");
        exit(0);
    }
    else
    {
        write(stdout,"\n\nEnter the contents for the file\n");
        write(stdout,"press CTRl+D at the end of the file\n\n");
        fflush(stdout);

        while((buffer[i]=getc(stdin))!=EOF) i++;
        buffer[i]='\0';
        bytesstored=sizeof(buffer);

        if(actualbytes=write(fd,buffer,bytesstored)<0)
        {
            perror("write");
            exit(0);
        }
        else
        {
            write(stdout,"\n\nfile is opened successfully");
            write(stdout,"\nThe contents are written");
            fflush(stdout);
        }

        if(close(fd)<0)
        {
            perror("close");
            exit(0);
        }
        else
            printf("\nfile is closed");
    }
}
< has higher precedence than =.
if((fd=open(fn,O_WRONLY|O_CREAT,S_IWUSR|S_IWUSR))<0)
There are a lot of problems with this code. Make sure you enable warnings on your compiler, it should complain about quite a few things:
write() is in unistd.h. You're not including that, so your program cannot be correct.
Once you include that, you'll notice (with warnings enabled), that you're calling it incorrectly at least 5 times: stdout is not a file descriptor, it's a FILE*.
Use the printf() family of functions to print things on the console.
Second big problem is your if statements that have assignments in them.
if (a = b < 0) { ... }
is equivalent to:
if (a = (b < 0)) { ... }
So it's not doing what you think it is. You need to use parentheses:
if ((fd = open(...)) < 0) { ... }
Note: you're always writing the full buffer to the file. Not all of it has been initialized. That doesn't sound like what you're after. Try only writing the data that you've read (you have that stored in i).
Please note, from stdin(3):
#include <stdio.h>
extern FILE *stdin;
extern FILE *stdout;
extern FILE *stderr;
stdin, stdout, and stderr are standard IO FILE * streams, for use with fprintf(3), fgets(3), and so forth.
read(2) and write(2) take file descriptors (which are represented as ints).
Keeping the C-supplied standard IO streams and the Unix-supplied file descriptors separate in your mind is vital to sane Unix programming; sorry it's complicated :) but it's well worth becoming an expert.
I suggest changing all your write(stdout,... to fprintf(stdout,....
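A brief sketch of the two flavors side by side (write and STDOUT_FILENO come from unistd.h):
fprintf(stdout, "hello via stdio\n");           /* FILE * stream, buffered */
write(STDOUT_FILENO, "hello via write\n", 16);  /* raw file descriptor 1 */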
Ah, I see Ignacio has spotted the core problem :) it's hard to put one past him.
Another issue to worry about, your scanf() call doesn't limit the length of input to the size of your buffer. Someone could overflow your buffer and scribble data of their choosing all over memory. It's not a big deal when you're learning, but this kind of bug is exactly how the first Internet worm infected some new machines, so it is well worth not making the same mistake again.
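A minimal fix sketch: give scanf a field width one less than the buffer size, leaving room for the terminating NUL.
char fn[30];
scanf("%29s", fn);  /* reads at most 29 characters plus the terminating '\0' */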
And the last issue I spotted is how you're writing out your buffer:
buffer[i]='\0';
bytesstored=sizeof(buffer);
if(actualbytes=write(fd,buffer,bytesstored)<0)
sizeof(buffer) is always going to return 100, because that is what you declared for buffer at the start of your program. So replace with this:
buffer[i++]='\0';
if((actualbytes=write(fd,buffer,i))<0)
(note the parentheses around the assignment, as discussed above)
As the others noted, there are a lot of problems with your code. Always instruct your compiler to show warnings. If you are using GCC then pass the argument -Wall to show all warnings. Now, if I do so with your code it suggests the following:
write.c:9: warning: return type defaults to ‘int’
write.c: In function ‘main’:
write.c:18: warning: suggest parentheses around assignment used as truth value
write.c:25: warning: implicit declaration of function ‘write’
write.c:34: warning: suggest parentheses around assignment used as truth value
write.c:45: warning: implicit declaration of function ‘close’
write.c:55: warning: control reaches end of non-void function
The first one means that your function main() defaults to int, but you should always state a return type. On lines 18 and 34 you need parentheses around the assignments before testing with < (as Ignacio said above). On lines 25 and 45 it can't find the prototypes for write() and close(), so you need to include the right header files. The last one means that you need to have a return statement (because the return type defaulted to int).
Just include <unistd.h> and the warnings will disappear.
