Error checking fprintf when printing to stderr - c

According to the docs, fprintf can fail and will return a negative number on failure. There are clearly many situations where it would be useful to check this value.
However, I usually use fprintf to print error messages to stderr. My code will usually look something like this:
rc = foo();
if (rc) {
    fprintf(stderr, "An error occurred\n");
    /* Sometimes stuff will need to be cleaned up here */
    return 1;
}
In these cases, is it still possible for fprintf to fail? If so, is there anything that can be done to display the error message somehow, or is there a more reliable alternative to fprintf?
If not, is there any need to check fprintf when it is used in this way?

The C standard says that the file streams stdin, stdout, and stderr shall be connected somewhere, but it doesn't specify where, of course.
C11 §7.21.3 Files ¶7:
At program startup, three text streams are predefined and need not be opened explicitly -- standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
It is perfectly feasible to run a program with the standard streams redirected:
some_program_of_yours >/dev/null 2>&1 </dev/null
Your writes will succeed - but the information won't go anywhere. A more brutal way of running your program is:
some_program_of_yours >&- 2>&- </dev/null
This time, it has been run without open file streams for stdout and stderr — in contravention of the standard. It is still reading from /dev/null in the example, which means it doesn't get any useful data input from stdin.
Many a program doesn't bother to check that the standard I/O channels are open. Many a program doesn't bother to check that the error message was successfully written. Devising a suitable fallback as outlined by Tim Post and whitey04 isn't always worth the effort. If you run the ls command with its outputs suppressed, it will simply do what it can and exit with a non-zero status:
$ ls; echo $?
gls
0
$ ls >&- 2>&-; echo $?
2
$
(Tested on RHEL Linux.) There really isn't a need for it to do more. On the other hand, if your program is supposed to run in the background and write to a log file, it probably won't write much to stderr, unless it fails to open the log file (or spots an error on the log file).
Note that if you fall back on syslog(3), you have no way of knowing whether your calls were successful: the syslog functions return no status information. You just have to assume that they worked. Syslog is therefore your last resort.
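For illustration, a minimal sketch of that last resort (the program name "myprog" is made up; note there is nothing to check):

#include <syslog.h>

int main(void) {
    /* openlog() and syslog() return void: there is no status to check */
    openlog("myprog", LOG_PID | LOG_CONS, LOG_USER);
    syslog(LOG_ERR, "cannot open log file: %m");  /* %m expands to strerror(errno) */
    closelog();
    return 0;
}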

Typically, you'd employ some kind of logging system that could (try to) handle this for you, or you'll need to duplicate that logic in every area of your code that prints to standard error and exits.
You have some options:
If fprintf fails, try syslog.
If both fail, try creating a 'crash.{pid}.log' file that contains information that you'd want in a bug report. Check for the existence of these files when you start up, as they can tell your program that it crashed previously.
Give network-connected users a configuration option that allows your program to submit an error report.
Incidentally, open(), read(), and write() are good friends to have when the fprintf family of functions isn't working.
As whitey04 says, sometimes you just have to give up and do your best to not melt down with fireworks going off. But do try to isolate that kind of logic into a small library.
For instance:
best_effort_logger(LOG_CRIT, "Heap corruption likely, bailing out!");
is much cleaner than a series of if / else if blocks at every place things could possibly go wrong.
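A rough sketch of such a helper, assuming the fallback cascade from the list above (best_effort_logger is the hypothetical name from the example; vsyslog is a common extension, not standard C):

#include <stdarg.h>
#include <stdio.h>
#include <syslog.h>
#include <unistd.h>

/* Try stderr first; fall back to syslog and a crash file. A policy
   sketch, not a guarantee: syslog itself reports no status. */
void best_effort_logger(int priority, const char *fmt, ...) {
    va_list ap;

    va_start(ap, fmt);
    int rc = vfprintf(stderr, fmt, ap);
    va_end(ap);
    if (rc >= 0 && fputc('\n', stderr) != EOF)
        return;                            /* stderr worked */

    va_start(ap, fmt);
    vsyslog(priority, fmt, ap);            /* no way to know if this worked */
    va_end(ap);

    char name[64];
    snprintf(name, sizeof name, "crash.%ld.log", (long)getpid());
    FILE *fp = fopen(name, "a");           /* the crash.{pid}.log idea above */
    if (fp != NULL) {
        va_start(ap, fmt);
        vfprintf(fp, fmt, ap);
        va_end(ap);
        fputc('\n', fp);
        fclose(fp);
    }
}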

You could put the error on stdout or somewhere else... At some point you just have to give error reporting a best effort and then give up.
The key is that your app handles it "gracefully" (e.g. the OS doesn't have to kill it for misbehaving, and it tells you why it exited, if it can).

Yes, of course fprintf to stderr can fail. For instance stderr could be an ordinary file and the disk could run out of space, or it could be a pipe that gets closed by the reader, etc.
Whether you should check an operation for failure depends largely on whether you could achieve better program behavior by checking. In your case, the only conceivable things you could do on failure to print the error message are try to print another one (which will almost surely also fail) or exit the program (which is probably worse than failing to report an error, but perhaps not always).
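To make that concrete, a minimal sketch of what checking buys you (the exit-on-failure policy here is just one possible choice):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    if (fprintf(stderr, "An error occurred\n") < 0) {
        /* The message went nowhere, and there is no better channel
           left; either bail out or carry on unreported */
        exit(EXIT_FAILURE);
    }
    return 0;
}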

Some programs that really want to log error messages will set up an alternate stack at program start-up to reserve some amount of memory (see sigaltstack(2)) that can be used by a signal handler (usually for SIGSEGV) to report errors. Depending upon the importance of logging your error, you could investigate using alternate stacks to pre-allocate some chunk of memory. It might not be worth it :) but sometimes you'd give anything for some hint of what happened.
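A minimal sketch of that setup, assuming a POSIX system (the handler sticks to async-signal-safe calls such as write(2)):

#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void on_segv(int sig) {
    (void)sig;
    /* Only async-signal-safe functions belong here: write(2), _exit(2) */
    static const char msg[] = "fatal: SIGSEGV caught\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void) {
    /* Reserve a separate stack so the handler can run even if the
       main stack is exhausted or corrupted */
    stack_t ss;
    ss.ss_sp = malloc(SIGSTKSZ);
    ss.ss_size = SIGSTKSZ;
    ss.ss_flags = 0;
    if (ss.ss_sp == NULL || sigaltstack(&ss, NULL) != 0)
        return 1;

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sa.sa_flags = SA_ONSTACK;   /* run the handler on the alternate stack */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    /* ... the program proper ... */
    return 0;
}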

Related

Logging in C printing to both stdout and stderr without duplicates

I've implemented my own message logging functions for my C command-line program. I want to be able to print Info messages to stdout, Error messages to stderr, and Warning messages to both, without having duplicate messages if they output to the same location.
The Info and Error messages work fine, but for the Warning messages I have no idea how to efficiently check whether the stdout and stderr file streams point to the same output location. My code works, but I can't figure out why; logically, when stepping through the function, it should produce duplicate entries if stdout and stderr point to the same file.
I've checked: when stdout and stderr have the same output file, they still produce different memory addresses in the pointers, and fileno(stdout) and fileno(stderr) are different.
tl;dr I have code that works, but as far as I'm aware... it shouldn't. Can anyone explain why it is working, or does anyone know the correct way to solve this?
Edit: the way I'm calling the program is: myProgram >out.lis 2>out.lis
Edit 2: When calling like this it does produce duplicates: myProgram >out.lis 2>&1
My Code for the warning message:
#include <stdio.h>
#include <unistd.h>   /* isatty, fileno */

/* Warning message, a unified function for printing warning messages */
/* Warning messages are printed to both the output log and the error log (if different) */
void warningMessage(char *msg) {
    if (isatty(fileno(stderr))) {
        /* Add color if printing to terminal */
        fprintf(stderr, "\033[0;31;43mWarning: \033[0;30;43m%s\033[39;49m\r\n", msg);
    } else {
        fprintf(stderr, "\nWarning: %s\n", msg);
        if (isatty(fileno(stdout))) {
            fprintf(stdout, "\033[0;31;43mWarning: \033[0;30;43m%s\033[39;49m\r\n", msg);
        } else {
            fprintf(stdout, "\nWarning: %s\n", msg);
        }
    }
}
Any other pointers about my code would be helpful as well! I've only recently started learning C!
The trick is going to be fstat().
Get the fileno() for both (they should be 1 and 2 respectively, but might differ on non-Unix OSes), pass each as the first parameter to fstat(), and compare the structures filled in as the second parameter. I would expect exact matches if they output to the same place, though I could believe the timestamps might differ.
I'm afraid I can't tell you if MS-Windows has the same call or not, but it should have an equivalent.
Don't forget to flush appropriately.
Edit:
The answer by Some programmer dude notes you only need to check two fields. This is correct (except perhaps on some weird filesystems).
There are some weird things that can happen. If there are two device nodes for the same device (as in /dev/tty1 and /dev/tty1_alternate pointing to the same device) then st_ino will not match but st_rdev will. I would treat these as different, as the user is playing games with you.
It might also be good to try to check if the two opens are the same open.
(Dealing with the myprogram >out 2>out case.)
For this, you probably need to mess with some of the parameters and see if changing one changes the other. Probably the fcntl() function, using F_GETFL and F_SETFL.
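One possible probe along those lines, assuming it is acceptable to briefly toggle O_APPEND on the stream (status flags live in the open file description, so they are shared by 2>&1-style duplicates but not by two independent opens of the same file):

#include <fcntl.h>
#include <stdbool.h>

/* Heuristic: flip O_APPEND on one descriptor and see whether the
   change shows up on the other */
static bool same_open(int fd_a, int fd_b) {
    int fl_a = fcntl(fd_a, F_GETFL);
    int fl_b = fcntl(fd_b, F_GETFL);
    if (fl_a < 0 || fl_b < 0)
        return false;

    fcntl(fd_a, F_SETFL, fl_a ^ O_APPEND);       /* flip the bit */
    int fl_b_now = fcntl(fd_b, F_GETFL);
    fcntl(fd_a, F_SETFL, fl_a);                  /* restore */

    return ((fl_b ^ fl_b_now) & O_APPEND) != 0;  /* did b change too? */
}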
As mentioned in the answer by David G., you can use fstat to check where the file descriptors really write.
On a POSIX system you use the stat structure members st_dev and st_ino to find out what files are used. If these two members are equal for both stdout and stderr then you're writing to the same file.
I also suggest you make this check only once early in your program. Redirection happens only once, and you don't need to check for it every time you want to write a message.
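A minimal sketch of that check (the function name is made up; call it once at startup as suggested):

#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* True if stdout and stderr refer to the same file */
static bool outputs_coincide(void) {
    struct stat out_st, err_st;
    if (fstat(fileno(stdout), &out_st) != 0 ||
        fstat(fileno(stderr), &err_st) != 0)
        return false;   /* on error, treat them as different */
    return out_st.st_dev == err_st.st_dev &&
           out_st.st_ino == err_st.st_ino;
}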

Should we error check every call in C?

When we write C programs we make calls to malloc or printf. But do we need to check every call? What guidelines do you use?
e.g.
char error_msg[BUFFER_SIZE];
if (fclose(file) == EOF) {
    sprintf(error_msg, "Error closing %s", filename);   /* perror appends ": <reason>\n" itself */
    perror(error_msg);
}
The answer to your question is: "do whatever you want"; there is no written rule. BUT the right question is "what do users want in case of failure?"
Let me explain: if you are a student writing a test program, for example, there is no absolute need to check for errors; it may be a waste of time.
Now, if your code may be distributed or used by other people, that's quite different: put yourself in the shoes of future users. Which message would you prefer when something goes wrong with an application:
Core was generated by `./cut --output-d=: -b1,1234567890- /dev/fd/63'.
Program terminated with signal SIGSEGV, Segmentation fault.
or
MySuperApp failed to start MySuperModule because there is not enough space on the disk.
Try to free space on disk, then relaunch the app.
If this error persists contact us at support@mysuperapp.com
As has already been addressed in the comments, you have to consider two types of error:
A fatal error is one that kills your program (app / server / site / whatever it is). It renders it unusable, either by crashing or by putting it in some state whereby it can't do its useful work, e.g. memory allocation or disk space failures.
A non-fatal error is one where something messes up, but the program can continue to do what it's supposed to do, e.g. a file is not found, or the server can keep serving the other users who didn't trigger the error.
Source : https://www.quora.com/What-is-the-difference-between-an-error-and-a-fatal-error
Only do error checking if your program has to behave differently when an error is detected. Let me illustrate this with an example: assume you have used a temporary file in your program, and you use the unlink(2) system call to erase that temporary file at the end. Do you have to check whether the file was successfully erased? Let's analyse the problem with some common sense: if you check for errors, will you be able (inside the program) to do some alternate thing to cope with the failure? This is uncommon: if you created the file, it's rare that you will not be able to erase it, although something can happen in between, for example a change in directory permissions that forbids you from writing to the directory anymore. But what can you do in that case? Is there a different approach to erasing the temporary file? Probably not... so checking a possible error from the unlink(2) system call will be almost useless in that case.
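To make the unlink(2) case concrete, a minimal sketch (the file name template is hypothetical):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    char tmpname[] = "/tmp/myapp-XXXXXX";   /* hypothetical template */
    int fd = mkstemp(tmpname);
    if (fd < 0) { perror("mkstemp"); return 1; }

    /* ... use the temporary file ... */

    close(fd);
    if (unlink(tmpname) != 0)
        perror(tmpname);   /* nothing better to do than note it */
    return 0;
}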
Of course, this doesn't always apply; you have to use common sense while programming. Errors about writing to files should always be considered, as they usually come down to access permissions or a full filesystem (in which case even trying to generate a log message can be useless, since you have filled your disk... or not; that depends). You don't always know the precise environment details needed to decide whether a full-filesystem error can be ignored. Or suppose your program has to connect to a server: should a connect(2) system call failure be acted upon? Probably most of the time; at the very least a message with the cause of the failure should be given to the user. Assuming everything goes OK can save you time in a prototype, but in production programs you have to cope with whatever can happen.
When you want to use the return value of a function, it is suggested to check that value before using it.
For example, a function that returns a pointer can also return NULL, so it is suggested to keep a NULL check before using it.

How are files written? Why do I not see my data written immediately?

I understand the general process of writing and reading from a file, but I was curious as to what is happening under the hood during file writing. For instance, I have written a program that writes a series of numbers, line by line, to a .txt file. One thing that bothers me, however, is that I don't see the information written until after my C program has finished running. Is there a way to see the information written while the program is running rather than after? Is this even possible to do? This is a hard question to phrase in one line, so please forgive me if it's already been answered elsewhere.
The reason I ask this is because I'm writing to a file and was hoping that I could scan the file for the highest and lowest values (the program would optimally be able to run for hours).
Research buffering and caching.
There are a number of layers of optimisation performed by:
your application,
your OS, and
your disk driver,
in order to extend the life of your disk and increase performance.
With the careful use of flushing commands, you can generally make things happen "quite quickly" when you really need them to, though you should generally do so sparingly.
Flushing can be particularly useful when debugging.
The GNU C Library documentation has a good page on the subject of file flushing, listing functions such as fflush which may do what you want.
You observe an effect solely caused by the C standard I/O (stdio) buffers. I claim that any OS or disk driver buffering has nothing to do with it.
In stdio, I/O happens in one of three modes:
Fully buffered: data is written once BUFSIZ (from <stdio.h>) characters have accumulated. This is the default when I/O is redirected to a file or pipe, and it is what you observe. Typically BUFSIZ is anywhere from 1 KiB to several KiB.
Line buffered: data is written once a newline is seen (or BUFSIZ is reached). This is the default when I/O goes to a terminal.
Unbuffered: data is written immediately.
You can use the setvbuf() (<stdio.h>) function to change the default, using the _IOFBF, _IOLBF or _IONBF macros, respectively. See your friendly setvbuf man page.
In your case, you can set your output stream (stdout or the FILE * returned by fopen) to line buffered.
Alternatively, you can call fflush() on the output stream whenever you want I/O to happen, regardless of buffering.
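A short sketch of both options, assuming the original example of writing numbers to a file (the file name is made up):

#include <stdio.h>

int main(void) {
    FILE *fp = fopen("numbers.txt", "w");
    if (fp == NULL)
        return 1;

    /* Line buffered: every '\n' pushes the data out to the OS,
       so another terminal can watch the file grow */
    setvbuf(fp, NULL, _IOLBF, BUFSIZ);

    for (int i = 0; i < 100; i++) {
        fprintf(fp, "%d\n", i);
        /* Or leave the default buffering and flush explicitly: */
        /* fflush(fp); */
    }

    fclose(fp);
    return 0;
}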
Indeed, there are several layers between the writing functions and the actual file.
First, you open the file for writing. This causes the file to be either created or truncated. When you then write, the write doesn't actually occur immediately; the data is buffered until the buffer is full or the file is flushed or closed.
You can call fflush() after writing each portion of data, or you can simply wait until the file is closed.
Yes, it is possible to see what's written to the file(s). If you program under Linux, you can open a new terminal and watch the progress with, for example, "less Filename".

C Language: How is it possible for your program to continue running for a little bit after an "assert()" has failed?

I am currently (don't ask why :P) implementing my own versions of malloc() and free(), and have intentionally placed an assert(0) at the first line of free() for current debugging purposes.
A driver program is testing a random sequence of these malloc() and free() to test the correctness of my implementations.
When I run the driver, however, the shell prints "Assertion '0' failed", the program keeps running for a little bit longer, and then "Aborted" is printed. In fact, it looks like the program is even able to call malloc() several times between reporting the assertion failure and finally reporting that it has aborted. I am sure of this because of certain printf statements I have placed in the code to print out certain variables for debugging purposes.
I am not asking for any help at all with implementing malloc() and free(). I would just like to know what it means when a program seems to continue running for a short time (possibly even calling other user-defined functions) after an assertion has been reported to fail.
If you're seeing 'assertion failed', followed by debugging prints, followed by an exit, there are two obvious possibilities.
One is that the assertion message and the debugging prints are going into two different buffered output streams (e.g. stderr and stdout) that are not getting flushed in the same order they are filled.
Another is that multiple threads of execution are hitting malloc().
If you're on a glibc-based system, the issue is probably that fprintf calls malloc internally, and assert in turn uses fprintf to print the assertion failure message. This of course is a very bad design, as printing error messages from out-of-memory conditions will always fail (among many other problems), but that's how it is...

Retrieving stdin after using the redirection operator <

For a programming assignment, we have the following requirements:
It needs to be a command-line program written in C.
It needs to read text from a text document. However, we are to do this by using the Unix redirection operator < when running the program rather than having the program load the file itself. (So the program reads the text by pretending it's reading from stdin.)
After reading the data from the file, the program is to poll the user for some extra information before doing its job.
After much research, I can't find a way to retrieve the "old" stdin in order to accomplish part (3). Does anybody know how or if this is even possible?
Technically part (3) is part of a bonus section, which the instructor probably didn't implement himself (it's very lengthy), so it's possible that this is not possible and it's an oversight on his part. However, I certainly don't want to jump to this conclusion.
On Linux, I would open the controlling terminal, /dev/tty.
Which OS? On Linux the usual trick to accomplish this is to check if stderr is still connected to a tty:
if (isatty(2))
and if so, open a new reading file descriptor to that terminal:
new_stdin = open("/proc/self/fd/2", O_RDONLY);
then duplicate the new file descriptor to stdin (which closes the old stdin):
dup2(new_stdin, 0);
(If stderr has also been redirected, then isatty(2) will return false and you'll have to give up.)
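Putting those steps together, a sketch (the function name is made up; /proc makes this Linux-specific):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Point fd 0 back at the terminal so the user can be prompted.
   Returns -1 if stderr was redirected too. */
static int reattach_stdin(void) {
    if (!isatty(STDERR_FILENO))
        return -1;

    int fd = open("/proc/self/fd/2", O_RDONLY);
    if (fd < 0)
        return -1;

    dup2(fd, STDIN_FILENO);   /* closes the old stdin */
    close(fd);
    return 0;
}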
If you run the program like this:
myprog 3<&0 < filename
then you get file descriptor 3 set up for you as a duplicate of stdin. I don't know if this meets the requirements of your assignment, but it might be worth an experiment.
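For example, a small program that reads its data from the redirected stdin and then polls the user through fd 3 might look like this (a sketch; the prompt and buffer size are arbitrary):

#include <stdio.h>

int main(void) {
    /* stdin now reads from `filename`; fd 3 is the original stdin */
    FILE *user = fdopen(3, "r");
    if (user == NULL) { perror("fdopen"); return 1; }

    /* ... read the file's data from stdin ... */

    printf("Extra information? ");
    fflush(stdout);
    char answer[128];
    if (fgets(answer, sizeof answer, user) != NULL)
        printf("got: %s", answer);
    return 0;
}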
