Difference between perror() and printf() - c

I have read that both perror() and printf() write to the terminal screen, but perror() writes to stderr while printf() writes to stdout. So why is perror() used to print errors when printf() can do it?

printf() cannot write to stderr. fprintf() can. perror() always does.
There is no requirement that writing to either stdout or stderr writes to a terminal screen - that is up to the implementation (since not all systems even have a terminal). There is also no requirement that writing to stdout and stderr results in writing to the same device (e.g. one can be redirected to a file, while the other is redirected to a pipe).
perror() is implemented with built-in knowledge of the meanings of error codes, represented by errno, which is used by various functions in the standard library to report error conditions. The meanings of particular values are implementation-defined (i.e. they vary between compilers and libraries).
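For illustration, a minimal sketch contrasting the three calls (the exact text perror() prints for a given errno value is implementation-defined):
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void) {
    FILE *f = fopen("/no/such/file", "r"); // fails and sets errno
    if (f == NULL) {
        printf("this goes to stdout, not where errors belong\n");
        fprintf(stderr, "fopen: %s\n", strerror(errno)); // stderr, explicit
        perror("fopen");                                 // stderr, automatic
        return 1;
    }
    fclose(f);
    return 0;
}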

Because there could be configurations where you want stderr printed to the console but the other output not printed at all (for example, to reduce verbosity). In other cases you may need to redirect stderr to a file; this is useful in production, where that file can be used to understand what went wrong on a remote computer you can't debug yourself.
In general, you gain more control over how console output is treated depending on its type.
See this answer to understand how you can do stream redirection in code; a sketch follows below.
Or see this link on how you can force stream redirection to a file, or ignore a stream entirely, on an already compiled program (while invoking it in bash).
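For instance, a minimal sketch of redirecting a stream in code with the standard freopen() (the file name errors.log is just an example):
#include <stdio.h>

int main(void) {
    // Send everything written to stderr to a log file instead.
    if (freopen("errors.log", "w", stderr) == NULL) {
        // stderr may be unusable now; report on stdout instead.
        printf("could not redirect stderr\n");
        return 1;
    }
    fprintf(stderr, "this line lands in errors.log\n");
    return 0;
}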

In addition to other answers, you might use fprintf(3) on stderr and errno(3) with strerror(3) like
fprintf(stderr, "something wrong: %s\n", strerror(errno));
On GNU libc systems (many Linux systems), you could instead use the %m conversion specifier:
fprintf(stderr, "something wrong: %m\n");
You conventionally should output error messages to stderr (see stderr(3)); see also syslog(3) to use system logging.
Don't forget to end the format string with \n, since stderr is often line buffered (but sometimes not), or else use fflush(3).
For example, you might want to show the error, the filename, and the current directory on fopen failure:
char *filename = somefilepath();
assert(filename != NULL);
FILE *f = fopen(filename, "r");
if (!f) {
    int e = errno; // keep errno, it could be overwritten later
    if (filename[0] == '/') // absolute path
        fprintf(stderr, "failed to open %s : %s\n", filename, strerror(e));
    else { // relative path, so we also try to show the current directory
        char dirbuf[128];
        memset(dirbuf, 0, sizeof(dirbuf));
        if (getcwd(dirbuf, sizeof(dirbuf) - 1))
            fprintf(stderr, "failed to open %s in %s : %s\n",
                    filename, dirbuf, strerror(e));
        else // unlikely case when getcwd failed, so errno overwritten
            fprintf(stderr, "failed to open %s here : %s\n",
                    filename, strerror(e));
    }
    exit(EXIT_FAILURE); // in all cases when fopen failed
}
Remember that errno could be overwritten by many failures (hence we save it in e, which covers the unlikely case that getcwd fails and overwrites errno).
If your program is a daemon (e.g. has called daemon(3)), you had better use the system log (i.e. call openlog(3) after calling daemon), since daemon can redirect stderr to /dev/null.
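For illustration, a minimal sketch of that pattern on a glibc system (daemon() is non-standard, and the ident string "mydaemon" is just an example):
#include <stdlib.h>
#include <syslog.h>
#include <unistd.h>

int main(void) {
    if (daemon(0, 0) != 0)  // detach; stderr is now redirected to /dev/null
        exit(EXIT_FAILURE);
    openlog("mydaemon", LOG_PID, LOG_DAEMON);
    syslog(LOG_ERR, "something wrong: %m"); // %m expands to strerror(errno)
    closelog();
    return 0;
}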

There are three standard streams: stdin, stdout, and stderr. You can read up on the purpose of each stream.
For error messages and diagnostics, stderr is used; to print on stderr, perror() is used. printf() cannot do that. perror() is also used to report errors from system calls:
fd = open(pathname, flags, mode);
if (fd == -1) {
    perror("open");
    exit(EXIT_FAILURE);
}
You can read more about this in the book The Linux Programming Interface.
void perror(const char *s)
perror() prints its message in the following sequence: the argument s, a colon, a space, a short message describing the error whose code is currently in errno, and a newline.
In standard C, if s is a null pointer, only the error message is printed; the prefix, colon, and space are omitted.
To understand more, you can also refer to page 332 of C: The Complete Reference.
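For instance (the exact message text is implementation-defined):
#include <stdio.h>
#include <errno.h>

int main(void) {
    errno = ENOENT;
    perror("myprog"); // prints e.g. "myprog: No such file or directory"
    perror(NULL);     // prints e.g. "No such file or directory"
    return 0;
}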

A big advantage of using perror():
It is sometimes very useful to redirect stdout to /dev/null so that only the errors remain visible, since the verbosity of stdout might otherwise hide the errors that we need to fix.

perror
The general purpose of the function is to report an error, typically just before halting execution; note that perror itself only prints the message, it does not exit. The error message produced by perror is platform-dependent. You can also prepend your own message to it.
printf
The general purpose of the function is to print a user-defined message and continue execution.

Related

proper way of writing to /sys or /proc filesystem in c

What is the proper way of writing to the /proc or /sys filesystem in Linux in C?
Can I write as I would in any other file, or are there special considerations I have to be aware of?
For example, I want to emulate echo -n mem > /sys/power/state. Would the following code be the right way of doing it?
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    FILE *f;
    f = fopen("/sys/power/state", "w");
    if (f == NULL) {
        printf("Error opening file: /sys/power/state\n");
        exit(1);
    }
    fprintf(f, "%s", "mem");
    fclose(f);
}
Your approach lacks some error handling in the write operation.
The fprintf (or fwrite, or whatever you prefer to use) may fail, e.g. if the kernel driver behind the sysfs file doesn't like what you're writing. E.g.:
echo 17 > /sys/class/gpio/export
-bash: echo: write error: Invalid argument
In order to catch those errors, you MUST check the return value of fprintf to see whether all the characters you expected to write were written, and you should also check the result of ferror(). E.g. if you're writing "mem", fprintf should return 3 and there should not be any error set on the stream.
But one additional thing is missing: sysfs files are not standard files. For the previous write error to be reported correctly, you MUST disable buffering on your stream, or otherwise the fprintf (or fwrite) may happily finish without any error. You can do that with setvbuf like this, just after the fopen:
setvbuf (f, NULL, _IONBF, 0);
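Putting it together, a minimal sketch of the corrected program (it needs the appropriate privileges, like the original; errors go to stderr via perror, buffering is disabled, and both the fprintf and fclose results are checked):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("/sys/power/state", "w");
    if (f == NULL) {
        perror("fopen /sys/power/state");
        exit(EXIT_FAILURE);
    }
    setvbuf(f, NULL, _IONBF, 0); // unbuffered: write errors surface immediately
    if (fprintf(f, "mem") != 3 || ferror(f)) {
        perror("write /sys/power/state");
        fclose(f);
        exit(EXIT_FAILURE);
    }
    if (fclose(f) != 0) {
        perror("close /sys/power/state");
        exit(EXIT_FAILURE);
    }
    return 0;
}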

check if FILE* is stdout; portable?

I am currently writing a piece of code whose intended usage is this:
program input.txt output.txt
or
program input.txt
in which case it defaults to stdout.
This is the code I have now (within main()):
FILE *outFile;
if (argc < 3) {
    outFile = stdout;
} else {
    fprintf(stdout, "Will output to file %s\n", argv[2]);
    outFile = fopen(argv[2], "w");
    if (outFile == NULL) {
        fprintf(stderr, "ERR: Could not open file %s. Defaulting to stdout\n", argv[2]);
        outFile = stdout;
    }
}
/* ... write stuff to outFile... */
if (argc < 3 && outFile != stdout) {
    fclose(outFile);
}
These are my concerns: first of all, will this successfully open and close outFile when provided? Also, will this successfully not close stdout? Can anything bad happen if I close stdout?
Also, is this portable? I compile with gcc but this project will be evaluated by a professor using Windows.
Apologies if this is a bit of a mess of a question. I come from Python and am not a CS major (I'm studying mathematics).
Yes, it's portable and it's okay.
Yes, it's portable. You assigned outFile = stdout, so they will be equal as long as you don't reassign either of them elsewhere in the program.
You don't really need the argc < 3 test -- in fact, as written, argc < 3 && outFile != stdout can never be true (whenever argc < 3 you assign outFile = stdout), so the file is never closed. Testing outFile != stdout on its own is all you need; see the sketch below.
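A minimal corrected sketch of the cleanup:
/* ... write stuff to outFile... */
if (outFile != stdout) {
    fclose(outFile); /* only close the file we opened ourselves */
}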
In any program that writes significant data to stdout, you should close stdout immediately before exiting, so that you can check for and report delayed write errors. (Delayed write errors are a design mistake; it ought to be impossible for fclose or close to fail. But we are stuck with them.)
The usual construct is, at the very end of main,
if (ferror(stdout) || fclose(stdout)) {
    perror("stdout: write error");
    return 1;
}
return 0;
Some programs stick an fflush in there too, but ISO C requires fclose to perform a fflush, so it shouldn't be necessary. This construct is entirely portable.
It's important for this to be the very last thing you do before exiting. It is relatively common for libraries to assume that stdout is never closed, so they may malfunction if you call into them after closing stdout. stdin and stderr are also troublesome that way, but I've yet to encounter a situation where one wanted to close those.
It does sometimes happen that you want to close stdout before your program is completely done. In that case you should actually leave the FILE open but close the underlying "file descriptor" and replace it with a dummy.
int rfd = open("/dev/null", O_WRONLY);
if (rfd == -1) perror_exit("/dev/null");
if (fflush(stdout) || close(1)) perror_exit("stdout: write error");
dup2(rfd, 1);
close(rfd);
This construct is NOT portable to Windows. There is an equivalent, but I don't know what it is. It's also not thread-safe: another thread could call open in between the close and dup2 operations and be assigned fd 1, or it could attempt to write something to stdout in that window and get a spurious write error. For thread safety you have to duplicate the old fd 1 and close it via that handle:
// These allocate new fds, which can always fail, e.g. because
// the program already has too many files open.
int new_stdout = open("/dev/null", O_WRONLY);
if (new_stdout == -1) perror_exit("/dev/null");
int old_stdout = dup(1);
if (old_stdout == -1) perror_exit("dup(1)");
flockfile(stdout);
if (fflush(stdout)) perror_exit("stdout: write error");
dup2 (new_stdout, 1); // cannot fail, atomically replaces fd 1
funlockfile(stdout);
// this close may receive delayed write errors from previous writes
// to stdout
if (close (old_stdout)) perror_exit("stdout: write error");
// this close cannot fail, because it only drops an alternative
// reference to the open file description now installed as fd 1
close (new_stdout);
Order of operations is critical: the open, dup and fflush calls must happen before the dup2 call, both close calls must happen after the dup2 call, and stdout must be locked from before the fflush call until after the dup2 call.
Additional possible complications, dealing with which is left as an exercise:
Cleaning up temporary fds and locks on error, when you don't want to stop the whole program on error
If the thread might be canceled mid-operation
If a concurrent thread might call fork and execve mid-operation

stdio to terminal after close(STDOUT_FILENO) behavior

I am wondering why uncommenting that first printf statement in the following program changes its subsequent behavior:
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main() {
    //printf("hi from C \n");

    // Close underlying file descriptor:
    close(STDOUT_FILENO);

    if (write(STDOUT_FILENO, "Direct write\n", 13) != 13) // immediate error detected
        fprintf(stderr, "Error on write after close(STDOUT_FILENO): %s\n", strerror(errno));

    // printf() calls continue fine, ferror(stdout) = 0 (but no write to terminal):
    int rtn;
    if ((rtn = printf("printf after close(STDOUT_FILENO)\n")) < 0 || ferror(stdout))
        fprintf(stderr, "Error on printf after close(STDOUT_FILENO)\n");
    fprintf(stderr, "printf returned %d\n", rtn);

    // Only on fflush is the error detected:
    if (fflush(stdout) || ferror(stdout))
        fprintf(stderr, "Error on fflush(stdout): %s\n", strerror(errno));
}
Without that first printf, the subsequent printf returns 34 as if no error occurred, even though the connection from the stdout user buffer to the underlying fd has been closed. Only on a manual fflush(stdout) does the error get reported back.
But with that first printf turned on, the next printf reports errors as I would expect.
Of course nothing is written to the terminal (by printf) after the STDOUT_FILENO fd has been closed, in either case.
I know it's silly to close(STDOUT_FILENO) in the first place here; this is an experiment I stumbled into, thinking someone more knowledgeable in these areas may see something instructive in it.
I am on Linux with gcc.
If you strace both programs, it appears that stdio works like this: upon the first write, it checks the descriptor with fstat to find out what kind of file is connected to stdout - if it is a terminal, stdout is made line-buffered; if it is anything else, stdout is made block-buffered. If you call close(1); before the first printf, the initial fstat returns EBADF, and since fd 1 is then not a file descriptor pointing to a character device, stdout is made block-buffered.
On my computer the buffer size is 8192 bytes - that many bytes can be buffered to be written to stdout before the first failure would occur.
If you uncomment the first printf, the fstat(1, ...) succeeds and glibc detects that stdout is connected to a terminal; stdout is set to line-buffered, and because "printf after close(STDOUT_FILENO)\n" ends with a newline, the buffer is flushed right away - which results in an immediate error.
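To make the described decision explicit, here is a sketch using isatty() that mirrors that behavior (an illustration, not glibc's actual code):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // Choose the buffering mode the way the answer above describes
    // glibc doing it on first use of stdout:
    if (isatty(STDOUT_FILENO))
        setvbuf(stdout, NULL, _IOLBF, BUFSIZ); // terminal: line-buffered
    else
        setvbuf(stdout, NULL, _IOFBF, BUFSIZ); // anything else: block-buffered
    printf("hello\n");
    return 0;
}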

When should I use perror("...") and fprintf(stderr, "...")?

Reading the man pages and some code did not really help me in
understanding the difference between - or better, when I should use - perror("...") or fprintf(stderr, "...").
Calling perror will give you the interpreted value of errno, which is a thread-local error value written to by POSIX syscalls (i.e., every thread has its own value for errno). For instance, if you made a call to open(), and there was an error generated (i.e., it returned -1), you could then call perror immediately afterwards to see what the actual error was. Keep in mind that if you call other syscalls in the meantime, then the value in errno will be written over, and calling perror won't be of any use in diagnosing your issue if an error was generated by an earlier syscall.
fprintf(stderr, ...) on the other hand can be used to print your own custom error messages. By printing to stderr, you avoid your error-reporting output being mixed with "normal" output that should be going to stdout.
Keep in mind that fprintf(stderr, "%s\n", strerror(errno)) is similar to perror(NULL), since a call to strerror(errno) will generate the printed string value for errno, and you can then combine that with any other custom error message via fprintf.
They do rather different things.
You use perror() to print a message to stderr that corresponds to errno. You use fprintf() to print anything to stderr, or any other stream. perror() is a very specialized printing function:
perror(str);
is equivalent to
if (str)
    fprintf(stderr, "%s: %s\n", str, strerror(errno));
else
    fprintf(stderr, "%s\n", strerror(errno));
perror(const char *s): prints the string you give it followed by a string that describes the current value of errno.
stderr: it's an output stream used to pipe your own error messages to (defaults to the terminal).
Relevant:
char *strerror(int errnum): give it an error number, and it'll return the associated error string.
perror() always writes to stderr;
strerror(), used together with fprintf(), can write to any output - including stderr, but not exclusively.
fprintf(stdout, "Error: %s", strerror(errno));
fprintf(stderr, "Error: %s", strerror(errno)); // which is equivalent to perror("Error")
Furthermore, perror imposes its own text formatting: "text: error description".
The perror function may take slightly more time to execute, since its call path goes from user space into kernel space, whereas fprintf goes through the C library API before reaching the kernel.

Does printf always flush the buffer on encountering a newline?

My machine is running Ubuntu 10.10, and I'm using the standard GNU C library. I was under the impression that printf flushed the buffer if there was a newline in the format string, but the following code repeatedly seems to buck that trend. Could someone clarify why the buffer is not being flushed?
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>
int main()
{
    int rc;
    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");
    open("newfile.txt", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    printf("WHAT?\n");
    fprintf(stderr, "I SAID ERROR\n");
    rc = fork();
    if (rc == 0)
    {
        printf("SAY AGAIN?\n");
        fprintf(stderr, "ERROR ERROR\n");
    }
    else
    {
        wait(NULL);
    }
    printf("BYE\n");
    fprintf(stderr, "HI 2\n");
    return 0;
}
The contents of newfile.txt after running this program are as follows.
HI 1
WHAT?
SAY AGAIN?
BYE
HI 1
WHAT?
BYE
No, the standard says that stdout is initially fully buffered if the output device can be determined to be a non-interactive one.
It means that, if you redirect stdout to a file, it won't flush on newline. If you want to force it to be line-buffered, use setbuf or setvbuf; a sketch follows after the standard quote below.
The relevant part of C99, 7.19.3 Files, paragraph 7, states:
At program startup, three text streams are predefined and need not be opened explicitly - standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
Just keep in mind section 5.1.2.3/6:
What constitutes an interactive device is implementation-defined.
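For example, a minimal sketch that forces line buffering regardless of where stdout points (it must run before the first output):
#include <stdio.h>

int main(void) {
    // Force line buffering before any output; otherwise a redirected
    // stdout would be fully buffered and "hello" would sit in the buffer
    // until exit.
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);
    printf("hello\n"); // flushed at the newline even when redirected
    return 0;
}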
It is flushed if the output device is an interactive one, e.g., a terminal.
You have to flush the output buffer in case the output device can be determined to be non-interactive, e.g., a file. A newline does not do that automatically.
For details see paxdiablo's answer.
You've got a strange sense of humor. :)
int main()
{
    int rc;
    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");
You close the file descriptors used for stdout and stderr, and then immediately try to use the C stdout and stderr FILE streams. Not a great idea; I'm not sure what the C library will do to report the error to you, but crashing would be one acceptable possibility.
That oddity aside, when you're using the standard IO stream functions to write, the buffering depends in part upon the destination. If you're writing to a terminal, then usual behavior is line buffering. If you're writing to a pipe, a file, or a socket, then the usual behavior is block buffering. You can change the buffering behavior with the setvbuf(3) function. Full details of the buffering behavior are in the manpage.
