stdio to terminal after close(STDOUT_FILENO) behavior - c

I am wondering why uncommenting that first printf statement in the following program changes its subsequent behavior:
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
int main() {
    //printf("hi from C \n");

    // Close the underlying file descriptor:
    close(STDOUT_FILENO);

    if (write(STDOUT_FILENO, "Direct write\n", 13) != 13) // immediate error detected
        fprintf(stderr, "Error on write after close(STDOUT_FILENO): %s\n", strerror(errno));

    // printf() calls continue fine, ferror(stdout) == 0 (but no write to terminal):
    int rtn;
    if ((rtn = printf("printf after close(STDOUT_FILENO)\n")) < 0 || ferror(stdout))
        fprintf(stderr, "Error on printf after close(STDOUT_FILENO)\n");
    fprintf(stderr, "printf returned %d\n", rtn);

    // Only on fflush is the error detected:
    if (fflush(stdout) || ferror(stdout))
        fprintf(stderr, "Error on fflush(stdout): %s\n", strerror(errno));
}
Without that first printf, the subsequent printf returns 34 as if no error occurred, even though the connection from the stdout user buffer to the underlying fd has been closed. Only on a manual fflush(stdout) does the error get reported back.
But with that first printf turned on, the next printf reports errors as I would expect.
Of course nothing is written to the terminal (by printf) after the STDOUT_FILENO fd has been closed in either case.
I know it's silly to close(STDOUT_FILENO) in the first place; this is an experiment I stumbled into, and I'm thinking someone more knowledgeable in these areas may see something instructive in it.
I am on Linux with gcc.

If you strace both programs, it appears that stdio works like this: upon the first write, it checks the descriptor with fstat to find out what kind of file is connected to stdout - if it is a terminal, then stdout is made line-buffered; if it is anything else, stdout is made block-buffered. If you call close(1); before the first printf, the initial fstat returns EBADF, and since fd 1 is no longer a file descriptor that points to a character device, stdout is made block-buffered.
On my computer the buffer size is 8192 bytes - that many bytes can be buffered for stdout before the first failure occurs.
If you uncomment the first printf, the fstat(1, ...) succeeds and glibc detects that stdout is connected to a terminal; stdout is set to line-buffered, and because "printf after close(STDOUT_FILENO)\n" ends with a newline, the buffer is flushed right away - which results in an immediate error.
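That decision can be sketched like this (a simplified illustration of the behavior described above, not glibc's actual code):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    // close(STDOUT_FILENO);  // uncomment to take the non-terminal path
    struct stat st;
    if (fstat(STDOUT_FILENO, &st) == 0 && isatty(STDOUT_FILENO))
        fprintf(stderr, "stdout is a terminal: line-buffered\n");
    else
        /* fstat failing with EBADF after close(1) lands here too */
        fprintf(stderr, "stdout is not (or no longer) a terminal: fully buffered\n");
    return 0;
}

Run normally it reports the terminal case; with the close uncommented it takes the fully-buffered branch, matching what the strace shows.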

Related

intricacies/understanding the stdio buffer and dup2

I am reading this lecture and found the following code sample, which I modified to this:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
int main()
{
    int fd;
    char *s, *t;
    off_t ret;

    fd = open("file6", O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (dup2(fd, 1) < 0) { perror("dup2"); exit(1); }
    printf("Standard output now goes to file6\n");
    s = "before close\n";
    write(1, s, strlen(s));
    close(fd);
    printf("It goes even after we closed file descriptor %d\n", fd);
    printf("%ld\t%ld\n",
           (long int) lseek(fd, 0, SEEK_CUR),
           (long int) lseek(1, 0, SEEK_CUR));
    s = "And fwrite\n";
    fwrite(s, sizeof(char), strlen(s), stdout);
    printf("%ld\t%ld\n",
           (long int) lseek(fd, 0, SEEK_CUR),
           (long int) lseek(STDOUT_FILENO, 0, SEEK_CUR));
    fflush(stdout);
    s = "And write\n";
    write(1, s, strlen(s));
    printf("after:\tAnd wri...: lseek(fd,0,SEEK_CUR)=%ld\t"
           "lseek(STDOUT_FILENO,0,SEEK_CUR)=%ld\n",
           (long int) lseek(fd, 0, SEEK_CUR),
           (long int) lseek(STDOUT_FILENO, 0, SEEK_CUR));
    return 0;
}
I am sharing two different outputs; the only change in the code is that the line fflush(stdout) is commented out in the first run and present in the second.
Output (with fflush(stdout) commented):
before close
And write
Standard output now goes to file6
It goes even after we closed file descriptor 3
-1 13
And fwrite
-1 13
after: And wri...: lseek(fd,0,SEEK_CUR)=-1 lseek(STDOUT_FILENO,0,SEEK_CUR)=23
Output (with fflush(stdout) uncommented):
before close
Standard output now goes to file6
It goes even after we closed file descriptor 3
-1 13
And fwrite
-1 13
And write
after: And wri...: lseek(fd,0,SEEK_CUR)=-1 lseek(STDOUT_FILENO,0,SEEK_CUR)=127
I have two questions:
Why does "And write" appear first when fflush(stdout) is commented out?
Why does lseek print -1, which I checked separately corresponds to the errno value ESPIPE? I am aware that lseek on a terminal results in an error, but my current understanding is that since standard output has been dup2'd to file6, this error shouldn't arise. Shouldn't lseek(STDOUT_FILENO, 0, SEEK_CUR) simply return the current offset in file6 if dup2 succeeded?
Why does "And write" appear first when fflush(stdout) is commented?
Because the C stdio buffers haven't filled, nothing written using stdio APIs is actually sent to the output until the buffers fill, the stdio handle is flushed, or the program ends. Your direct write calls (e.g. for "And write") bypass the stdio buffers entirely and get written immediately; all the buffered stuff doesn't appear until the program ends (or at least, not until after "And write" has already been written).
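You can reproduce the reordering with a minimal sketch (assuming stdout is redirected to a file, so it is block-buffered):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    printf("buffered\n");                                   /* sits in the stdio buffer */
    write(STDOUT_FILENO, "direct\n", strlen("direct\n"));   /* hits the fd immediately */
    return 0;                                               /* stdio buffer flushed at exit */
}

Running ./a.out > out leaves "direct" before "buffered" in the file, even though the printf comes first in the source.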
Why does lseek print -1?
The first lseek was called on fd, which you closed shortly after dup2ing it over STDOUT_FILENO/1, so it fails. If you checked the errno properly (zeroing errno before each lseek, calling the two lseeks separately, and storing or printing their results and errnos separately, so one call doesn't overwrite the errno of the other before you even see it), you'd see it has a value corresponding to EBADF, not ESPIPE. The second lseek, on STDOUT_FILENO, works just fine. A mildly modified version of your code (using stderr so you can see the last couple of outputs even when you can't read the actual file, carefully zeroing errno each time, printing it before calling lseek again, and using strerror to show a friendly description of the errno) shows this clearly: Try it online!
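For illustration, here is a sketch of that careful errno handling (the file name and labels are placeholders mirroring the question):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void report_seek(const char *label, int fd) {
    errno = 0;                                    /* zero errno before each call */
    off_t pos = lseek(fd, 0, SEEK_CUR);
    if (pos == (off_t)-1)
        fprintf(stderr, "%s: lseek failed: %s\n", label, strerror(errno));
    else
        fprintf(stderr, "%s: offset %ld\n", label, (long)pos);
}

int main(void) {
    int fd = open("file6", O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd < 0 || dup2(fd, STDOUT_FILENO) < 0) return 1;
    close(fd);                                    /* as in the question */
    report_seek("fd (closed)", fd);               /* EBADF: Bad file descriptor */
    report_seek("STDOUT_FILENO", STDOUT_FILENO);  /* succeeds: current offset in file6 */
    return 0;
}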

No output in the parent process without fflush(stdout)

I'm trying to understand what is behind this behaviour in my parent process.
Basically, I create a child process and connect its stdout to my pipe. The parent process continuously reads from the pipe and does some stuff.
I noticed that when inserting the while loop in the parent, stdout seems to be lost: nothing appears on the terminal. I thought the output of stdout would somehow go to the pipe (maybe an issue with dup2), but that doesn't seem to be the problem. If I don't continuously fflush(stdout) in the parent process, whatever I'm trying to get to the terminal just won't show. Without a while loop in the parent it works fine, but I'm really not sure why this happens or whether the rest of my implementation is problematic somehow.
Nothing past the read system call seems to go to stdout in the parent process. Assuming the output of inotifywait in the pipe is small enough (under 30 bytes), what exactly is wrong with this program?
What I expect is for the stdout of inotifywait to go to the pipe, then for the parent to read the message, run strtok, and print the file name (which only appears on stdout when I fflush).
Running the program with inotify installed and creating any file in the current directory of the program should be enough. Removing the while loop does print the created file's name (as expected).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <errno.h>
int main(void) {
    char b[100];
    int pipefd;

    if (mkfifo("fifo", 0666) == -1) {
        if (errno != EEXIST) {
            perror("mkfifo");
            exit(EXIT_FAILURE);
        }
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(1);
    }

    if ((pipefd = open("fifo", O_RDWR)) < 0) {
        perror("open pipe");
        exit(EXIT_FAILURE);
    }

    if (pid == 0) {
        dup2(pipefd, 1);
        const char* dir = ".";
        const char* args[] = {"inotifywait", dir, "-m", "-e",
                              "create", "-e", "moved_to", NULL};
        execvp("inotifywait", (char**)args);
        perror("inotifywait");
    } else {
        while (1) {
            fflush(stdout); // the output only appears in stdout with this here
            if (read(pipefd, b, 30) < 0) {
                perror("problem # read");
                exit(1);
            }
            char filename[30];
            printf("anything");
            sscanf(b, "./ CREATE %s", filename);
            printf("%s", filename);
        }
    }
}
The streams used by the C standard library are designed in such a way that they are normally buffered (except for the standard error stream stderr).
The standard output stream is normally line buffered, unless the output device is not an interactive device, in which case it is normally fully buffered. Therefore, in your case, it is probably line buffered.
This means that the buffer will only be flushed
when it is full,
when an \n character is encountered,
when the stream is closed (e.g. during normal program termination),
when reading input from an unbuffered or line-buffered stream (in certain situations), or
when you explicitly call fflush.
This explains why you are not seeing the output, because none of the above are happening in your infinite loop (when you don't call fflush). Although you are reading input, you are not doing this from a C standard library FILE * stream. Instead, you are bypassing the C runtime library (e.g. glibc) by using the read system call directly (i.e. you are using a file descriptor instead of a stream).
The simplest solution to your problem would probably be to replace the line
printf("%s", filename);
with:
printf("%s\n", filename);
If stdout is line-buffered (which should be the case if it is connected to a terminal), then the output should automatically be flushed after every line, and an explicit call to fflush should no longer be necessary.
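As an aside, if you would rather not bypass stdio on the reading side either, a sketch of the parent's loop using fdopen on the pipe descriptor might look like this (a drop-in replacement for the while loop above; pipefd is the descriptor from the question's code, and the buffer sizes are illustrative):

/* Wrap the pipe descriptor in a FILE * and read it line by line. */
FILE *in = fdopen(pipefd, "r");
if (in == NULL) {
    perror("fdopen");
    exit(EXIT_FAILURE);
}
char line[100], filename[100];
while (fgets(line, sizeof line, in) != NULL) {
    if (sscanf(line, "./ CREATE %99s", filename) == 1)
        printf("%s\n", filename);  /* trailing \n flushes a line-buffered stdout */
}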

What will happen calling printf after closing stdout?

I tried the code below, and the screen showed nothing.
close(STDOUT_FILENO);
printf("Child output something\n");
Does it just fail to find stdout and discard the data?
I want to find out whether printf wrote any data; since I cannot print the return value, I output it to a file.
close(STDOUT_FILENO);
int res = printf("output something\n");
open("./log.output", O_CREAT|O_WRONLY|O_TRUNC, S_IRWXU);
printf("%d", res); // return 17
So printf works, but I don't know where it writes to.
The reason you're seeing this result has to do with buffering. In general, a file which is attached to a terminal is line buffered and all other files are block buffered. stderr is unbuffered.
When you close stdout, it's no longer attached to a terminal, so it's block buffered, not line buffered. You've attempted to write fewer bytes than the buffer size (which is usually some multiple of 512), so printf happily copied it to the buffer and did nothing else. If you wrote a suitable amount of data using printf, you'd find that it did indeed fail at that point.
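A quick sketch to confirm that claim (the 8192-byte buffer is a typical glibc default, not a guarantee, so the exact iteration may vary):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    close(STDOUT_FILENO);
    for (int i = 0; i < 1000; i++) {
        /* 11 bytes per call; once the buffer fills, the attempted flush
           fails with EBADF and printf starts returning a negative value
           (around iteration 744 with an 8192-byte buffer) */
        if (printf("0123456789\n") < 0) {
            fprintf(stderr, "printf failed at iteration %d\n", i);
            return 1;
        }
    }
    return 0;
}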
You can verify a similar behavior by calling fflush(stdout):
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main(void)
{
    close(STDOUT_FILENO);
    int res = printf("output something\n");
    fprintf(stderr, "%d\n", res);
    res = fflush(stdout);
    fprintf(stderr, "%d %s\n", res, strerror(errno));
}
The last line will output -1 Bad file descriptor, which shows that the attempt to write out to stdout failed with EBADF, as expected. If you need to verify that data has been written, you must call fflush or fsync as appropriate.
Note that in general, you don't want to close any of the three default file descriptors, because any time you open a new file descriptor, it will use the lowest unused number and take the place of one of the standard streams. If a separate part of your program attempts to write to one of those streams without checking, it can write into an unexpected file, corrupting it. The safe thing to do is redirect those streams to /dev/null instead.
Your open call for log.output does exactly the thing I just mentioned: it reopens file descriptor 1 (stdout).
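A minimal sketch of that safer approach, assuming /dev/null is available:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int devnull = open("/dev/null", O_WRONLY);
    if (devnull < 0) { perror("open /dev/null"); return 1; }
    if (dup2(devnull, STDOUT_FILENO) < 0) { perror("dup2"); return 1; }
    if (devnull != STDOUT_FILENO)
        close(devnull);
    printf("silently discarded, but fd 1 stays occupied\n");  /* later open()s can't grab fd 1 */
    return 0;
}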

Difference between perror() and printf()

I have read that both perror() and printf() write to the terminal screen, but perror() writes to stderr while printf() writes to stdout. So, to print errors, why is perror() used when printf() can do it?
printf() cannot write to stderr. fprintf() can. perror() always does.
There is no requirement that writing to either stdout or stderr writes to a terminal screen - that is up to the implementation (since not all systems even have a terminal). There is also no requirement that writing to stdout and stderr results in writing to the same device (e.g. one can be redirected to a file, while the other is redirected to a pipe).
perror() is implemented with built-in knowledge of the meanings of error codes, represented by the errno variable, which is used by various functions in the standard library to report error conditions. The meanings of particular values are implementation-defined (i.e. they vary between compilers and libraries).
Because there could be configurations where you want stderr printed to the console but the other output not printed at all (for example, to reduce verbosity). In other cases you may need to redirect stderr to a file; this is useful in production, where that file can be used to understand what went wrong on a remote computer you can't debug yourself.
In general, you gain more control over how console output is treated depending on its type.
See this answer to understand how you can do stream redirection in code.
Or, see this link on how you can force stream redirection to file or ignore a stream on an already compiled program (while invoking it in bash)
In addition to the other answers, you might use fprintf(3) on stderr with errno(3) and strerror(3), like:
fprintf(stderr, "something wrong: %s\n", strerror(errno));
On GNU libc systems (many Linux systems), you could instead use the %m conversion specifier:
fprintf(stderr, "something wrong: %m\n");
You should conventionally output error messages to stderr (see stderr(3)); see also syslog(3) for system logging.
Don't forget to end the format string with \n, since stderr is often line-buffered (but sometimes not), or else use fflush(3).
For example, you might want to show both the error, the filename and the current directory on fopen failure:
char *filename = somefilepath();
assert(filename != NULL);
FILE *f = fopen(filename, "r");
if (!f) {
    int e = errno; // keep errno, it could be overwritten later
    if (filename[0] == '/') // absolute path
        fprintf(stderr, "failed to open %s : %s\n", filename, strerror(e));
    else { // also show the current directory, since the path is relative
        char dirbuf[128];
        memset(dirbuf, 0, sizeof(dirbuf));
        if (getcwd(dirbuf, sizeof(dirbuf)-1))
            fprintf(stderr, "failed to open %s in %s : %s\n",
                    filename, dirbuf, strerror(e));
        else // unlikely case where getcwd failed, so errno was overwritten
            fprintf(stderr, "failed to open %s here : %s\n",
                    filename, strerror(e));
    }
    exit(EXIT_FAILURE); // in all cases where fopen failed
}
Remember that errno could be overwritten by many failures (so we store it in e, for the unlikely case that getcwd fails and overwrites errno).
If your program is a daemon (e.g. has called daemon(3)), you had better use the system log (i.e. call openlog(3) after calling daemon), since daemon can redirect stderr to /dev/null.
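A minimal sketch of that syslog route (the program name "myprog" is illustrative):

#include <syslog.h>

int main(void) {
    openlog("myprog", LOG_PID | LOG_CONS, LOG_DAEMON);
    syslog(LOG_ERR, "something wrong: %m");  /* %m expands like strerror(errno) */
    closelog();
    return 0;
}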
There are three standard streams: stdin, stdout, and stderr. You can read up on why the different streams matter.
For error messages and diagnostics, stderr is used; to print on stderr, perror() is used, which printf() cannot do. perror() is also used to handle errors from system calls:
fd = open(pathname, flags, mode);
if (fd == -1) {
    perror("open");
    exit(EXIT_FAILURE);
}
You can read more about this in the book The Linux Programming Interface.
void perror(const char *s);
perror() prints its message in the following sequence: the argument s, a colon, a space, a short message describing the error whose code is currently in errno, and a newline.
In standard C, if s is a null pointer, only the short message is printed; the other parts are omitted.
To learn more, you can also refer to page 332 of C: The Complete Reference.
A big advantage of using perror(): it is sometimes very useful to redirect stdout to /dev/null so that only errors remain visible, since the verbosity of stdout might hide the errors that we need to fix.
perror
The general purpose of the function is to report why an operation failed: it prints a short, platform-dependent description of the last error (based on errno), to which you can prepend your own message. Note that perror itself only prints; halting execution is up to the caller (typically via exit).
printf
The general purpose of the function is to print a user-defined message, after which execution simply continues.

Does printf always flush the buffer on encountering a newline?

My machine is running Ubuntu 10.10, and I'm using the standard GNU C library. I was under the impression that printf flushed the buffer if there was a newline in the format string, but the following code repeatedly seems to buck that trend. Could someone clarify why the buffer is not being flushed?
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>
int main()
{
    int rc;

    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");
    open("newfile.txt", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    printf("WHAT?\n");
    fprintf(stderr, "I SAID ERROR\n");
    rc = fork();
    if (rc == 0)
    {
        printf("SAY AGAIN?\n");
        fprintf(stderr, "ERROR ERROR\n");
    }
    else
    {
        wait(NULL);
    }
    printf("BYE\n");
    fprintf(stderr, "HI 2\n");
    return 0;
}
The contents of newfile.txt after running this program is as follows.
HI 1
WHAT?
SAY AGAIN?
BYE
HI 1
WHAT?
BYE
No, the standard says that stdout is initially fully buffered if the output device can be determined to be a non-interactive one.
It means that, if you redirect stdout to a file, it won't flush on newline. If you want to force it to be line-buffered, use setbuf or setvbuf.
The relevant part of C99, 7.19.3 Files, paragraph 7, states:
At program startup, three text streams are predefined and need not be opened explicitly - standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
Just keep in mind section 5.1.2.3/6:
What constitutes an interactive device is implementation-defined.
It is flushed if the output device is an interactive one, e.g., a terminal.
You have to flush the output buffer yourself when the output device can be determined to be non-interactive, e.g., a file. A newline does not do that automatically.
For details see paxdiablo's answer.
You've got a strange sense of humor. :)
int main()
{
    int rc;

    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");
You close the file descriptors used for stdout and stderr, and then immediately try to use the C stdout and stderr FILE streams. Not a great idea; I'm not sure what the C library will do to report the error to you, but crashing would be one acceptable possibility.
That oddity aside, when you're using the standard I/O stream functions to write, the buffering depends in part on the destination. If you're writing to a terminal, the usual behavior is line buffering. If you're writing to a pipe, a file, or a socket, the usual behavior is block buffering. You can change the buffering behavior with the setvbuf(3) function. Full details of the buffering behavior are in the man page.
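For instance, a minimal sketch that forces line buffering, so a newline flushes even when stdout is redirected to a file:

#include <stdio.h>

int main(void) {
    /* Must be called before any I/O on the stream. */
    setvbuf(stdout, NULL, _IOLBF, 0);  /* line-buffered regardless of destination */
    printf("flushed at this newline, even under redirection\n");
    return 0;
}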
