Does printf always flush the buffer on encountering a newline?

My machine is running Ubuntu 10.10, and I'm using the standard GNU C library. I was under the impression that printf flushed the buffer when there was a newline in the format string, but the following code repeatedly seems to buck that trend. Could someone clarify why the buffer is not being flushed?
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>
int main()
{
    int rc;

    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");
    open("newfile.txt", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    printf("WHAT?\n");
    fprintf(stderr, "I SAID ERROR\n");
    rc = fork();
    if (rc == 0)
    {
        printf("SAY AGAIN?\n");
        fprintf(stderr, "ERROR ERROR\n");
    }
    else
    {
        wait(NULL);
    }
    printf("BYE\n");
    fprintf(stderr, "HI 2\n");
    return 0;
}
The contents of newfile.txt after running this program are as follows.
HI 1
WHAT?
SAY AGAIN?
BYE
HI 1
WHAT?
BYE

No, the standard says that stdout is initially fully buffered if the output device can be determined to be a non-interactive one.
It means that, if you redirect stdout to a file, it won't flush on newline. If you want to try to force it to be line-buffered, use setbuf or setvbuf.
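For example, a minimal sketch (assuming, as C99 requires for setvbuf, that it is called before anything is written to the stream):

#include <stdio.h>

int main(void)
{
    /* Request line buffering explicitly; this must happen before any
       other operation on the stream. */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    printf("flushed at the newline even when redirected to a file\n");
    return 0;
}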
The relevant part of C99, 7.19.3 Files, paragraph 7, states:
At program startup, three text streams are predefined and need not be opened explicitly - standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
Just keep in mind section 5.1.2.3/6:
What constitutes an interactive device is implementation-defined.

It is flushed if the output device is an interactive one, e.g., a terminal.
You have to flush the output buffer yourself when the output device can be determined to be non-interactive, e.g., a file. A newline does not do that automatically.
For details see paxdiablo's answer.
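For instance, a minimal sketch: an explicit fflush() pushes the line out regardless of where stdout points:

#include <stdio.h>

int main(void)
{
    printf("this may sit in the buffer if stdout is a file\n");
    fflush(stdout);  /* force the buffered bytes out now */
    return 0;
}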

You've got a strange sense of humor. :)
int main()
{
    int rc;

    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");
You close the file descriptors used for stdout and stderr, and then immediately try to use the C stdout and stderr FILE streams. Not a great idea; I'm not sure how the C library will report the error to you, but crashing would be one acceptable possibility.
That oddity aside, when you're using the standard IO stream functions to write, the buffering depends in part upon the destination. If you're writing to a terminal, then usual behavior is line buffering. If you're writing to a pipe, a file, or a socket, then the usual behavior is block buffering. You can change the buffering behavior with the setvbuf(3) function. Full details of the buffering behavior are in the manpage.
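As an illustrative sketch (not from the manpage), you can check the destination yourself with isatty(3) and pick the buffering mode explicitly:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* If stdout is not a terminal (e.g. a file, pipe, or socket),
       request line buffering anyway, before the stream is used. */
    if (!isatty(fileno(stdout)))
        setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    printf("line-buffered either way\n");
    return 0;
}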

Related

What will happen calling printf after closing stdout?

I tried the code below, and the screen showed nothing.
close(STDOUT_FILENO);
printf("Child output something\n");
Does it just fail to find stdout and discard the data?
I want to find out whether printf wrote any data. Since I cannot print the return value, I output it to a file.
close(STDOUT_FILENO);
int res = printf("output something\n");
open("./log.output", O_CREAT|O_WRONLY|O_TRUNC, S_IRWXU);
printf("%d", res); // res == 17
So printf works, but I don't know where it writes to.
The reason you're seeing this result has to do with buffering. In general, a file which is attached to a terminal is line buffered and all other files are block buffered. stderr is unbuffered.
When you close stdout, it's no longer attached to a terminal, so it's block buffered, not line buffered. You've attempted to write fewer bytes than the buffer size (which is usually some multiple of 512), so printf happily copied it to the buffer and did nothing else. If you wrote a suitable amount of data using printf, you'd find that it did indeed fail at that point.
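A quick sketch of that experiment (the loop bound is a guess that comfortably exceeds typical buffer sizes, and only stdout is closed here so stderr can report the result):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    close(STDOUT_FILENO);
    /* Keep writing until the buffer has to spill to the closed
       descriptor; at that point printf reports a failure. */
    for (int i = 0; i < 10000; i++) {
        if (printf("line %d\n", i) < 0) {
            fprintf(stderr, "printf failed at iteration %d\n", i);
            return 1;
        }
    }
    return 0;
}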
You can verify a similar behavior by calling fflush(stdout):
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main(void)
{
    close(STDOUT_FILENO);
    int res = printf("output something\n");
    fprintf(stderr, "%d\n", res);
    res = fflush(stdout);
    fprintf(stderr, "%d %s\n", res, strerror(errno));
}
The last line will output -1 Bad file descriptor, which shows that the attempt to write out to stdout failed with EBADF, as expected. If you need to verify that data has been written, you must call fflush or fsync as appropriate.
Note that in general, you don't want to close any of the three default file descriptors, because any time you open a new file descriptor, it will use the lowest unused number and take the place of one of the standard streams. If a separate part of your program attempts to write to one of those streams without checking, it can write into an unexpected file, corrupting it. The safe thing to do is redirect those streams to /dev/null instead.
Your open call for log.output does exactly the thing I just mentioned: it reuses file descriptor 1 (stdout).
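A minimal sketch of that safer approach (the helper name discard_fd is mine, not a standard function):

#include <fcntl.h>
#include <unistd.h>

/* Point a standard descriptor at /dev/null instead of closing it,
   so later open() calls cannot accidentally reuse its number. */
static void discard_fd(int fd)
{
    int devnull = open("/dev/null", fd == 0 ? O_RDONLY : O_WRONLY);
    if (devnull >= 0 && devnull != fd) {
        dup2(devnull, fd);
        close(devnull);
    }
}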

An issue with zenity and printf (C stdio) [duplicate]

The behaviour of printf() seems to depend on the location of stdout.
If stdout is sent to the console, then printf() is line-buffered and is flushed after a newline is printed.
If stdout is redirected to a file, the buffer is not flushed unless fflush() is called.
Moreover, if printf() is used before stdout is redirected to file, subsequent writes (to the file) are line-buffered and are flushed after newline.
When is stdout line-buffered, and when does fflush() need to be called?
Minimal example of each:
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

void RedirectStdout2File(const char* log_path) {
    int fd = open(log_path, O_RDWR|O_APPEND|O_CREAT, S_IRWXU|S_IRWXG|S_IRWXO);
    dup2(fd, STDOUT_FILENO);
    if (fd != STDOUT_FILENO) close(fd);
}

int main_1(int argc, char* argv[]) {
    /* Case 1: stdout is line-buffered when run from console */
    printf("No redirect; printed immediately\n");
    sleep(10);
}

int main_2a(int argc, char* argv[]) {
    /* Case 2a: stdout is not line-buffered when redirected to file */
    RedirectStdout2File(argv[0]);
    printf("Will not go to file!\n");
    RedirectStdout2File("/dev/null");
}

int main_2b(int argc, char* argv[]) {
    /* Case 2b: flushing stdout does send output to file */
    RedirectStdout2File(argv[0]);
    printf("Will go to file if flushed\n");
    fflush(stdout);
    RedirectStdout2File("/dev/null");
}

int main_3(int argc, char* argv[]) {
    /* Case 3: printf before redirect; printf is line-buffered after */
    printf("Before redirect\n");
    RedirectStdout2File(argv[0]);
    printf("Does go to file!\n");
    RedirectStdout2File("/dev/null");
}
Flushing for stdout is determined by its buffering behaviour. The buffering can be set to three modes: _IOFBF (full buffering: waits until fflush() if possible), _IOLBF (line buffering: newline triggers automatic flush), and _IONBF (direct write always used). "Support for these characteristics is implementation-defined, and may be affected via the setbuf() and setvbuf() functions." [C99:7.19.3.3]
"At program startup, three text streams are predefined and need not be opened explicitly
— standard input (for reading conventional input), standard output (for writing
conventional output), and standard error (for writing diagnostic output). As initially
opened, the standard error stream is not fully buffered; the standard input and standard
output streams are fully buffered if and only if the stream can be determined not to refer
to an interactive device." [C99:7.19.3.7]
Explanation of observed behaviour
So, what happens is that the implementation does something platform-specific to decide whether stdout is going to be line-buffered. In most libc implementations, this test is done when the stream is first used.
Case 1 is easily explained: when the stream refers to an interactive device, it is line-buffered, and the printf() is flushed automatically.
Case 2 is also now expected: when we redirect to a file, the stream is fully buffered and will not be flushed except with fflush(), unless you write gobloads of data to it.
Finally, we understand case 3 too, for implementations that only perform the check on the underlying fd once. Because we forced stdout's buffer to be initialised in the first printf(), stdout acquired line-buffered mode. When we swap out the fd to point to a file, the stream is still line-buffered, so the data is flushed automatically.
Some actual implementations
Each libc has latitude in how it interprets these requirements, since C99 doesn't specify what an "interactive device" is, nor does POSIX's stdio entry extend this (beyond noting that stderr is expected to be open for reading and writing).
Glibc. See filedoalloc.c:L111. Here we use stat() to test if the fd is a tty, and set the buffering mode accordingly. (This is called from fileops.c.) stdout initially has a null buffer, and it's allocated on the first use of the stream based on the characteristics of fd 1.
BSD libc. Very similar, but much cleaner code to follow! See this line in makebuf.c
You are wrongly combining buffered and unbuffered IO functions. Such a combination must be done very carefully, especially when the code has to be portable (and it is bad to write unportable code...).
It is certainly best to avoid combining buffered and unbuffered IO on the same file descriptor.
Buffered IO: fprintf(), fopen(), fclose(), freopen()...
Unbuffered IO: write(), open(), close(), dup()...
When you use dup2() to redirect stdout, the function is not aware of the buffer that was filled by fprintf(). So when dup2() closes the old descriptor 1, it does not flush the buffer, and the content can end up flushed to a different output. In your case 2a it was sent to /dev/null.
The solution
In your case it is best to use freopen() instead of dup2(). This solves all your problems:
It flushes the buffers of the original FILE stream. (case 2a)
It sets the buffering mode according to the newly opened file. (case 3)
Here is the correct implementation of your function:
void RedirectStdout2File(const char* log_path) {
    if (freopen(log_path, "a+", stdout) == NULL) err(EXIT_FAILURE, NULL);
}
Unfortunately with buffered IO you cannot directly set permissions of a newly created file. You have to use other calls to change the permissions or you can use unportable glibc extensions. See the fopen() man page.
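If you do need specific permissions, one portable sketch is to fix them up afterwards with fchmod(2) on the descriptor behind the stream (the mode shown simply matches the original open() call):

#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

void RedirectStdout2File(const char* log_path) {
    if (freopen(log_path, "a+", stdout) == NULL) err(EXIT_FAILURE, NULL);
    /* freopen takes no mode argument, so set the permissions here;
       note this also affects a pre-existing file. */
    if (fchmod(fileno(stdout), S_IRWXU|S_IRWXG|S_IRWXO) == -1)
        err(EXIT_FAILURE, NULL);
}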
You should not close the file descriptor, so remove close(fd); and close stdout_bak_fd if you want the message to be printed only in the file.

Multiple processes accessing the same file

Is it alright for multiple processes to access (write) to the same file at the same time? Using the following code, it seems to work, but I have my doubts.
The use case in this instance is an executable that gets called every time an email is received and logs its output to a central file.
if (freopen(console_logfile, "a+", stdout) == NULL || freopen(error_logfile, "a+", stderr) == NULL) {
    perror("freopen");
}
printf("Hello World!");
This is running on CentOS and compiled as C.
Using the C standard IO facility introduces a new layer of complexity: the file is modified solely via the write(2) family of system calls (or memory mappings, but that's not used in this case), and the C standard IO wrappers may postpone writing to the file for a while and may not submit complete requests in one system call.
The write(2) call itself should behave well:
[...] If the file was open(2)ed with O_APPEND, the file offset is first set to the end of the file before writing. The adjustment of the file offset and the write operation are performed as an atomic step.
POSIX requires that a read(2) which can be proved to occur after a write() has returned returns the new data. Note that not all file systems are POSIX conforming.
Thus your underlying write(2) calls will behave properly.
For the higher-level C standard IO streams, you'll also need to take care of the buffering. The setvbuf(3) function can be used to request unbuffered output, line-buffered output, or block-buffered output. The default behavior changes from stream to stream -- if standard output and standard error are writing to the terminal, then they are line-buffered and unbuffered by default. Otherwise, block-buffering is the default.
You might wish to manually select line buffering if your data is naturally line-oriented, to prevent interleaved data. If your data is not line-oriented, you might wish to use unbuffered output, or leave it block-buffered but manually flush the data whenever you've accumulated a single "unit" of output.
If you are writing more than BUFSIZ bytes at a time, your writes might become interleaved. The setvbuf(3) function can help prevent the interleaving.
It might be premature to talk about performance, but line-buffering is going to be slower than block buffering. If you're logging near the speed of the disk, you might wish to take another approach entirely to ensure your writes aren't interleaved.
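Putting that together, a sketch of a line-oriented logging setup (the log file name here is just an example):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Reopen stdout onto the shared log in append mode, then request
       line buffering so each log line reaches write(2) as one unit. */
    if (freopen("central.log", "a", stdout) == NULL) {
        perror("freopen");
        return EXIT_FAILURE;
    }
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    printf("one complete log line\n");  /* flushed at the newline */
    return 0;
}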
This answer was incorrect; it does work. The original, mistaken reasoning follows:
So the race condition would be:
process 1 opens it for append, then
later process 2 opens it for append, then
later still 1 writes and closes, then
finally 2 writes and closes.
I'd be impressed if that 'worked', because it isn't clear to me what working should mean. I assume 'working' means all of the bytes written by the two processes end up in the log file? I'd expect that they both write starting at the same byte offset, so one will replace the other's bytes. It will all be okay up to and including step 3, and only show up as a problem at step 4. Seems like an easy test to write: open, getchar ... write, close.
Is it critical that they can have the file open simultaneously? A more obvious solution, if the write is quick, is to open the file exclusively (a locking sketch follows the test program below).
For a quick check on your system, try:
/* write the first command line argument to a file called foo
* stackoverflow topic 9880935
*/
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
int main (int argc, const char * argv[]) {
    if (argc < 2) {
        fprintf(stderr, "Error: need some text to write to the file Foo\n");
        exit(1);
    }
    FILE* fp = freopen("foo", "a+", stdout);
    if (fp == NULL) {
        perror("Error failed to open file");
        exit(1);
    }
    fprintf(stderr, "Press a key to continue\n");
    (void) getchar(); /* Yes, I really mean to ignore the character */
    if (printf("%s\n", argv[1]) < 0) {
        perror("Error failed to write to file");
        exit(1);
    }
    fclose(fp);
    return 0;
}
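And, as mentioned above, if exclusive access is acceptable, a sketch using an advisory lock (write_log_line is a hypothetical helper; flock(2) is BSD/Linux, not plain POSIX):

#include <fcntl.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

/* Serialise writers so whole records never interleave. */
int write_log_line(const char *path, const char *line)
{
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    flock(fd, LOCK_EX);                /* block until we own the file */
    write(fd, line, strlen(line));     /* one complete record per lock */
    write(fd, "\n", 1);
    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}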

What's wrong with this print order?

Have a look at this code:
#include<stdio.h>
#include <unistd.h>
int main()
{
int pipefd[2],n;
char buf[100];
if(pipe(pipefd)<0)
printf("Pipe error");
printf("\nRead fd:%d write fd:%d\n",pipefd[0],pipefd[1]);
if(write(pipefd[1],"Hello Dude!\n",12)!=12)
printf("Write error");
if((n=read(pipefd[0],buf,sizeof(buf)))<=0)
printf("Read error");
write(1,buf,n);
return 0;
}
I expected the printf to print Read fd and write fd before Hello Dude! is read from the pipe, but that's not the case (see here). When I tried the same program in our college computer lab, my output was
Read fd:3 write fd:4
Hello Dude!
A few of our friends also observed that changing the printf statement to contain more \n characters changed the output order. For example, printf("\nRead fd:%d\n write fd:%d\n", pipefd[0], pipefd[1]); meant that Read fd is printed, then the message Hello Dude!, then the write fd. What is this behaviour?
Note: Our lab uses a Linux server on which we run terminals; I don't remember the compiler version though.
It's because printf to the standard output stream is buffered but write to the standard output file descriptor is not.
That means the behaviour can change based on what sort of buffering you have. In C, standard output is line buffered if it can be determined to be connected to an interactive device. Otherwise it's fully buffered (see here for a treatise on why this is so).
Line buffered means it will flush to the file descriptor when it sees a newline. Fully buffered means it will only flush when the buffer fills (for example, 4K worth of data), or when the stream is closed (or when you fflush).
When you run it interactively, the flush happens before the write because printf encounters the \n and flushes automatically.
However, when you run it otherwise (such as by redirecting output to a file or in an online compiler/executor where it would probably do the very same thing to capture data for presentation), the flush happens after the write (because printf is not flushing after every line).
In fact, you don't need all that pipe stuff in there to see this in action, as per the following program:
#include <stdio.h>
#include <unistd.h>
int main (void) {
printf ("Hello\n");
write (1, "Goodbye\n", 8);
return 0;
}
When I execute myprog ; echo === ; myprog >myprog.out ; cat myprog.out, I get:
Hello
Goodbye
===
Goodbye
Hello
and you can see the difference that the different types of buffering makes.
If you want line buffering regardless of redirection, you can try:
setvbuf (stdout, NULL, _IOLBF, BUFSIZ);
early on in your program. It's implementation-defined whether an implementation supports this, so it may have no effect, but I've not seen many where it doesn't work.
You shouldn't mix calls to write and printf on a single file descriptor. Change write to fwrite.
Functions which use FILE are buffered. Functions which use file descriptors are not. This is why you may get mixed order.
You can also try calling fflush before write.
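For instance, a sketch of both fixes applied to a mixed-IO snippet:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("printed via stdio\n");

    /* Option 1: drain stdio's buffer before touching the raw descriptor. */
    fflush(stdout);
    write(STDOUT_FILENO, "raw write\n", 10);

    /* Option 2: stay within stdio so everything shares one buffer. */
    fwrite("buffered write\n", 1, 15, stdout);
    return 0;
}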
When you write onto the same file, or pipe, or whatever by two means at once (direct IO and output stream) you can get this behaviour. The reason is that the output stream is buffered.
With fflush() you can control that behaviour.
What is happening is that printf writes to stdout in a buffered way (the string is kept in a buffer before being output), while the later write to stdout is unbuffered. This can have the effect that the output from write appears first if the buffer from the printf is only flushed later on.
You can explicitly flush using fflush(), but even better would be not to mix buffered and non-buffered writes to the same output. Type man printf, man fflush, man fwrite etc. on your terminal to learn more about what these commands do exactly.

Does reading from stdin flush stdout?

stdout is line-buffered when connected to a terminal, but I remember reading somewhere that reading (at least from stdin) will automatically flush stdout. All C implementations that I have used have done this, but I can't find it in the standard now.
It does make sense that it works that way, otherwise code like this:
printf("Type some input: ");
fgets(line, sizeof line, stdin);
would need an extra fflush(stdout);
So is stdout guaranteed to be flushed here?
EDIT:
As several replies have said, there seems to be no guarantee in the standard that the output to stdout in my example will appear before the read from stdin, but on the other hand, this intent is stated in (my free draft copy of) the standard:
The input and output dynamics of interactive devices shall take place as specified in 7.19.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
(ISO/IEC 9899:TC2 Committee Draft, May 6, 2005, page 14)
So it seems that there is no guarantee, but it will probably work in most implementations anyway. (Famous last words...)
No, it does not.
To answer your question, you do need the extra fflush(stdout); after your printf() call to make sure the prompt appears before your program tries to read input. Reading from stdin doesn't fflush(stdout); for you.
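So the portable version of the prompt example is, sketched:

#include <stdio.h>

int main(void)
{
    char line[256];

    printf("Type some input: ");
    fflush(stdout);  /* make sure the prompt is visible before blocking */
    if (fgets(line, sizeof line, stdin) != NULL)
        printf("You typed: %s", line);
    return 0;
}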
No. You need to fflush(stdout); many implementations will flush at every newline if they are sending output to a terminal.
No. stdin and stdout are buffered. You need to explicitly fflush(stdout) in order for the buffered data to be pushed out to a view device such as a terminal. The buffering of the data can be set by calling setvbuf.
Edit: Thanks Jonathan. To answer the question: reading from stdin does not flush stdout. I may have gone off on a tangent here by showing code that demonstrates how to use setvbuf.
#include <stdio.h>
int main(void)
{
    FILE *input, *output;
    char bufr[512];

    input = fopen("file.in", "r+b");
    output = fopen("file.out", "w");

    /* set up input stream for minimal disk access,
       using our own character buffer */
    if (setvbuf(input, bufr, _IOFBF, 512) != 0)
        printf("failed to set up buffer for input file\n");
    else
        printf("buffer set up for input file\n");

    /* set up output stream for line buffering using space that
       will be obtained through an indirect call to malloc */
    if (setvbuf(output, NULL, _IOLBF, 132) != 0)
        printf("failed to set up buffer for output file\n");
    else
        printf("buffer set up for output file\n");

    /* perform file I/O here */

    /* close files */
    fclose(input);
    fclose(output);
    return 0;
}
No, that's not part of the standard. It's certainly possible that you've used a library implementation where the behavior you described did happen, but that's a non-standard extension that you shouldn't rely on.
No. And watch out for inter-process deadlocks when dealing with the standard streams, where either a read on stdin or a write on stdout can block.
