When I was reading about the usage of setvbuf(), I came across the _IONBF (no buffering) mode. I was curious how stdin and stdout would be affected if I tried to disable buffering. Below is an example:
The Code:
#include <stdio.h>

int main(void)
{
    int num;
    char a;

    setvbuf(stdin, NULL, _IONBF, 0); /* turn off buffering */
    scanf("%d", &num);
    a = getchar();
    printf("%d %c\n", num, a);
    return 0;
}
The Question:
1.) With the above code, the sample input I've given to the program (123a, etc.) yields the same output even if I don't include setvbuf().
2.) I understand that a buffer is intermediate storage into which a chunk of data is collected, and that all of that data is sent to the input or output stream either when the buffer is full or when a newline is given.
3.) So what is the effect of disabling buffering? Is it in terms of performance?
It is partly performance and partly control over how stream library functions (fread, fgets, fprintf, etc.) relate to actual I/O to a device/file.
For example, stream output to a character device (e.g. your terminal) is, by default, line buffered. The effect of this is that the following code,
printf("start ");
sleep(10);
printf("stop\n");
will wait 10 seconds and then print start stop[NL]. The first printf was buffered because there was no newline to flush the buffer. To get start to print and then sleep 10 seconds, you could either add an fflush call before the sleep call, or turn off buffering on stdout with setvbuf.
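For instance, here is a minimal, self-contained sketch of the fflush variant (sleep() comes from the POSIX header <unistd.h>):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("start ");
    fflush(stdout);   /* push "start " out despite the missing newline */
    sleep(10);
    printf("stop\n"); /* the newline flushes the line-buffered stream */
    return 0;
}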
Stream output to a block device or disk file is, by default, fully buffered. This means that the buffer won't flush until you either overflow the buffer or call fflush. This can be a problem with files if, for example, you want to monitor the output in real time with tail -f. If you know that this kind of monitoring may be done, you could switch the stream to line buffering so that every time a newline is printed, the buffer is flushed to the file. This comes at the cost of increased overhead, since disk blocks are written several times as successive newlines are printed. (Note: this overhead depends on how the file system is mounted. A fixed drive mounted with a write-back cache will have less overhead, because the OS buffers writes to the disk, versus a removable drive mounted write-through. In the latter case, the OS does the partial writes promptly to improve the chances of avoiding data loss if the drive is removed without dismounting.)
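As a sketch of that idea (the file name app.log is just an example), you could request line buffering on a log file like this; on glibc, _IOLBF is honored for regular files too, so tail -f would see each line as it is written:

#include <stdio.h>

int main(void)
{
    FILE *log = fopen("app.log", "w");
    if (log == NULL)
        return 1;

    /* line buffered: each newline pushes the data out to the file;
       setvbuf must be called before any other operation on the stream */
    setvbuf(log, NULL, _IOLBF, 0);

    fprintf(log, "first event\n");   /* flushed at the newline */
    fprintf(log, "second event\n");  /* flushed at the newline */
    fclose(log);
    return 0;
}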
Related
I am going through C Primer Plus and there is this topic about "Output Flushing".
Now it says:
printf() statements send output to an intermediate storage area called a buffer. Every now and then, the material in the buffer is sent to the screen. The standard C rules for when output is sent from the buffer to the screen are clear:
It is sent when the buffer gets full.
When a newline character is encountered.
When there is impending input.
(Sending the output from the buffer to the screen or file is called flushing the buffer.)
Now, to verify the above statements, I wrote this simple program:
#include <stdio.h>

int main(int argc, char **argv)
{
    printf("Hello World");
    return 0;
}
So the printf() neither contains a newline, nor is there any impending input (e.g. a scanf() statement or any other input statement). Then why does it print its contents on the screen?
Let's suppose the first condition evaluated to true and the buffer got full (which can't happen here with such a short string).
Keeping that in mind, I shortened the statement inside printf() to
printf("Hi");
Still it prints the string to the console.
So what's the deal here? All of the above conditions are false, but I'm still getting the output on screen.
Can you please elaborate? It appears I'm making a mistake in understanding the concept. Any help is highly appreciated.
EDIT: As suggested in a very useful comment, maybe the exit() call at the end of the program causes all the buffers to flush, resulting in the output on the console. But then what if we hold the screen before exit() runs? Like this:
#include <stdio.h>

int main(int argc, char **argv)
{
    printf("Hello World!");
    getchar();
    return 0;
}
It still outputs on the console.
Output buffering is an optimization technique. Writing data to some devices (hard disks, for example) is an expensive operation; that's why buffering appeared. In essence, it avoids writing data byte by byte (or char by char) and instead collects it in a buffer in order to write several KiB of data at once.
Being an optimization, output buffering must be transparent to the user (it is transparent even to the program). It must not affect the behaviour of the program; with or without buffering (or with different sizes of the buffer), the program must behave the same. This is what the rules you mentioned are for.
A buffer is just an area in memory where the data to be written is temporarily stored until enough data accumulates to make the actual writing process to the device efficient. Some devices (hard disk etc.) do not even allow writing (or reading) data in small pieces but only in blocks of some fixed size.
The rules of buffer flushing:
It is sent when the buffer gets full.
This is obvious. The buffer is full, its purpose has been fulfilled, so let's push the data forward to the device. Also, there is probably more data to come from the program, so we need to make room for it.
When a newline character is encountered.
There are two types of devices: line-mode and block-mode. This rule applies only to line-mode devices (the terminal, for example). It doesn't make much sense to flush the buffer on newlines when writing to disk, but it makes a lot of sense when the program is writing to the terminal. In front of the terminal there is the user, waiting impatiently for output. Don't let them wait too long.
But why does output to a terminal need buffering at all? Writing to the terminal is not expensive. That's true when the terminal is physically located near the processor, but not when the terminal and the processor are half the globe apart and the user runs the program over a remote connection.
When there is impending input.
It should read "when there is impending input on the same device" to make it clear.
Reading is also buffered, for the same reason as writing: efficiency. The reading code uses its own buffer. It fills the buffer when needed, and then scanf() and the other input-reading functions get their data from the input buffer.
When input is about to happen on the same device, the buffer must be flushed (the data actually written to the device) in order to ensure consistency. The program has sent some data to the output and now it expects to read back the same data; that's why the data must be flushed to the device, so that the reading code finds it there and can load it.
But why are the buffers flushed when the application exits?
Err... buffering is transparent, it must not affect the application behaviour. Your application has sent some data to the output. The data must be there (on the output device) when the application quits.
The buffers are also flushed when the associated files are closed, for the same reason. And this is what happens when the application exits: the cleanup code closes all the open files (standard input and output are just files from the application's point of view), and closing forces the buffers to be flushed.
Part of the specification for exit() in the C standard (POSIX link given) is:
Next, all open streams with unwritten buffered data are flushed, all open streams are closed, …
So, when the program exits, pending output is flushed, regardless of newlines, etc. Similarly, when the file is closed (fclose()), pending output is written:
Any unwritten buffered data for the stream are delivered to the host environment to be written to the file; any unread buffered data are discarded.
And, of course, the fflush() function flushes the output.
The rules quoted in the question are not wholly accurate.
When the buffer is full — this is correct.
When a newline is encountered — this is not correct, though it often applies. If the output device is an 'interactive device', then line buffering is the default. However, if the output device is 'non-interactive' (a disk file, a pipe, etc.), then the output is not necessarily (or usually) line buffered.
When there is impending input — this too is not correct, though it is commonly the way it works. Again, it depends on whether the input and output devices are 'interactive'.
The output buffering mode can be modified by calling setvbuf() to set no buffering, line buffering, or full buffering.
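For example, a minimal sketch (the buffer size here is an arbitrary choice, not a requirement):

#include <stdio.h>

int main(void)
{
    static char buf[BUFSIZ];

    /* setvbuf must be called after the stream is opened but before
       any other operation is performed on it; pick exactly one mode: */
    setvbuf(stdout, buf, _IOFBF, sizeof buf);   /* fully buffered */
    /* setvbuf(stdout, buf, _IOLBF, sizeof buf);   line buffered  */
    /* setvbuf(stdout, NULL, _IONBF, 0);           unbuffered     */

    printf("held in buf until it fills, fflush(), or program exit\n");
    return 0;
}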
The standard says (§7.21.3):
¶3 When a stream is unbuffered, characters are intended to appear from the source or at the destination as soon as possible. Otherwise characters may be accumulated and transmitted to or from the host environment as a block. When a stream is fully buffered, characters are intended to be transmitted to or from the host environment as a block when a buffer is filled. When a stream is line buffered, characters are intended to be transmitted to or from the host environment as a block when a new-line character is encountered. Furthermore, characters are intended to be transmitted as a block to the host environment when a buffer is filled, when input is requested on an unbuffered stream, or when input is requested on a line buffered stream that requires the transmission of characters from the host environment. Support for these characteristics is implementation-defined, and may be affected via the setbuf and setvbuf functions.
…
¶7 At program startup, three text streams are predefined and need not be opened explicitly — standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
Also, §5.1.2.3 Program execution says:
The input and output dynamics of interactive devices shall take place as specified in 7.21.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
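The guarantee that stderr "is not fully buffered" is easy to observe. A quick sketch, assuming a Unix-like system: run it with stdout redirected to a file (./a.out > log) and the stderr message appears immediately, while the stdout message shows up only at exit:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("via stdout\n");          /* fully buffered once redirected  */
    fprintf(stderr, "via stderr\n"); /* not fully buffered: appears now */
    sleep(5);                        /* watch which message arrives first */
    return 0;
}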
The strange buffering behaviour of printf can be explained with the simple C code below. Please read through the whole thing, run it, and make sure you understand it, because the result is not obvious (it's a bit tricky).
#include <stdio.h>
#include <unistd.h>   /* for sleep() */

int main(void)
{
    int a = 0, b = 0, c = 0;

    printf("Enter two numbers");
    while (1)
    {
        sleep(1000);
    }
    /* unreachable because of the infinite loop above */
    scanf("%d%d", &b, &c);
    a = b + c;
    printf("The sum is %d", a);
    return 1;
}
EXPERIMENT #1:
Action: compile and run the above code.
Observations:
The expected output is
Enter two numbers
but this output is not seen.
EXPERIMENT #2:
Action: move the scanf statement above the while loop.
#include <stdio.h>
#include <unistd.h>   /* for sleep() */

int main(void)
{
    int a = 0, b = 0, c = 0;

    printf("Enter two numbers");
    scanf("%d%d", &b, &c);
    while (1)
    {
        sleep(1000);
    }
    /* unreachable because of the infinite loop above */
    a = b + c;
    printf("The sum is %d", a);
    return 1;
}
Observations: now the output is printed, just by changing the position of scanf (the reason is given at the end).
EXPERIMENT #3:
Action: now add \n to the printf statement as below.
#include <stdio.h>
#include <unistd.h>   /* for sleep() */

int main(void)
{
    int a = 0, b = 0, c = 0;

    printf("Enter two numbers\n");
    while (1)
    {
        sleep(1000);
    }
    /* unreachable because of the infinite loop above */
    scanf("%d%d", &b, &c);
    a = b + c;
    printf("The sum is %d", a);
    return 1;
}
Observation: the output Enter two numbers is seen (after adding \n).
EXPERIMENT #4:
Action: now remove \n from the printf line, and comment out the while loop, the scanf line, the addition line, and the printf line that prints the result.
#include <stdio.h>

int main(void)
{
    int a = 0, b = 0, c = 0;

    printf("Enter two numbers");
    // while (1)
    // {
    //     sleep(1000);
    // }
    // scanf("%d%d", &b, &c);
    // a = b + c;
    // printf("The sum is %d", a);
    return 1;
}
Observations: the line "Enter two numbers" is printed to the screen.
ANSWER:
The reason behind the strange behaviour is described in Richard Stevens' book.
PRINTF PRINTS TO SCREEN WHEN
The job of printf is to write output to the stdout buffer. The C library flushes the output buffer when:
it needs to read something in from the input buffer on the same device (EXPERIMENT #2),
it encounters a newline, since stdout is line buffered by default when attached to a terminal (EXPERIMENT #3),
the program exits, at which point all output buffers are flushed (EXPERIMENT #4).
By default stdout is set to line buffering, so printf will not print until the line ends. If the stream is unbuffered, all output appears immediately, as is. If it is fully buffered, it is flushed only when the buffer is full (or the program exits).
I have a C program that writes 3 lines to stdout every 10 ms. If I redirect the output to a file (using >), there are long delays (60 ms) in the running of the program. The delays are periodic (say every 5 seconds).
If I just let it write to console or redirect to /dev/null, there is no problem.
I suspected that this was a stdout buffering problem, but using fflush(stdout) didn't solve it.
How can I solve the issue?
If I redirect the output to a file (using >) there will be long delays (60ms) in the running of the program.

That's because when stdout is a terminal device, it is usually (although not required) line buffered, that is, the output buffer is flushed when a newline character is written; in the case of regular files, output is fully buffered, meaning the buffers are flushed either when they are full or when you close the file (or explicitly call fflush(), of course).
fflush(stdout) may not be enough for you because that only flushes the standard I/O library buffers, but the kernel also buffers and delays writes to disk. You can call fsync() on the file descriptor to flush the modified buffer cache pages to disk after calling fflush(), as in fsync(STDOUT_FILENO).
Be careful not to call fsync() without calling fflush() first.
UPDATE: You can also try sync(), which, unlike fsync(), does not block waiting for the underlying writes to return. Or, as suggested in another answer, fdatasync() may be a good choice because it avoids the overhead of updating file times.
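Putting the two layers together, a minimal sketch of the flush-then-sync pattern, assuming stdout has been redirected to a regular file (fsync() on a terminal may fail with EINVAL):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("line one\n");
    printf("line two\n");

    fflush(stdout);        /* stdio buffer -> kernel          */
    fsync(STDOUT_FILENO);  /* kernel buffer cache -> the disk */
    return 0;
}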
You need to use fsync. The following:
fsync(fileno(stdout))
should help. Note that the Linux kernel will still buffer and throttle I/O according to its internal scheduler limits. Running as root and setting a very low nice value might make a difference if you're not getting the frequency you want.
If it's still too slow, try using fdatasync instead. Every fflush-plus-fsync causes the filesystem to update inode metadata (file size, access time, etc.) as well as the actual data itself. If you know in advance how many blocks of data you'll be writing, you can try the following trick:
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

int main(int argc, char **argv)
{
    FILE *fp = fopen("test.txt", "w");
    char *line = "Test\n";
    char fill[500] = {0};  /* 100 * strlen(line) zero bytes */

    fwrite(fill, 1, sizeof fill, fp);
    fflush(fp);
    fsync(fileno(fp));
    rewind(fp);
    for (int i = 0; i < 100; i++) {
        fwrite(line, strlen(line), 1, fp);
        fflush(fp);
        fdatasync(fileno(fp));
    }
    fclose(fp);
    return 0;
}
The first fwrite call writes 5*100 zero bytes to the file in one chunk and fsyncs, so the data is written to disk and the inode information is updated. Now we can write up to 500 bytes into the file without touching the filesystem metadata. rewind(3) returns the file position to the beginning of the file so we can overwrite the data without changing the file size in the inode.
Timing that program gives the following:
$ time ./fdatasync
./fdatasync 0.00s user 0.01s system 1% cpu 0.913 total
So it ran fdatasync and sync'ed to disk 100 times in 0.913 seconds, which averages out to ~9ms per write & fdatasync call.
It could be just that every 5 seconds you are filling up your disk buffer, and there is a spike in latency due to flushing to the actual disk. Check with iostat.
I'm reading Advanced Programming in the UNIX Environment, 3rd Edition and I'm having trouble understanding a section in it (page 145, Section 5.4 Buffering, Chapter 5).
Line buffering comes with two caveats. First, the size of the buffer that the
standard I/O library uses to collect each line is fixed, so I/O might take place if
we fill this buffer before writing a newline. Second, whenever input is
requested through the standard I/O library from either (a) an unbuffered stream or (b) a line-buffered stream (that requires data to be requested from the kernel),
all line-buffered output streams are flushed. The reason for the qualifier on (b)
is that the requested data may already be in the buffer, which doesn’t require
data to be read from the kernel. Obviously, any input from an unbuffered
stream, item (a), requires data to be obtained from the kernel.
I can't get the second caveat and the qualifier on (b). My English isn't good, so could you clarify it for me, maybe in an easier way? Thanks.
The point behind the machinations described is to ensure that prompts appear before the system goes into a mode where it is waiting for input.
If an input stream is unbuffered, every time the standard I/O library needs data, it has to go to the kernel for it. (That's the last sentence of the quote.) That's because the standard I/O library does not buffer any data, so when it needs more data, it has to read from the kernel. (I think that even an unbuffered stream might buffer one character of data, because it would need to read up to a space character, for example, to detect when it has reached the end of a %s format string; it has to put back (ungetc()) the extra character it read so that the next time it needs a character, the character it put back is there. But it never needs more than that one character of buffering.)
If an input stream is line buffered, there may already be some data in its input buffer, in which case it may not need to go to the kernel for more data. In that case, it might not flush anything. This can occur if the scanf() format requested "%s" and you typed hello world; it would read the whole line, but the first scan would stop after hello, and the next scanf() would not need to go to the kernel for the word world because it is already in the buffer.
However, if there isn't any data in the buffer, it has to ask the kernel to read the data, and it ensures that any line-buffered output streams are flushed so that if you write:
printf("Enter name: ");
if (scanf("%63s", name) != 1)
…handle error or EOF…
then the prompt (Enter name:) appears. However, if you'd previously typed hello world and previously read just hello, then the prompt wouldn't necessarily appear, because world was already waiting in the (line-buffered) input stream.
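Here is a small sketch of that scenario (the prompt strings are made up for illustration; the exact behaviour is implementation-defined). Type hello world at the first prompt: the first prompt is flushed because the library must go to the kernel for input, but the second prompt may not appear before the second read, because world is already sitting in stdin's buffer:

#include <stdio.h>

int main(void)
{
    char first[64], second[64];

    printf("Enter name: ");    /* flushed: input must come from the kernel */
    if (scanf("%63s", first) != 1)
        return 1;

    printf("Enter another: "); /* may stay buffered this time */
    if (scanf("%63s", second) != 1)
        return 1;

    printf("\ngot \"%s\" and \"%s\"\n", first, second);
    return 0;
}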
This may explain the point.
Let's imagine that you have a pipe in your program and you use it for communication between different parts of your program (a single-threaded program writing to and reading from this single pipe).
If you write to the writing end of the pipe, say the letter 'A', and then call the read operation on the reading end of the pipe, you would expect the letter 'A' to be read. However, the read operation is a system call into the kernel. For read to be able to return the letter 'A', the 'A' must have been written to the kernel first. This means that the write of 'A' must be flushed; otherwise it would stay in your local write buffer and your program would be blocked forever.
In consequence, before calling a read operation all write buffers are flushed. This is what the section (b) says.
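A sketch of that situation, using POSIX pipe() with stdio streams wrapped around both ends via fdopen():

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return 1;

    FILE *wr = fdopen(fds[1], "w");
    FILE *rd = fdopen(fds[0], "r");

    fputc('A', wr);
    fflush(wr);         /* without this, 'A' stays in the stdio buffer
                           and the fgetc() below blocks forever        */

    int c = fgetc(rd);  /* reads the 'A' from the kernel pipe buffer   */
    printf("read: %c\n", c);
    return 0;
}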
The size of the buffer that the standard I/O library uses to collect each line is fixed.

With fgets, for example, we read one line at a time; each call reads up to the specified buffer size or up to a newline, whichever comes first, so a line longer than the buffer forces I/O before any newline is seen.

Second, whenever input is requested through the standard I/O library, it can come from an unbuffered stream or a line-buffered stream.

unbuffered stream - it does not collect characters in a buffer; they are passed through right away.

line-buffered - characters are stored in the buffer and flushed when the line is completed.

Say we print content with printf without a \n: it is buffered until we flush or print a newline. Once the line is complete, the stream buffer is flushed internally.

(b) is that the requested data may already be in the buffer, which doesn't require data to be read from the kernel

With a line-buffered input stream, the requested data may already be sitting in the buffer from an earlier read, so there is no need to get it from the kernel again (and hence no need to flush output streams).

(a) requires data to be obtained from the kernel.

Any input from an unbuffered stream has to be obtained from the kernel each time, because an unbuffered stream cannot store anything in a buffer.
#include <stdio.h>

int main(void)
{
    printf("Hello"); // doesn't display anything on the screen
    printf("\n");    // now "Hello" is displayed on the screen
    return 0;
}
All characters (candidates for printing) are buffered until a newline is received? Correct?
Q1 - Why does it wait for a newline character before printing to the terminal?
Q2 - Where are the characters of the first printf (i.e. "Hello") buffered?
Q3 - What is the flow of printing: printf() -> puts() -> putchar() -> then where? The driver? Does the driver have control to wait until \n?
Q4 - What is the role of the stdout that is attached to a process?
I'm looking for an in-depth picture. Feel free to edit the question if something doesn't make sense.
printf does not write directly to the screen; instead it writes to the output stream, which is buffered by default. The reason for this is that there may not even be a screen attached, and the output can go to a file as well. For performance reasons, it is better for the system if access to disc is buffered and then executed in one step with appropriately sized chunks, rather than writing every time.
You can even change the size of the buffer, or set it to none, which means that all output goes directly to the target; that may be useful for logging purposes.
setbuf(stdout, NULL);
The buffer is flushed either when it is full or when certain criteria are fulfilled, like printing a newline. So if you executed the printf in a loop, you would notice that it writes out in chunks unless you have a newline in between.
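A little sketch to observe this on a terminal (run it and watch whether the dots trickle out one per second or arrive in one chunk at the end):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* setbuf(stdout, NULL); */  /* uncomment to see the dots one by one */
    for (int i = 0; i < 10; i++) {
        printf(".");   /* no newline: stays in the buffer on a terminal */
        sleep(1);
    }
    printf("\n");      /* the newline (or program exit) flushes it all  */
    return 0;
}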
I'll start with some definitions and then go on to answer your questions.
File: It is an ordered sequence of bytes. It can be a disk file, a stream of bytes generated by a program (such as a pipeline), a TCP/IP socket, a stream of bytes received from or sent to a peripheral device (such as the keyboard or the display) etc. The latter two are interactive files. Files are typically the principal means by which a program communicates with its environment.
Stream: It is a representation of a flow of data from one place to another, e.g., from disk to memory, memory to disk, one program to another, etc. A stream is a channel into which data can be put (write) or from which data can be taken (read). Thus, it's an interface for writing data into or reading data from a file, which can be of any type as stated above. Before you can perform any operation on a file, the file must be opened. Opening a file associates it with a stream. Streams are represented by the FILE data type defined in the stdio.h header. A FILE object (it's a structure) holds all of the internal state information about the connection to the associated file, including such things as the file position indicator and buffering information. FILE objects are allocated and managed internally by the input/output library functions; you should not try to create your own objects of FILE type, as the library does it for us. Programs should deal only with pointers to these objects (FILE *) rather than the objects themselves.
Buffer: Buffer is a block of memory which belongs to a stream and is used to hold stream data temporarily. When the first I/O operation occurs on a file, malloc is called and a buffer is obtained. Characters that are written to a stream are normally accumulated in the buffer (before being transmitted to the file in chunks), instead of appearing as soon as they are output by the application program. Similarly, streams retrieve input from the host environment in blocks rather than on a character-by-character basis. This is done to increase efficiency, as file and console I/O is slow in comparison to memory operations.
The C library provides three predefined text streams (FILE *) open and available for use at program start-up. These are stdin (the standard input stream, which is the normal source of input for the program), stdout (the standard output stream, which is used for normal output from the program), and stderr (the standard error stream, which is used for error messages and diagnostics issued by the program). Whether these streams are buffered or unbuffered is implementation-defined and not required by the standard.
The C library provides three types of buffering - unbuffered, block buffered, and line buffered. Unbuffered means that characters appear on the destination file as soon as they are written (for an output stream), or that input is read from a file character by character instead of in blocks (for input streams). Block buffered means that characters are saved up in the buffer and written or read as a block. Line buffered means that characters are saved up only until a newline is written into or read from the buffer.
stdin and stdout are block buffered if and only if they can be determined not to refer to an interactive device; otherwise they are line buffered (this is true of any stream). stderr is unbuffered by default.
The standard library provides functions to alter the default behaviour of streams. You can use fflush to force the data out of the output stream buffer (fflush is undefined for input streams). You can make the stream unbuffered using the setbuf function.
Now, let's come to your questions.
Unmarked question: Yes, because stdout normally refers to a display terminal unless you have redirected the output using the > operator.
Q1: It waits because stdout is line buffered when it refers to a terminal.
Q2: The characters are buffered, well, in the buffer allocated to the stdout stream.
Q3: Flow of the printing is: memory --> stdout buffer --> display terminal. There are kernel buffers as well controlled by the OS which the data pass through before appearing on the terminal.
Q4: stdout refers to the standard output stream which is usually a terminal.
Finally, here's some sample code to experiment with before I finish my answer.
#include <stdio.h>
#include <limits.h>

int main(void)
{
    // setbuf(stdout, NULL);       // make stdout unbuffered
    printf("Hello, World!");       // no newline
    // printf("Hello, World!\n");  // with a newline

    // busy-wait, only to demonstrate that stdout is line buffered
    for (size_t i = 0; i < UINT_MAX; i++)
        ;                          // null statement
    printf("\n");                  // flush the buffer
    return 0;
}
Yes, by default standard output is line buffered when it's connected to a terminal. The buffer is managed by the C library; normally you don't have to worry about it.
You can change this behavior using setbuf() or setvbuf(); for example, to turn off buffering entirely:
setbuf(stdout, NULL);
printf, puts, and putchar all write to standard output, so they use the same buffer.
If you wish, you can flush out the characters before the new line by calling
fflush(stdout);
This can be handy if you're slowly printing something like a progress bar where each character gets printed without a newline.
#include <stdio.h>

int main(void)
{
    printf("Hello"); // doesn't display anything on the screen
    fflush(stdout);  // now "Hello" appears on the screen
    printf("\n");    // the newline gets printed
    return 0;
}
Below is sample code using fflush() (note that this example is DOS-specific: conio.h, clrscr(), and getch() are Borland extensions):
#include <string.h>
#include <stdio.h>
#include <conio.h>
#include <io.h>

void flush(FILE *stream);

int main(void)
{
    FILE *stream;
    char msg[] = "This is a test";

    /* create a file */
    stream = fopen("DUMMY.FIL", "w");
    /* write some data to the file */
    fwrite(msg, strlen(msg), 1, stream);
    clrscr();
    printf("Press any key to flush DUMMY.FIL:");
    getch();
    /* flush the data to DUMMY.FIL without closing it */
    flush(stream);
    printf("\nFile was flushed, Press any key to quit:");
    getch();
    return 0;
}

void flush(FILE *stream)
{
    int duphandle;

    /* flush the stream's internal buffer */
    fflush(stream);
    /* make a duplicate file handle */
    duphandle = dup(fileno(stream));
    /* close the duplicate handle to flush the DOS buffer */
    close(duphandle);
}
All I know about fflush() is that it is a library function used to flush an output buffer. I want to know what the basic purpose of fflush() is and where I can use it. Mainly, I am interested in knowing what problems there can be with using fflush().
It's a little hard to say what "can be problems with" (excessive?) use of fflush. All kinds of things can be, or become, problems, depending on your goals and approaches. Probably a better way to look at this is what the intent of fflush is.
The first thing to consider is that fflush is defined only on output streams. An output stream collects "things to write to a file" into a large(ish) buffer, and then writes that buffer to the file. The point of this collecting-up-and-writing-later is to improve speed/efficiency, in two ways:
On modern OSes, there's some penalty for crossing the user/kernel protection boundary (the system has to change some protection information in the CPU, etc). If you make a large number of OS-level write calls, you pay that penalty for each one. If you collect up, say, 8192 or so individual writes into one large buffer and then make one call, you remove most of that overhead.
On many modern OSes, each OS write call will try to optimize file performance in some way, e.g., by discovering that you've extended a short file to a longer one, and it would be good to move the disk block from point A on the disk to point B on the disk, so that the longer data can fit contiguously. (On older OSes, this is a separate "defragmentation" step you might run manually. You can think of this as the modern OS doing dynamic, instantaneous defragmentation.) If you were to write, say, 500 bytes, and then another 200, and then 700, and so on, it will do a lot of this work; but if you make one big call with, say, 8192 bytes, the OS can allocate a large block once, and put everything there and not have to re-defragment later.
So, the folks who provide your C library and its stdio stream implementation do whatever is appropriate on your OS to find a "reasonably optimal" block size, and to collect up all output into chunk of that size. (The numbers 4096, 8192, 16384, and 65536 often, today, tend to be good ones, but it really depends on the OS, and sometimes the underlying file system as well. Note that "bigger" is not always "better": streaming data in chunks of four gigabytes at a time will probably perform worse than doing it in chunks of 64 Kbytes, for instance.)
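As a rough illustration of that tradeoff (the file name out.dat is an arbitrary example), compare the buffered and unbuffered variants of the sketch below under a tool such as strace and count the write(2) calls; exact counts and timings will vary by system:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("out.dat", "w");
    if (fp == NULL)
        return 1;

    /* Uncomment to force one write(2) per character instead of
       one write(2) per buffer-sized chunk: */
    /* setvbuf(fp, NULL, _IONBF, 0); */

    for (int i = 0; i < 100000; i++)
        fputc('x', fp);  /* buffered: cheap memory stores, few syscalls */

    fclose(fp);          /* flushes the final partial buffer */
    return 0;
}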
But this creates a problem. Suppose you're writing to a file, such as a log file with date-and-time stamps and messages, and your code is going to keep writing to that file later, but right now, it wants to suspend for a while and let a log-analyzer read the current contents of the log file. One option is to use fclose to close the log file, then fopen to open it again in order to append more data later. It's more efficient, though, to push any pending log messages to the underlying OS file, but keep the file open. That's what fflush does.
Buffering also creates another problem. Suppose your code has some bug, and it sometimes crashes but you're not sure if it's about to crash. And suppose you've written something and it's very important that this data get out to the underlying file system. You can call fflush to push the data through to the OS, before calling your potentially-bad code that might crash. (Sometimes this is good for debugging.)
Or, suppose you're on a Unix-like system, and have a fork system call. This call duplicates the entire user-space (makes a clone of the original process). The stdio buffers are in user space, so the clone has the same buffered-up-but-not-yet-written data that the original process had, at the time of the fork call. Here again, one way to solve the problem is to use fflush to push buffered data out just before doing the fork. If everything is out before the fork, there's nothing to duplicate; the fresh clone won't ever attempt to write the buffered-up data, as it no longer exists.
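A classic sketch of the fork case (POSIX only; run it with stdout redirected to a file so the stream is fully buffered):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("buffered line\n");  /* stays in the stdio buffer while
                                   stdout is fully buffered          */

    /* fflush(stdout); */       /* uncomment to avoid the duplicate  */

    fork();  /* both parent and child now hold a copy of the unflushed
                buffer, so "buffered line" appears twice in the file:
                once when each process exits and flushes its copy     */
    return 0;
}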
The more fflush-es you add, the more you're defeating the original idea of collecting up large chunks of data. That is, you are making a tradeoff: large chunks are more efficient, but are causing some other problem, so you make the decision: "be less efficient here, to solve a problem more important than mere efficiency". You call fflush.
Sometimes the problem is simply "debug the software". In that case, instead of repeatedly calling fflush, you can use functions like setbuf and setvbuf to alter the buffering behavior of a stdio stream. This is more convenient (fewer, or even no, code changes required—you can control the set-buffering call with a flag) than adding a lot of fflush calls, so that could be considered a "problem with use (or excessive-use) of fflush".
Well, @torek's answer is almost perfect, but there's one point which is not quite accurate.
The first thing to consider is that fflush is defined only on output streams.
According to man fflush, fflush can also be used on input streams:
For output streams, fflush() forces a write of all user-space buffered data for the given output or update stream via the stream's underlying write function. For input streams, fflush() discards any buffered data that has been fetched from the underlying file, but has not been consumed by the application. The open status of the stream is unaffected.
So, when used on an input stream, fflush just discards the buffered data.
Here is a demo to illustrate it:
#include <stdio.h>
#include <stdlib.h>

#define MAXLINE 1024

int main(void)
{
    char buf[MAXLINE];

    printf("prompt: ");
    while (fgets(buf, MAXLINE, stdin) != NULL) {
        fflush(stdin);  /* POSIX/glibc: discard input still buffered;
                           ISO C leaves fflush on input streams undefined */
        if (fputs(buf, stdout) == EOF)
            printf("output err");
    }
    exit(0);
}
fflush() empties the buffers related to a stream. If you, for example, let a user input some data in a very short timespan (milliseconds) and also write some stuff into a file, the write and read buffers may have some leftover content. You then call fflush() to empty the buffers and force the output out, so you can be sure that the next input you get is what the user actually pressed.
reference: http://www.cplusplus.com/reference/cstdio/fflush/