I am writing a program in C on Linux where various things will be written to stdout via printf. Naturally, I would try to minimize the IO calls and buffer all the information and then pass it to a single print call. However, through testing, I have discovered that printf does buffering of its own until it reaches a '\n'.
My question is, can I be certain that all printf implementations do this, or is glibc just optimized? Is it reliable to trust printf to do the buffering for me?
The C standard allows both unbuffered and buffered streams. The relevant part is C17 7.21.3/3:
When a stream is unbuffered, characters are intended to appear from the source or at the destination as soon as possible. Otherwise characters may be accumulated and transmitted to or from the host environment as a block. When a stream is fully buffered, characters are intended to be transmitted to or from the host environment as a block when a buffer is filled. When a stream is line buffered, characters are intended to be transmitted to or from the host environment as a block when a new-line character is encountered.
This is typically a decision that depends on the OS rather than on the standard library implementation. Most hosted, console-based OSes use line buffering, where \n will "flush the buffer". Otherwise an explicit call to fflush(stdout) will always do that (and it's, strictly speaking, more portable).
An example of an unbuffered system is a limited "bare metal" microcontroller one, where stdout is a UART and there are no hardware buffers to store a lot of characters.
Related
I know that the messages to be displayed are buffered in stdout. But consider the following code:
#include <stdio.h>

int main(void)
{
    while (1)    /* `true` would need <stdbool.h>; plain C spells it 1 */
    {
        printf("Buffered, will be flushed");
    }
}
I would have thought that the message doesn't get displayed until a newline is reached, yet here whole batches of messages get displayed at once. Why?
Buffers aren't infinite in size. If you output more than the buffer's size, it will be flushed automatically, and you'll see a bunch of messages all at once (repeatedly here).
Here's how the C standard describes the various forms of buffering (C11 draft n1570, §7.21.3/3):
When a stream is unbuffered, characters are intended to appear from the source or at the
destination as soon as possible. Otherwise characters may be accumulated and
transmitted to or from the host environment as a block. When a stream is fully buffered,
characters are intended to be transmitted to or from the host environment as a block when
a buffer is filled. When a stream is line buffered, characters are intended to be
transmitted to or from the host environment as a block when a new-line character is
encountered. Furthermore, characters are intended to be transmitted as a block to the host
environment when a buffer is filled, when input is requested on an unbuffered stream, or
when input is requested on a line buffered stream that requires the transmission of
characters from the host environment. Support for these characteristics is
implementation-defined, and may be affected via the setbuf and setvbuf functions.
Emphasis added. POSIX has essentially the same definition, which you can browse online in the Standard I/O Streams chapter.
Standard output is usually line buffered.
I have just started learning C programming, so I'm a beginner. While learning about the standard text streams, I came across the statement that the "stdout" stream is buffered while the "stderr" stream is not, but I can't make sense of it.
I have already read about buffers on this forum, and I like the candy analogy, but I am not able to figure out what is meant when one says "this stream is buffered and the other one is not." What is the effect?
What is the difference?
Update: Does it affect the speed of processing?
Buffer is a block of memory which belongs to a stream and is used to hold stream data temporarily. When the first I/O operation occurs on a file, malloc is called and a buffer is obtained. Characters that are written to a stream are normally accumulated in the buffer (before being transmitted to the file in chunks), instead of appearing as soon as they are output by the application program. Similarly, streams retrieve input from the host environment in blocks rather than on a character-by-character basis. This is done to increase efficiency, as file and console I/O is slow in comparison to memory operations.
The C library provides three types of buffering - unbuffered, block buffered, and line buffered. Unbuffered means that characters appear on the destination file as soon as they are written (for an output stream), or that input is read from a file on a character-by-character basis instead of in blocks (for an input stream). Block buffered means that characters are saved up in the buffer and written or read as a block. Line buffered means that characters are saved up only until a newline is written into or read from the buffer.
stdin and stdout are block buffered if and only if they can be determined not to refer to an interactive device; otherwise they are line buffered (this is true of any stream). stderr is always unbuffered by default.
The standard library provides functions to alter the default behaviour of streams. You can use fflush to force the data out of the output stream buffer (fflush is undefined for input streams). You can make the stream unbuffered using the setbuf function.
Buffering is collecting up many elements before writing them, or reading many elements at once before processing them. There is plenty of information about it out there on the Internet and in other SO questions.
EDIT in response to the question update: And yes, it's done for performance reasons. Writing to and reading from disks etc. will in any case transfer a 'block' of some sort for most devices, and there's a fair overhead in doing so. Batching these operations up can make for a dramatic performance difference.
A program writing to buffered output can perform the output in the time it takes to write to the buffer which is typically very fast, independent of the speed of the output device which may be slow.
With buffered output, the information is queued and a separate process deals with rendering the output.
With unbuffered output, the data is written directly to the output device, so it runs at the speed of the device. This is important for error output, because if the output were buffered, the process could fail before the buffered output made it to the display - so the program might terminate with no diagnostic output.
#include <stdio.h>

int main(void)
{
    int age;
    printf("Enter Your age: ");
    scanf("%lf", &age);
    printf("Your age is %d\n", age);
    return 0;
}
If we run the program the prompt will be displayed on the terminal screen, even though there is no newline character at the end of the prompt. How can this happen?
That most likely happens because your C standard library chooses to flush the output buffers in its implementation of scanf. That's a good idea because
this is a common problem and you usually want the prompt at the same line as the cursor and
it's not a performance problem because waiting for user input is way slower than flushing the buffer anyways
You can run an experiment:
printf("Enter a number: "); /* the prompt */
sleep(10); /* Add this. */
scanf("%lf", &number);
You'll see the prompt popping up after 10 seconds. It's not documented - that I know of - but it appears that in this case scanf causes stdout to be flushed.
You shouldn't count on this behavior, however; you should fflush(stdout) yourself when printing a prompt string.
As noted in a comment, the C standard (ISO/IEC 9899:2011) says:
§5.1.2.3 Program execution
¶6 The least requirements on a conforming implementation are:
- Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.
- At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
- The input and output dynamics of interactive devices shall take place as specified in 7.21.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
This is the observable behavior of the program.
¶7 What constitutes an interactive device is implementation-defined.
¶8 More stringent correspondences between abstract and actual semantics may be defined by
each implementation.
And the library section describing the functions in <stdio.h> says:
§7.21.3 Files
¶3 When a stream is unbuffered, characters are intended to appear from the source or at the
destination as soon as possible. Otherwise characters may be accumulated and
transmitted to or from the host environment as a block. When a stream is fully buffered,
characters are intended to be transmitted to or from the host environment as a block when
a buffer is filled. When a stream is line buffered, characters are intended to be
transmitted to or from the host environment as a block when a new-line character is
encountered. Furthermore, characters are intended to be transmitted as a block to the host
environment when a buffer is filled, when input is requested on an unbuffered stream, or
when input is requested on a line buffered stream that requires the transmission of
characters from the host environment. Support for these characteristics is
implementation-defined, and may be affected via the setbuf and setvbuf functions.
¶7 At program startup, three text streams are predefined and need not be opened explicitly
— standard input (for reading conventional input), standard output (for writing
conventional output), and standard error (for writing diagnostic output). As initially
opened, the standard error stream is not fully buffered; the standard input and standard
output streams are fully buffered if and only if the stream can be determined not to refer
to an interactive device.
Note the 'intent' comment in §5.1.2.3 ¶6.
The specification in §7.21.3 ¶7 means that standard error is either unbuffered or line buffered at program startup, and standard input and standard output are either unbuffered or line buffered (usually line buffered) unless the output is not an interactive device. On Unix, disk files, pipes, FIFOs, sockets (amongst others) are not interactive. Usually tty and pty devices are deemed interactive.
So, if you run your program with output going to a pipe (e.g. to cat or less or more), you probably* won't see a prompt, but if you type the number, you'll probably get the prompt followed by the first line of output with a newline all on a single line (and the number you typed won't appear). If you redirect the input from a file, you won't see the entered value at all. If you redirect to cat, you'll likely see your input on one line, then the prompt and the output. If you redirect to more, you may not see your input at all (separately from the response printing). If you redirect to less, you may get all sorts of interesting screen effects, and still not see the input you type.
You also need to fix the revised code. The %lf format is not appropriate for reading an integer (it is a hangover from a previous version of the code).
* Since the behaviour is implementation defined, what actually happens depends on your implementation. An implementation might do the equivalent of fflush(0) before any input, for example; this would send the prompt to the device. But no implementation is required to do that, and most aren't that thorough (partly because such thoroughness has a runtime cost associated with it).
Today I learned that stdout is line buffered when it refers to a terminal and fully buffered otherwise (e.g. when redirected to a file). So, normally, if I use printf() without a terminating '\n', the text will appear on the screen only when the buffer is full. How do I get the size of this buffer? How big is it?
The actual size is defined by the individual implementation; the standard doesn't mandate a minimum size (based on what I've been able to find, anyway). I don't have a clue how you'd determine the size of the buffer.
Edit
Chapter and verse:
7.19.3 Files
...
3 When a stream is unbuffered, characters are intended to appear from the source or at the
destination as soon as possible. Otherwise characters may be accumulated and
transmitted to or from the host environment as a block. When a stream is fully buffered,
characters are intended to be transmitted to or from the host environment as a block when
a buffer is filled. When a stream is line buffered, characters are intended to be
transmitted to or from the host environment as a block when a new-line character is
encountered. Furthermore, characters are intended to be transmitted as a block to the host
environment when a buffer is filled, when input is requested on an unbuffered stream, or
when input is requested on a line buffered stream that requires the transmission of
characters from the host environment. Support for these characteristics is
implementation-defined, and may be affected via the setbuf and setvbuf functions.
Emphasis added.
"Implementation-defined" is not a euphemism for "I don't know", it's simply a statement that the language standard explicitly leaves it up to the implementation to define the behavior.
And having said that, there is a non-programmatic way to find out; consult the documentation for your compiler. "Implementation-defined" also means that the implementation must document the behavior:
3.4.1
1 implementation-defined behavior
unspecified behavior where each implementation documents how the choice is made
2 EXAMPLE An example of implementation-defined behavior is the propagation of the high-order bit
when a signed integer is shifted right.
On Linux, a newly created pipe has a default size of 64K. The maximum pipe size lives in /proc/sys/fs/pipe-max-size; its default is typically 1048576. Given that, 65536 bytes would seem a reasonable guess for glibc's default file buffer.
However, a grep through the glibc source tree says otherwise:
libio/libio.h:#define _IO_BUFSIZ _G_BUFSIZ
sysdeps/generic/_G_config.h:#define _G_BUFSIZ 8192
sysdeps/unix/sysv/linux/_G_config.h:#define _G_BUFSIZ 8192
That may or may not answer the original question, but for a minute's effort the best guess is 8 kilobytes.
8K is adequate for mere line buffering; for anything more than line-buffered output, however, it is inefficient compared with 64K. Since the default pipe size is 64K, if a larger pipe size is neither expected nor explicitly set, a 64K stdio buffer is recommended - where performance is required, meager 8K buffers do not suffice.
A pipe's size can be increased with fcntl(pipefd, F_SETPIPE_SZ, 1048576), and the stdio-provided file buffer can be replaced with setvbuf(stdout, buffer, _IOFBF, 1048576). If a pipe is not used, pipe size is irrelevant; but when data is piped between two processes, increasing the pipe size can be a performance boon. Otherwise, the smallest buffer or the smallest pipe becomes the bottleneck.
When reading as well, a larger buffer means stdio might need fewer read() calls. The word "might" points to an important consideration: a single read() can return at most as much data as a single write() provided, so a read() returning fewer bytes than requested is to be expected, and an additional read() may yield additional bytes.
For writing a single line of data, stdio is overkill; however, stdio makes line-buffered output possible, and in some scenarios line-buffered output is essential. When writing to a file under the /proc or /sys virtual file systems, the line-feed byte should be included in a single write buffer - issuing it as a second write could produce an unexpected outcome. And when read()/write() are mixed with stdio, caveats exist.
Call fflush() before a raw write() invocation; because stderr is not buffered, no fflush() is needed for stderr. Conversely, a read() might return fewer bytes than expected, because stdio might already have buffered the preceding bytes.
Not mixing unistd and stdio I/O is good advice, but it is often ignored. Mixing buffered input is unreasonable; mixing unbuffered input is possible; mixing buffered output is plausible. stdio provides buffered-I/O convenience, but buffered I/O is possible without stdio at the cost of some additional code, and with a sufficiently sized buffer a write() call is not necessarily slower than the stdio output functions. When no pipe is involved, mmap can provide superior I/O; on a pipe, mmap does not return an error, but neither does the data appear in the address space (lseek on a pipe does return an error). Lastly, man 3 setvbuf provides a good example: if the buffer is allocated on the stack, the fclose() call before returning must not be omitted.
The actual question was "In C, what's the size of stdout buffer?", and 8192 answers that much. But those who encounter this question are likely also curious about buffered-I/O efficiency, which some inquiries approach implicitly; terse replies do not explain the significance of pipe size and buffer size, or mmap. This reply does.
There are some pretty interesting answers on a similar question. On a Linux system you can view buffer sizes from different utilities, including ulimit; the header files limits.h and pipe.h should also contain that kind of information. You could set the stream to unbuffered, or just flush it. There is also some decent info around on when the C runtime typically flushes for you, with examples.
Could anyone clarify the types of buffers used by a program?
For example:
I have a C program that reads from stdin and writes to stdout.
What are the buffers involved here? I'm aware that there are 2:
One provided by the kernel, over which a user doesn't have any control.
One provided with the standard streams, namely stdout, stdin and stderr, each having a separate buffer.
Is my understanding correct?
Thanks,
John
If you are working on Linux/Unix, the easiest way to understand this is that there are three streams:
1. stdin: file descriptor 0 (in Unix)
2. stdout: file descriptor 1
3. stderr: file descriptor 2
By default these streams correspond to the keyboard and the monitor. In Unix we can redirect these streams to read input from a file instead of the keyboard, or to send output to a file rather than the monitor, using the close() and dup() system calls. Yes, there are 3 buffers involved. In C, fflush() forces buffered data out of a stream (the standard only defines its behaviour for output streams).
If you want to know more about handling these streams in UNIX, let me know.
The kernel (or other underlying system) could have any number of layers of buffering, depending on what device is being read from and the details of the kernel implementation; in some systems there is no buffering at that level, with the data being read directly into the userspace buffer.
The stdio library allocates a buffer for stdin; the size is implementation-dependent, but you can control the size and even use your own buffer with setvbuf. It also allows you to control whether I/O is fully buffered (as much data is read into the buffer as is available), line buffered (data is only read until a newline is encountered), or unbuffered. The default is line buffering if the system can determine that the input is a terminal, else fully buffered.
The story is similar for stdout. stderr is by default unbuffered.