Long delay hiccups for logging stdout to file - c

I have a C program that writes 3 lines to stdout every 10 ms. If I redirect the output to a file (using >), there are long delays (60 ms) in the running of the program. The delays are periodic (say, every 5 seconds).
If I just let it write to the console, or redirect to /dev/null, there is no problem.
I suspected a stdout buffering problem, but calling fflush(stdout) didn't solve it.
How can I solve the issue?

If I redirect the output to a file (using > ) there will be long
delays (60ms) in the running of the program.
That's because when stdout is a terminal device, it is usually (although not required) line-buffered, that is, the output buffer is flushed when a newline character is written, whereas in the case of regular files, output is fully buffered, meaning the buffers are flushed either when they are full or you close the file (or you explicitly call fflush(), of course).
fflush(stdout) may not be enough for you because that only flushes the standard I/O library buffers, but the kernel also buffers and delays writes to disk. You can call fsync() on the file descriptor to flush the modified buffer cache pages to disk after calling fflush(), as in fsync(STDOUT_FILENO).
Be careful not to call fsync() without calling fflush() first.
UPDATE: You can also try sync(), which, unlike fsync(), does not block waiting for the underlying writes to return. Or, as suggested in another answer, fdatasync() may be a good choice because it avoids the overhead of updating file times.
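Putting the two together, a minimal sketch of the fix (assuming stdout has been redirected to a regular file):
#define _XOPEN_SOURCE 500   /* for fsync() */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("three\nshort\nlines\n");

    fflush(stdout);        /* drain the stdio buffer into the kernel... */
    fsync(STDOUT_FILENO);  /* ...then push the kernel's dirty pages to disk;
                              never call fsync() without fflush() first */
    return 0;
}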

You need to use fsync. The following:
fsync(fileno(stdout))
should help. Note that the Linux kernel will still buffer and limit I/O according to its internal scheduler limits. Running as root and setting a very low nice value might make a difference, if you're not getting the frequency you want.
If it's still too slow, try using fdatasync instead. Every fflush and fsync causes the filesystem to update inode metadata (file size, access time, etc.) as well as the actual data itself. If you know in blocks how much data you'll be writing, then you can try the following trick:
#define _XOPEN_SOURCE 500   /* for fsync() and fdatasync() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    FILE *fp = fopen("test.txt", "w");
    const char *line = "Test\n";
    size_t total = 100 * strlen(line);

    /* Pre-fill the file with zeros so the timed writes below never
       change the file's size. (A string literal is too small to source
       500 bytes from, so allocate a zeroed buffer instead.) */
    char *fill = calloc(1, total);
    fwrite(fill, 1, total, fp);
    fflush(fp);
    fsync(fileno(fp));          /* data and metadata are now on disk */
    rewind(fp);

    for (int i = 0; i < 100; i++) {
        fwrite(line, strlen(line), 1, fp);
        fflush(fp);             /* push the stdio buffer to the kernel */
        fdatasync(fileno(fp));  /* flush the data, but not the metadata */
    }

    free(fill);
    fclose(fp);
    return 0;
}
The first fwrite call writes 5 * 100 = 500 zero bytes to the file in one chunk, and fsyncs so that the data is written to disk and the inode information is updated. Now we can write up to 500 bytes to the file without thrashing filesystem metadata. rewind(3) returns the file position to the beginning of the file so we can write over the data without changing the file size recorded in the inode.
Timing that program gives the following:
$ time ./fdatasync
./fdatasync 0.00s user 0.01s system 1% cpu 0.913 total
So it ran fdatasync and sync'ed to disk 100 times in 0.913 seconds, which averages out to ~9ms per write & fdatasync call.

It could be just that every 5 seconds you are filling up your disk buffer, and there is a spike in latency while it is flushed to the actual disk. Check with iostat (e.g. iostat -x 1).

Related

When is the kernel buffer cache empty for Disk I/O?

When is the kernel buffer cache empty? This does not seem to be line buffering: if I write() a string without a newline character, it is immediately visible in the file.
In addition, do the input and output buffers of a socket also use the kernel buffer cache the way disk I/O does? And do the kernel-space input and output buffers used for read() and write() exist for each open file (fd)?
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* O_CREAT requires a third (mode) argument */
    int fd = open("text", O_RDWR | O_CREAT, 0644);

    write(fd, "message", strlen("message"));
    // I can check the string in the file without fsync(fd).
    sleep(30);
    close(fd);
    return 0;
}
When is page cache bypassed?
The page cache is bypassed when using direct I/O, provided that:
- the file is opened with the O_DIRECT flag,
- certain offset/address alignment constraints are met,
- no extending writes are performed.
See this link for more information.
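For illustration, a minimal Linux sketch of the O_DIRECT route; the 4096-byte alignment and the file name are assumptions, since the real constraint depends on the device's logical block size:
#define _GNU_SOURCE   /* O_DIRECT is Linux-specific */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    void *buf;

    /* The buffer address, file offset and transfer size must all be
       suitably aligned for O_DIRECT to work. */
    if (posix_memalign(&buf, 4096, 4096) != 0)
        return 1;
    memset(buf, 'x', 4096);

    int fd = open("text", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0)
        return 1;

    write(fd, buf, 4096);   /* this write bypasses the page cache */
    close(fd);
    free(buf);
    return 0;
}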
(I'm assuming Linux for the answers below)
When is the kernel buffer cache empty?
You would need more context for this to be answerable. Additionally, as you seem to be making files in a filesystem, I'll refer to the kernel cache being used as the page cache; see the "What is the major difference between the buffer cache and the page cache?" Quora question for the difference. For example, a write can be in the kernel page cache but not yet have made its way to disk (i.e. dirty), or it can be in BOTH the page cache AND on disk (i.e. it got written out but the kernel is choosing to hold on to it in RAM). Do you mean "made clean" or do you mean "entirely discarded from the page cache"? Or do you mean "when is the I/O visible to other programs working on the same file"?
This does not seem to be LINE Buffering
At the C library level there's a difference between I/O done on streams (which can be line buffered) and low-level I/O done on file descriptors. Your example is using file descriptors so there would never be line buffering. Further, C library buffering is orthogonal to kernel buffering.
do the input and output buffers of a socket also use the kernel buffer cache the way disk I/O does?
Sockets don't use the page cache as they aren't block or file backed. However, socket I/O IS buffered using sk_buff in the kernel.
Also, do the kernel-space input and output buffers used for read() and write() exist for each open file (fd)?
Sorry, I don't understand the question. The page cache is shared for files/block devices so multiple file descriptors to the same file will be serviced by the same entries in the page cache (assuming they are requesting identical offsets).
(ETOOMANYQUESTIONS! @andoryu, please can you limit it to one question per post? It's tough going for someone trying to answer otherwise. Thanks!)

Disable buffering for stdin and stdout using setvbuf()

When I was reading about the usage of setvbuf(), I came across the _IONBF (no buffering) mode. So I was curious how stdin and stdout will be affected if I try to disable buffering. Below is some example code:
The Code:
#include <stdio.h>

int main(void)
{
    int num;
    char a;

    setvbuf(stdin, NULL, _IONBF, 0);   /* turn off buffering */
    scanf("%d", &num);
    a = getchar();
    printf("%d %c\n", num, a);
    return 0;
}
The Questions:
1.) From the above code, the sample inputs I've given to the program (123a, etc.) yield the same output even if I don't include setvbuf().
2.) I understand that a buffer is intermediate storage into which a chunk of data can be placed, and that all this data is sent to the input or output stream either when the buffer is full or when a newline is given.
3.) So what is the effect of disabling the buffer? Is it in terms of performance?
It is partly performance and partly control over how stream library functions (fread, fgets, fprintf, etc.) relate to actual I/O to a device/file.
For example, stream output to a character device (e.g. your terminal) is, by default, line buffered. The effect of this is that the following code,
printf("start ");
sleep(10);
printf("stop\n");
will wait 10 seconds and then print start stop[NL]. The first print was buffered because there was no new-line to flush the buffer. To get start to print and then sleep 10 seconds, you could either add an fflush call before the sleep call, or turn off buffering on stdout with setvbuf.
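For instance, a small sketch of the second fix, disabling buffering on stdout entirely:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Must be called before any output is done on the stream. */
    setvbuf(stdout, NULL, _IONBF, 0);

    printf("start ");   /* now appears immediately, without a newline */
    sleep(10);
    printf("stop\n");
    return 0;
}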
Stream output to a block device or disk file is, by default, fully buffered. This means that the buffer won't flush until either you overflow the buffer or do an fflush. This could be a problem with files, for example, if you want to monitor the output in real time with tail -f. If you know that this monitoring may be done, you could switch the stream to line buffering so that every time a new-line is printed, the buffer is flushed to the file. This comes at the cost of increased overhead, as disk blocks are written several times as new-lines are printed. (Note: this overhead depends on how the file system is mounted. A fixed drive, mounted with a write-back cache, will have less overhead, as the OS buffers writes to the disk, vs. a removable drive mounted write-through. In the latter case, the OS will try to do the partial writes to improve the chances of avoiding data loss if the drive is removed without dismounting.)
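A sketch of that idea, switching a file stream to line buffering so that tail -f sees each completed line (the file name monitor.log is made up):
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("monitor.log", "w");
    if (fp == NULL)
        return 1;

    /* Regular files default to full buffering; request line buffering
       so each completed line is flushed to the file immediately. */
    setvbuf(fp, NULL, _IOLBF, 0);

    fprintf(fp, "visible to tail -f as soon as the newline is written\n");
    fclose(fp);
    return 0;
}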

Understanding the need for fflush() and problems associated with it

Below is sample code for using fflush():
#include <string.h>
#include <stdio.h>
#include <conio.h>
#include <io.h>

void flush(FILE *stream);

int main(void)
{
    FILE *stream;
    char msg[] = "This is a test";

    /* create a file */
    stream = fopen("DUMMY.FIL", "w");
    /* write some data to the file */
    fwrite(msg, strlen(msg), 1, stream);

    clrscr();
    printf("Press any key to flush DUMMY.FIL:");
    getch();

    /* flush the data to DUMMY.FIL without closing it */
    flush(stream);

    printf("\nFile was flushed, Press any key to quit:");
    getch();
    return 0;
}

void flush(FILE *stream)
{
    int duphandle;

    /* flush the stream's internal buffer */
    fflush(stream);
    /* make a duplicate file handle */
    duphandle = dup(fileno(stream));
    /* close the duplicate handle to flush the DOS buffer */
    close(duphandle);
}
All I know about fflush() is that it is a library function used to flush an output buffer. I want to know the basic purpose of using fflush() and where I can use it. Mainly, I am interested in knowing what problems there can be with using fflush().
It's a little hard to say what "can be problems with" (excessive?) use of fflush. All kinds of things can be, or become, problems, depending on your goals and approaches. Probably a better way to look at this is what the intent of fflush is.
The first thing to consider is that fflush is defined only on output streams. An output stream collects "things to write to a file" into a large(ish) buffer, and then writes that buffer to the file. The point of this collecting-up-and-writing-later is to improve speed/efficiency, in two ways:
On modern OSes, there's some penalty for crossing the user/kernel protection boundary (the system has to change some protection information in the CPU, etc). If you make a large number of OS-level write calls, you pay that penalty for each one. If you collect up, say, 8192 or so bytes' worth of individual writes into one large buffer and then make one call, you remove most of that overhead.
On many modern OSes, each OS write call will try to optimize file performance in some way, e.g., by discovering that you've extended a short file to a longer one, and it would be good to move the disk block from point A on the disk to point B on the disk, so that the longer data can fit contiguously. (On older OSes, this is a separate "defragmentation" step you might run manually. You can think of this as the modern OS doing dynamic, instantaneous defragmentation.) If you were to write, say, 500 bytes, and then another 200, and then 700, and so on, it will do a lot of this work; but if you make one big call with, say, 8192 bytes, the OS can allocate a large block once, and put everything there and not have to re-defragment later.
So, the folks who provide your C library and its stdio stream implementation do whatever is appropriate on your OS to find a "reasonably optimal" block size, and to collect up all output into chunk of that size. (The numbers 4096, 8192, 16384, and 65536 often, today, tend to be good ones, but it really depends on the OS, and sometimes the underlying file system as well. Note that "bigger" is not always "better": streaming data in chunks of four gigabytes at a time will probably perform worse than doing it in chunks of 64 Kbytes, for instance.)
But this creates a problem. Suppose you're writing to a file, such as a log file with date-and-time stamps and messages, and your code is going to keep writing to that file later, but right now, it wants to suspend for a while and let a log-analyzer read the current contents of the log file. One option is to use fclose to close the log file, then fopen to open it again in order to append more data later. It's more efficient, though, to push any pending log messages to the underlying OS file, but keep the file open. That's what fflush does.
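In code, that scenario reduces to a sketch like this (the helper name log_handoff is hypothetical):
#include <stdio.h>

/* Push pending log lines through to the OS file so an external log
   analyzer can read them, while keeping the stream open so we can
   append more messages later. */
void log_handoff(FILE *logfp)
{
    fflush(logfp);
}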
Buffering also creates another problem. Suppose your code has some bug, and it sometimes crashes but you're not sure if it's about to crash. And suppose you've written something and it's very important that this data get out to the underlying file system. You can call fflush to push the data through to the OS, before calling your potentially-bad code that might crash. (Sometimes this is good for debugging.)
Or, suppose you're on a Unix-like system, and have a fork system call. This call duplicates the entire user-space (makes a clone of the original process). The stdio buffers are in user space, so the clone has the same buffered-up-but-not-yet-written data that the original process had, at the time of the fork call. Here again, one way to solve the problem is to use fflush to push buffered data out just before doing the fork. If everything is out before the fork, there's nothing to duplicate; the fresh clone won't ever attempt to write the buffered-up data, as it no longer exists.
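A minimal sketch of the fork case; run it with stdout redirected to a file, and removing the fflush makes the buffered line appear twice:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    printf("buffered line\n");  /* fully buffered when stdout is a file */
    fflush(stdout);             /* without this, parent and child both
                                   inherit the pending bytes and each
                                   flushes them at exit */
    if (fork() == 0)
        exit(0);                /* child flushes its stdio buffers here */

    wait(NULL);
    return 0;
}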
The more fflush-es you add, the more you're defeating the original idea of collecting up large chunks of data. That is, you are making a tradeoff: large chunks are more efficient, but are causing some other problem, so you make the decision: "be less efficient here, to solve a problem more important than mere efficiency". You call fflush.
Sometimes the problem is simply "debug the software". In that case, instead of repeatedly calling fflush, you can use functions like setbuf and setvbuf to alter the buffering behavior of a stdio stream. This is more convenient (fewer, or even no, code changes required—you can control the set-buffering call with a flag) than adding a lot of fflush calls, so that could be considered a "problem with use (or excessive-use) of fflush".
Well, @torek's answer is almost perfect, but there's one point which is not quite accurate.
The first thing to consider is that fflush is defined only on output
streams.
According to man fflush, fflush can also be used in input streams:
For output streams, fflush() forces a write of all user-space buffered data for the given output or update stream via the stream's underlying write function. For input streams, fflush() discards any buffered data that has been fetched from the underlying file, but has not been consumed by the application. The open status of the stream is unaffected.
So, when used on an input stream, fflush() just discards the buffered data.
Here is a demo to illustrate it:
#include <stdio.h>
#include <stdlib.h>

#define MAXLINE 1024

int main(void)
{
    char buf[MAXLINE];

    printf("prompt: ");
    fflush(stdout);   /* make sure the prompt is visible before reading */
    while (fgets(buf, MAXLINE, stdin) != NULL) {
        fflush(stdin);   /* discard input that is buffered but unread */
        if (fputs(buf, stdout) == EOF)
            printf("output err");
    }
    exit(0);
}
fflush() empties the buffers associated with the stream. If you e.g. let a user input some data in a very short timespan (milliseconds) while also writing to a file, the read and write buffers may have some leftover data in them. You call fflush() then to empty all the buffers and force pending output out, so you can be sure that the next input you get is what the user actually pressed.
reference: http://www.cplusplus.com/reference/cstdio/fflush/

speed comparison between fgetc/fputc and fread/fwrite in C

So (just for fun), I was trying to write C code to copy a file. I read around, and it seems that all the functions that read from a stream call fgetc() (I hope this is true?), so I used that function:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define FILEr "img1.png"
#define FILEw "img2.png"

int main(void)
{
    clock_t start, diff;
    int msec, c;
    FILE *fr, *fw;

    fr = fopen(FILEr, "rb");
    fw = fopen(FILEw, "wb");

    start = clock();
    while ((c = fgetc(fr)) != EOF)   /* testing EOF here avoids the
                                        classic feof() off-by-one that
                                        writes one spurious byte */
        fputc(c, fw);
    diff = clock() - start;

    msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("Time taken %d seconds %d milliseconds\n", msec/1000, msec%1000);

    fclose(fr);
    fclose(fw);
    return 0;
}
This gave a run time of 140 ms for this file on a 2.10 GHz Core 2 Duo T6500 Dell Inspiron laptop.
However, when I try using fread/fwrite, I get decreasing run times as I keep increasing the number of bytes (i.e., the variable st in the following code) transferred per call, until it peaks at around 10 ms! Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define FILEr "img1.png"
#define FILEw "img2.png"

int main(void)
{
    clock_t start, diff;
    size_t st = 10000;   /* number of bytes copied at each step */
    size_t n;
    int msec;
    FILE *fr, *fw;
    char *x = malloc(st);   /* holds the chunk that is read */

    fr = fopen(FILEr, "rb");
    fw = fopen(FILEw, "wb");

    start = clock();
    while ((n = fread(x, 1, st, fr)) > 0)   /* write only the bytes
                                               actually read, so the last
                                               partial chunk does not pad
                                               the output file */
        fwrite(x, 1, n, fw);
    diff = clock() - start;

    msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("Time taken %d seconds %d milliseconds\n", msec/1000, msec%1000);

    fclose(fr);
    fclose(fw);
    free(x);
    return 0;
}
Why is this happening? That is, if fread is effectively multiple calls to fgetc, why the speed difference?
EDIT: specified that "increasing number of bytes" refers to the variable st in the second code
fread() is not calling fgetc() to read each byte.
It behaves as if calling fgetc() repeatedly, but it has direct access to the buffer that fgetc() reads from so it can directly copy a larger quantity of data.
You are forgetting about file buffering (inode, dentry and page caches).
Clear them before you run:
echo 3 > /proc/sys/vm/drop_caches
Backgrounder:
Benchmarking is an art. Refer to bonnie++, iozone and phoronix for proper filesystem benchmarking. As a characteristic, bonnie++ won't allow a benchmark with a written volume of less than 2x the available system memory.
Why?
(answer: buffering effects!)
As sehe says, it's partly because of buffering, but there is more to it, and I'll explain why that is and, at the same time, why fgetc() gives more latency.
fgetc() is called once for every byte read from the file.
fread() is called once for every n bytes of the local buffer for file data.
So, for a 10 MiB file:
fgetc() is called 10,485,760 times,
while fread() with a 1 KiB buffer is called 10,240 times.
Let's say, for simplicity, that every function call takes 1 ms:
fgetc would take 10,485,760 ms = 10,485.76 seconds ~ 2.9127 hours
fread would take 10,240 ms = 10.24 seconds
On top of that, the OS usually does the reading and writing on the same device; I suppose your example does it on the same hard disk. When reading your source file, the OS moves the hard disk heads over the spinning platters seeking the file and reads 1 byte into memory, then moves the read/write head again to the place where the OS and the hard disk controller agreed to locate the destination file, and writes 1 byte from memory. For the above example this happens over 10 million times for each file: over 20 million times in total, whereas with the buffered version it happens only around 20,000 times in total.
Besides that, when reading from the disk the OS puts a few more KiB of hard disk data in memory for performance purposes, and this can speed up the program even when using the less efficient fgetc, because the program reads from the OS's memory instead of reading directly from the hard disk. This is what sehe's response refers to.
Depending on your machine configuration/load/OS/etc., your results from reading and writing can vary a lot, hence his recommendation to empty the disk caches to get more meaningful results.
When the source and destination files are on different HDDs, things are a lot faster. With SSDs, I'm not really sure whether reads and writes are absolutely exclusive of each other.
Summary: Every call to a function has a certain overhead, reading from an HDD has other overheads, and caches/buffers help to make things faster.
Other info
http://en.wikipedia.org/wiki/Disk_read-and-write_head
http://en.wikipedia.org/wiki/Hard_disk#Components
stdio functions will fill a read buffer, of size "BUFSIZ" as defined in stdio.h, and will only make one read(2) system call every time that buffer is drained. They will not do an individual read(2) system call for every byte consumed -- they read large chunks. BUFSIZ is typically something like 1024 or 4096.
You can also adjust that buffer's size, if you wish, to increase it -- see the man pages for setbuf/setvbuf/setbuffer on most systems -- though that is unlikely to make a huge difference in performance.
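As a rough sketch of that adjustment (the 64 KiB size is an arbitrary choice; setvbuf must be called before the first read on the stream):
#include <stdio.h>

int main(void)
{
    FILE *fr = fopen("img1.png", "rb");
    if (fr == NULL)
        return 1;

    /* Replace the default BUFSIZ-sized stdio buffer with a 64 KiB one,
       so each refill issues one larger read(2). */
    static char big[64 * 1024];
    setvbuf(fr, big, _IOFBF, sizeof big);

    /* ... copy the file as before ... */
    fclose(fr);
    return 0;
}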
On the other hand, as you note, you can make a read(2) system call of arbitrary size by setting that size in the call, though you get diminishing returns with that at some point.
BTW, you might as well use open(2) and not fopen(3) if you are doing things this way. There is little point in fopen'ing a file you are only going to use for its file descriptor.

Probing for filesystem block size

I'm going to first admit that this is for a class project, since it will be pretty obvious. We are supposed to do reads to probe for the block size of the filesystem. My problem is that the time taken to do this appears to increase linearly, with none of the steps I would expect.
I am timing the read like this:
double startTime = getticks();
read = fread(x, 1, toRead, fp);
double endTime = getticks();
where getticks uses the rdtsc instruction. I am afraid caching/prefetching is causing the reads to take no time during the fread. I tried creating a random file between each execution of my program, but that is not alleviating my problem.
What is the best way to accurately measure the time taken for a read from disk? I am pretty sure my block size is 4096, but how can I get data to support that?
The usual way of determining filesystem block size is to ask the filesystem what its blocksize is.
#include <sys/statvfs.h>
#include <stdio.h>

int main(void)
{
    struct statvfs fs_stat;

    statvfs(".", &fs_stat);
    printf("%lu\n", fs_stat.f_bsize);
    return 0;
}
But if you really want, open(…,…|O_DIRECT) or posix_fadvise(…,…,…,POSIX_FADV_DONTNEED) will try to let you bypass the kernel's buffer cache (not guaranteed).
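A sketch of the posix_fadvise() variant; it is only a hint, and dirty pages must be written back before the kernel will drop them (the helper name drop_file_cache is made up):
#define _XOPEN_SOURCE 600   /* for posix_fadvise() and fdatasync() */
#include <fcntl.h>
#include <unistd.h>

/* Ask the kernel to drop the cached pages of an open file so the next
   read is more likely to actually hit the disk. */
int drop_file_cache(int fd)
{
    fdatasync(fd);  /* write back dirty pages so they can be discarded */
    /* offset 0 with length 0 means "the whole file" */
    return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}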
You may want to use the system calls (open(), read(), write(), ...) directly to reduce the impact of the buffering done by the FILE* layer.
Also, you may want to use synchronous I/O somehow.
One way is opening the file with the O_SYNC flag set (or O_DIRECT, as per ephemient's reply).
Quoting the Linux open(2) manual page:
O_SYNC  The file is opened for synchronous I/O. Any write(2)s on the resulting file descriptor will block the calling process until the data has been physically written to the underlying hardware. But see NOTES below.
Another option would be mounting the filesystem with -o sync (see mount(8)) or setting the S attribute on the file using the chattr(1) command.
