What's the difference between read and fread? - c

I have a question about the SO question What is the difference between read() and fread()?
How do read() and fread() work? And what do unbuffered read and buffered read mean?
If fread() is implemented by calling read(), why do some people say fread() is faster than read() in the SO question C — fopen() vs open()?

fread cannot be faster than read, provided (!) you use the same buffer size for read that fread uses internally.
However, every disk access comes with considerable overhead, so you improve performance by minimising the number of accesses.
If you read small chunks of data, every read is a separate system call and potentially a disk access, so it is slow. fread, in contrast, serves small requests from its internal buffer as long as there is still data in it; only when that buffer is used up does it read the next large chunk in one go, so subsequent calls can again be satisfied from the buffer.

What's the difference between read and fread?
fread is a standardized C programming language function that works with a FILE pointer.
read is a C function available on a POSIX compatible system that works with a file descriptor.
How do read() and fread() work?
fread calls some system-specific API to read from a file. To know how a particular fread works, you have to look at the implementation of fread for your specific system; the Windows fread is very different from the Linux one. Here is the implementation of fread in the glibc library: https://github.com/lattera/glibc/blob/master/libio/iofread.c#L30
read() invokes the read system call directly. https://man7.org/linux/man-pages/man2/syscalls.2.html
What do unbuffered read and buffered read mean?
fread reads data into an intermediate buffer, then copies from that buffer to the memory you passed as an argument.
read makes the kernel copy straight into the buffer you passed as an argument; there is no intermediate buffer.
Why do some say fread() is faster than read() in that topic?
The assumption is that system calls are costly: both the system call mechanism itself and the work the kernel does inside it take time.
Reading the data into an intermediate buffer in one big chunk (say 4 kilobytes) and then handing it out in small pieces within your program is faster than context switching into the kernel every time you want a small chunk, which would force the kernel to perform many tiny input/output operations.
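As an illustration (a sketch, not from the original answer): reading a file 16 bytes at a time with read() issues one system call per chunk, while fread() serves most of those calls from its internal buffer. The file name and chunk size are arbitrary.

    /* Sketch: reading a file 16 bytes at a time.
     * With read(), every loop iteration is a system call; with fread(),
     * most iterations are served from stdio's internal buffer. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    static void read_with_syscall(const char *path)
    {
        char chunk[16];
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return;
        while (read(fd, chunk, sizeof chunk) > 0)
            ;                           /* one system call per 16 bytes */
        close(fd);
    }

    static void read_with_stdio(const char *path)
    {
        char chunk[16];
        FILE *fp = fopen(path, "rb");
        if (fp == NULL)
            return;
        while (fread(chunk, 1, sizeof chunk, fp) > 0)
            ;                           /* usually just a copy from the stdio buffer */
        fclose(fp);
    }

    int main(void)
    {
        read_with_syscall("data.bin");  /* "data.bin" is a hypothetical input file */
        read_with_stdio("data.bin");
        return 0;
    }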

Related

Are fread and fwrite different in handling the internal buffer?

I keep reading that fread() and fwrite() are buffered library calls. In the case of fwrite(), I understood that once we write to the file, it isn't written to the hard disk immediately; it fills an internal buffer, and once the buffer is full, the write() system call is made to actually write the data to the file.
But I am not able to understand how this buffering works in the case of fread(). Does buffering in the case of fread() mean that, once we call fread(), it reads more data than we originally asked for and stores the extra data in a buffer (so that when the second fread() occurs, it can be served directly from the buffer instead of going to the hard disk)?
And I have the following questions as well.
If fread() works as I describe above, will the first fread() call read as much data as the internal buffer holds? If so, what happens if my fread() call asks for more bytes than the internal buffer size?
If fread() works as I describe above, that means at least one read() system call to the kernel will definitely happen in the case of fread(). But in the case of fwrite(), if we only call fwrite() once during the program's execution, we can't say for sure that a write() system call will be made. Is my understanding correct?
Will the internal buffer be maintained by OS?
Does fclose() flush the internal buffer?
There is buffering or caching at many different levels in a modern system. This might be typical:
C standard library
OS kernel
disk controller (esp. if using hardware RAID)
disk drive
When you use fread(), it may request 8 KB or so if you asked for less. This will be stored in user-space so there is no system call and context switch on the next sequential read.
The kernel may read ahead also; there are library functions to give it hints on how to do this for your particular application. The OS cache could be gigabytes in size since it uses main memory.
The disk controller may read ahead too, and could have a cache size up to hundreds of megabytes on smallish systems. It can't do as much in terms of read-ahead, because it doesn't know where the next logical block is for the current file (indeed it doesn't even know what file it is reading).
Finally, the disk drive itself has a cache, perhaps 16 MB or so. Like the controller, it doesn't know what file it is reading. For many years one disk block was 512 bytes, but it got a little larger (a few KB) recently with multi-terabyte disks.
When you call fclose(), it will probably deallocate the user-space buffer, but not the others.
Your understanding is correct. And any buffered fwrite data will be flushed when the FILE* is closed. The buffered I/O is mostly transparent for I/O on regular files.
But for terminals and other character devices you may care. Another case where buffered I/O may be an issue is when one process reads a file that another process is writing to: a common example is a program writing text to a log file during operation while the user runs a command like tail -f program.log to watch the content of the log file live. If the writing process has buffering enabled and doesn't explicitly flush the log file, it becomes difficult to monitor.
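For that situation, here is a minimal sketch of two common workarounds (line buffering via setvbuf, or an explicit fflush after each message); the file name and log messages are only illustrative:

    /* Sketch: making log lines visible promptly so `tail -f program.log`
     * can follow them. File name and messages are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        FILE *log = fopen("program.log", "a");
        if (log == NULL)
            return 1;

        /* Option 1: switch the stream to line buffering, so every '\n'
         * triggers a write to the file. */
        setvbuf(log, NULL, _IOLBF, BUFSIZ);
        fprintf(log, "service started\n");

        /* Option 2: keep the default buffering but flush explicitly after
         * each important message. */
        fprintf(log, "request handled\n");
        fflush(log);

        fclose(log);    /* fclose() flushes whatever is still buffered */
        return 0;
    }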

Since 'fread' is buffered, is it necessary to fread data into memory and then use it?

I am using fopen/fread/fwrite/fseek on Linux with gcc. Is it necessary to allocate a memory buffer and use fread to read data sequentially into the buffer before using the data?
When you use fread or the other file I/O functions in the C standard library, memory is buffered in several places.
Your application allocates a buffer which gets passed to fread. fread copies data into your buffer, and then you can do what you want with it. You are responsible for allocation/deallocation of this buffer.
The C library will usually create a buffer for every FILE* you have open. Data is read into these buffers in large chunks. This allows fread to satisfy many small requests without having to make a large number of system calls, which are expensive. This is what people mean when they say fread is buffered.
The kernel will also buffer files that are being read in the disk cache. This reduces the time needed for the read system call, since if data is already in memory, your program won't have to wait while the kernel fetches it from the disk. The kernel will hold on to recently read files, and it may read ahead for files which are being accessed sequentially.
The C library buffer is allocated automatically when you open a file and freed when you close the file. You don't have to manage it at all.
The kernel disk cache is stored in physical memory that isn't being used for anything else. Again, you don't have to manage this. The memory will be freed as soon as it's needed for something else.
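A minimal sketch of what that division of responsibility looks like in practice; the file name and the 4 KB chunk size are only illustrative:

    /* Sketch: the only buffer you manage yourself is the one you pass to
     * fread(); the stdio buffer and the kernel page cache are handled for
     * you. "input.dat" is a hypothetical file. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *fp = fopen("input.dat", "rb");
        if (fp == NULL)
            return 1;

        char *buf = malloc(4096);        /* your buffer: you allocate and free it */
        if (buf == NULL) {
            fclose(fp);
            return 1;
        }

        size_t n;
        while ((n = fread(buf, 1, 4096, fp)) > 0) {
            /* process the n bytes in buf here */
        }

        free(buf);
        fclose(fp);                      /* also releases the stdio buffer */
        return 0;
    }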
You must pass a buffer (one created by your code, either malloc'd or local) to fread so it can hand the read data back to you. I'm not sure what exactly you mean by saying "fread is buffered". Most C library calls operate in this fashion: they will not return their internal storage (buffer or otherwise) to you, and if they do, they provide corresponding free/release functions.
Refer to http://pubs.opengroup.org/onlinepubs/000095399/functions/fread.html, which also includes a very basic example.
With fread, yes, you have to allocate memory in your process and the system call will copy the data into your buffer.
In some specialised cases, you can handle data without copying it into userspace. See the sendfile system call, which copies data from one file descriptor to another directly. This can be used to transfer data from a file to a network socket without excessive copying.
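A rough sketch of how sendfile might be used on Linux, assuming out_fd is a socket or other writable descriptor (error handling is abbreviated):

    /* Sketch: copy a whole file to another descriptor with sendfile(2) on
     * Linux, without pulling the data through user space. */
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    ssize_t send_whole_file(int out_fd, const char *path)
    {
        int in_fd = open(path, O_RDONLY);
        if (in_fd < 0)
            return -1;

        struct stat st;
        if (fstat(in_fd, &st) < 0) {
            close(in_fd);
            return -1;
        }

        off_t offset = 0;
        ssize_t sent = sendfile(out_fd, in_fd, &offset, (size_t)st.st_size);

        close(in_fd);
        return sent;
    }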

How the OS performs buffering for a file

I know that when you call fwrite or fprintf, or indeed any other function that writes to a file, the contents aren't immediately flushed to the disk but are buffered in memory.
Firstly, where does the OS manage these buffers, and how? Secondly, if you write to a file and later read back the content you wrote, and assuming the OS didn't flush the contents between the write and the read, how does it know that it has to serve the read from the buffer? How does it handle this situation?
The reason I want to know this is that I'm interested in implementing my own buffering scheme in user space rather than relying on the kernel-space one the OS provides. That is, writes to a file would be buffered in user space and the actual write would only occur at a certain point. Consequently, I also need to handle the case where a read is issued for content that is still in the buffer. Is it possible to do all this in user space?
Firstly, where does the OS manage these buffers, and how?
The functions fwrite and fprintf use stdio buffers which already are completely in userspace. The buffers are (likely) static arrays or perhaps malloced memory.
How does it know that it has to serve the read from the buffer?
It doesn't, so the changes aren't seen. Nothing actually happens to a file until the underlying system call (write) is called (and even then - read on).
Is it possible to do all this in user-space
No, it's not possible. The good news is that the kernel already has buffers, so not every write you do is actually translated into a write to the file. It is postponed and executed later. If in the meantime somebody tries to read from the file, the kernel is smart enough to serve them from the buffer.
Bits from TLPI:
When working with disk files, the read() and write() system calls don't directly initiate disk access. Instead, they simply copy data between a user-space buffer and a buffer in the kernel buffer cache.
When performing I/O on a disk file, a successful return from write() doesn't guarantee that the data has been transferred to disk, because the kernel performs buffering of disk I/O in order to reduce disk activity and expedite write() calls.
At some later point, the kernel writes (flushes) its buffer to the disk.
If, in the interim, another process attempts to read these bytes of the file, then the kernel automatically supplies the data from the buffer cache, rather than from (the outdated contents of) the file.
So you may want to find out about sync and fsync.
Multiple levels of buffering are generally bad. The reason stdio buffers are useful is that they minimise the number of system calls performed. If system calls were cheaper, nobody would use stdio buffers anymore.
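As a small illustration of the points above (a sketch, with a throwaway file name): data passed to write() is immediately visible to a read() through another descriptor, because both go through the kernel's buffer cache, and fsync() is what actually forces it out to disk.

    /* Sketch: a write() lands in the kernel buffer cache, a read() through
     * a second descriptor already sees it, and fsync() forces the data out
     * to the disk itself. "demo.txt" is a throwaway file name. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello, buffer cache\n";
        char back[64];

        int wfd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (wfd < 0)
            return 1;
        write(wfd, msg, sizeof msg - 1);         /* now in the kernel's cache */

        int rfd = open("demo.txt", O_RDONLY);
        if (rfd >= 0) {
            read(rfd, back, sizeof back);        /* served from the cache */
            close(rfd);
        }

        fsync(wfd);                              /* force the data to disk */
        close(wfd);
        return 0;
    }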

character reading in C

I am struggling to know the difference between these functions. Which one of them can be used if I want to read one character at a time?
fread()
read()
getc()
Depending on how you want to do it, you can use any of those functions.
The easiest to use would probably be fgetc().
fread() : read a block of data from a stream (documentation)
read() : POSIX function that reads from a file descriptor rather than a stream (documentation)
getc() : get a character from a stream (documentation). Please consider using fgetc() (doc) instead, since getc() may be implemented as a macro that evaluates its argument more than once, which makes fgetc() the safer choice.
fread() is a standard C function for reading blocks of binary data from a file.
read() is a POSIX function for doing the same.
getc() is a standard C function (a macro, actually) for reading a single character from a file - i.e., it's what you are looking for.
In addition to the other answers, note that read is an unbuffered way to read from a file, while fread provides an internal buffer and reads through it. The buffer size is chosen by the implementation (and can be changed with setvbuf). Each time you call read, a system call occurs which reads the number of bytes you asked for. fread, on the other hand, reads a large chunk into its internal buffer and returns only the bytes you need: on each call it first checks whether it can serve you from the buffer, and only if it can't does it make a system call (read) to fetch another chunk, again returning only the portion you wanted.
Also, read works directly on the file descriptor number, whereas fread needs the file to be opened as a FILE pointer.
The answer depends on what you mean by "one character at a time".
If you want to ensure that only one character is consumed from the underlying file descriptor (which may refer to a non-seekable object like a pipe, socket, or terminal device) then the only solution is to use read with a length of 1. If you use strace (or similar) to monitor a shell script using the shell command read, you'll see that it repeatedly calls read with a length of 1. Otherwise it would risk reading too many bytes (past the newline it's looking for) and having subsequent processes fail to see the data on the "next line".
On the other hand, if the only program that should be performing further reads is your program itself, fread or getc will work just fine. Note that getc should be a lot faster than fread if you're just reading a single byte.
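To make the distinction concrete, here is a sketch of both approaches to reading a single character; the helper names are made up for illustration:

    /* Sketch: two ways to read one character at a time. fgetc() consumes
     * from the stdio buffer (which may have read ahead on the descriptor);
     * read() with a length of 1 consumes exactly one byte from the fd. */
    #include <stdio.h>
    #include <unistd.h>

    int next_char_buffered(FILE *fp)
    {
        return fgetc(fp);                 /* EOF on end of file or error */
    }

    int next_char_unbuffered(int fd)
    {
        unsigned char c;
        return (read(fd, &c, 1) == 1) ? c : EOF;
    }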

Why is the fwrite libc function faster than the write syscall?

I wrote the same program twice: it reads a randomly generated input file and echoes the string it read to an output. The only difference is that one version uses the read and write Linux syscalls, and the other uses fread/fwrite.
Timing my application with an input of 10 MB in size, echoing it to /dev/null, and making sure the file is not cached, I've found that libc's fwrite is faster by a LARGE margin when using very small buffers (1 byte in this case).
Here is my output from time, using fwrite:
real 0m0.948s
user 0m0.780s
sys 0m0.012s
And using the syscall write:
real 0m8.607s
user 0m0.972s
sys 0m7.624s
The only possibility that I can think of is that internally libc is already buffering my input... Unfortunately I couldn't find that much information around the web, so maybe the gurus here could help me out.
Timing my application with an input of 10 MB in size and echoing it to /dev/null, and making sure the file is not cached, I've found that libc's fwrite is faster by a LARGE margin when using very small buffers (1 byte in this case).
fwrite works on streams, which are buffered. Therefore many small writes will be faster, because a costly system call isn't made until the buffer fills up (or you flush it or close the stream). On the other hand, sending small buffers to write runs a costly system call for each one; that's where you're losing the speed. With 1-byte writes you're looking at 1024 write calls for each kilobyte, whereas with a 1024-byte stream buffer those 1024 fwrite calls collapse into a single write - see the difference?
For big buffers the difference will be small, because there will be less buffering, and therefore a more consistent number of system calls between fwrite and write.
In other words, fwrite(3) is just a library routine that collects output into chunks and then calls write(2). Now, write(2) is a system call that traps into the kernel. That's where the I/O actually happens. There is some overhead for simply calling into the kernel, and then there is the time it takes to actually write something. If you use large buffers, you will find that write(2) is faster, because it has to be called eventually anyway, and if each fwrite results in one or more write calls, then the fwrite buffering overhead is just that: more overhead.
If you want to read more about it, you can have a look at this document, which explains standard I/O streams.
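A rough sketch of the kind of test the question describes, echoing input one byte at a time in both styles (the helper names are illustrative):

    /* Sketch: echo stdin to stdout one byte at a time. The write() version
     * makes two system calls per byte; the fwrite() version batches them
     * through stdio's buffers. */
    #include <stdio.h>
    #include <unistd.h>

    void echo_with_syscalls(void)
    {
        char c;
        while (read(STDIN_FILENO, &c, 1) == 1)   /* one read + one write per byte */
            write(STDOUT_FILENO, &c, 1);
    }

    void echo_with_stdio(void)
    {
        char c;
        while (fread(&c, 1, 1, stdin) == 1)      /* mostly copies via stdio buffers */
            fwrite(&c, 1, 1, stdout);
    }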
write(2) is the fundamental kernel operation.
fwrite(3) is a library function that adds buffering on top of write(2).
For small (e.g., line-at-a-time) byte counts, fwrite(3) is faster, because of the overhead for just doing a kernel call.
For large (block I/O) byte counts, write(2) is faster, because it doesn't bother with buffering and you have to call the kernel in both cases.
If you look at the source to cp(1), you won't see any buffering.
Finally, there is one last consideration: ISO C vs POSIX. The buffered library functions like fwrite are specified in ISO C, whereas kernel calls like write are POSIX. While many systems claim POSIX compatibility, especially when trying to qualify for government contracts, in practice it is specific to Unix-like systems. So the buffered operations are more portable. As a result, a Linux cp will certainly use write, but a C program that has to work cross-platform may have to use fwrite.
You can also disable buffering with the setbuf() function. When buffering is disabled, fwrite() will be as slow as write(), if not slower.
More information on this subject can be found there: http://www.gnu.org/s/libc/manual/html_node/Controlling-Buffering.html
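For illustration, a sketch using setvbuf() (the more general sibling of setbuf()) to turn buffering off; the file name and byte count are arbitrary:

    /* Sketch: turning stdio buffering off. With _IONBF every fwrite() goes
     * more or less straight to a write() system call, so the speed advantage
     * over write() disappears. */
    #include <stdio.h>

    int main(void)
    {
        FILE *out = fopen("out.dat", "wb");
        if (out == NULL)
            return 1;

        setvbuf(out, NULL, _IONBF, 0);   /* no buffering on this stream */

        for (int i = 0; i < 1024; i++)
            fwrite("x", 1, 1, out);      /* roughly one system call per byte now */

        fclose(out);
        return 0;
    }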
