Difference between stream and direct I/O in C?

In C, I believe (correct me if I'm wrong) there are two different types of input/output functions, direct and stream, which result in binary and ASCII files respectively.
What is the difference between stream (ASCII) and direct (Binary) I/O in terms of retrieving (read/write) and printing data?

No, yes, sort of, maybe…
In C, … there are two different types of input/output functions, direct and stream, which result in binary and ASCII files respectively.
In Standard C, there are only file streams, FILE *. In POSIX C, there are what might be termed 'direct' file access functions, mainly using file descriptors instead of file streams. AFAIK, Windows also provides alternative I/O functions, mainly using handles instead of file streams. So "No" — Standard C has one type of I/O function; but POSIX (and Windows) provide alternatives.
In Standard C, you can create binary files and text files using:
FILE *bfp = fopen("binary-file.bin", "wb");
FILE *tfp = fopen("regular-file.txt", "w");
On Windows (and maybe other systems for Windows compatibility), you can be explicit about opening a text file:
FILE *tcp = fopen("regular-file.txt", "wt");
So the standard distinguishes between text and binary files, but file streams can be used to access either type of file. Further, on Unix systems, there is no difference between a text file and a binary file; they will be treated the same. On Windows, a text file will have its CRLF (carriage return, line feed) line endings mapped to newline on input, and newlines mapped to CRLF line endings on output. That translation does not occur with binary files.
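To make the translation concrete, here is a minimal sketch (the file names are arbitrary examples): on Windows the text-mode file ends up one byte longer per line because '\n' is written as CR LF, while on Unix both files come out identical.

#include <stdio.h>

int main(void)
{
    FILE *tfp = fopen("demo.txt", "w");   /* text mode */
    FILE *bfp = fopen("demo.bin", "wb");  /* binary mode */
    if (tfp == NULL || bfp == NULL)
        return 1;

    /* On Windows the text stream stores "hello\r\n" (7 bytes);
       the binary stream stores "hello\n" (6 bytes).
       On Unix both files contain the same 6 bytes. */
    fputs("hello\n", tfp);
    fputs("hello\n", bfp);

    fclose(tfp);
    fclose(bfp);
    return 0;
}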
Note that there is also a concept 'direct I/O' on Linux, activated using the O_DIRECT flag, which is probably not what you're thinking of. It is a refinement of file descriptor I/O.
What is the difference between stream (ASCII) and direct (Binary) I/O in terms of retrieving (read/write) and printing data?
There are multiple issues.
First, the dichotomy between text files and binary files is separate from the dichotomy between stream I/O and direct I/O.
With stream I/O, line endings are mapped from the native form (e.g. CRLF) to newline when processing text files; no such mapping occurs when processing binary files.
With text I/O, it is assumed that there will be no null bytes ('\0') in the data. Such bytes in the middle of a line mess up text-processing code that expects to read up to a null. With binary I/O, all 256 byte values are expected; code that breaks because of a null byte is broken.
Complicating this is the distinction between different code sets for encoding text files. If you have a single-byte code set, such as ISO 8859-15, then null bytes don't generally appear. If you have a multi-byte code set such as UTF-8, again, null bytes don't generally appear. However, if you have a wide character code set such as UTF-16 (whether big-endian or little-endian), then you will often get zero bytes in the body of the file — it is not intended to be read or written as a byte stream but rather as a stream of 16-bit units.
The major difference between stream I/O and direct I/O is that the stream library buffers data for both input and output, unless you override it with setvbuf(). That is, if you repeatedly read a single character in the user code (getchar() for example), the stream library first reads a chunk of data from the file and then doles out one character at a time from the chunk, only going back to the file for more data when the previous chunk has been delivered completely. By contrast, direct I/O reading a single byte at a time will make a system call for each byte. Granted, the kernel will buffer the I/O (it does that for the stream I/O too — so there are multiple layers of buffering here, which is part of what O_DIRECT I/O attempts to avoid whenever possible), but the overhead of a system call per byte is rather substantial.
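To illustrate the buffering difference, here is a small sketch, assuming the POSIX read()/write() calls for the 'direct' side. Both functions copy standard input to standard output one byte at a time, but the stream version issues roughly one system call per buffer-full while the descriptor version issues one per byte.

#include <stdio.h>
#include <unistd.h>

/* Stream version: getchar()/putchar() work from stdio's internal buffers,
   so the underlying read()/write() system calls happen only when a buffer
   is exhausted (input) or full (output). */
static void copy_with_streams(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(c);
}

/* Direct version: one read() and one write() system call per byte. */
static void copy_with_descriptors(void)
{
    char byte;
    while (read(STDIN_FILENO, &byte, 1) == 1)
        if (write(STDOUT_FILENO, &byte, 1) != 1)
            break;
}

int main(void)
{
    copy_with_streams();        /* or copy_with_descriptors(); */
    return 0;
}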
Generally, you have more fine-grained control over access with file descriptors; there are operations you can do with file descriptors that are simply not feasible with streams because the stream interface functions simply don't cover the possibility. For example, setting FD_CLOEXEC or O_CLOEXEC on a file descriptor means that the file descriptor will be closed automatically by the system when the program executes another one — the stream library simply doesn't cover the concept, let alone provide control over it. The cost of gaining the fine-grained control is that you have to write more code — or, at least, different code that does what is handled for you by the stream library functions.
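For example, a minimal sketch of setting the close-on-exec flag on the descriptor underlying a stream (POSIX fcntl() assumed; the file name is just an example) — there is no stdio call that does this:

#include <stdio.h>
#include <fcntl.h>

int main(void)
{
    FILE *fp = fopen("data.log", "a");   /* "data.log" is just an example name */
    if (fp == NULL)
        return 1;

    /* Mark the underlying descriptor close-on-exec so it is not inherited
       across exec*(); the stream library has no interface for this. */
    int fd = fileno(fp);
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1 || fcntl(fd, F_SETFD, flags | FD_CLOEXEC) == -1)
        perror("fcntl");

    fclose(fp);
    return 0;
}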

Streams are a portable way of reading and writing data. They provide a flexible and efficient means of I/O. A stream is a file or a physical device (like a monitor) which is manipulated through a pointer to the stream.
Stream I/O is BUFFERED: that is to say, a fixed chunk is read from or written to a file via some temporary storage area (the buffer). Data written to a buffer does not appear in the file (or device) until the buffer is flushed or written out (a '\n' does this for line-buffered streams).
In direct or low-level I/O:
This form of I/O is UNBUFFERED -- each read/write request results in accessing disk (or device) directly to fetch/put a specific number of bytes.
There are no formatting facilities -- we are dealing with bytes of information.
This means we are now using binary (and not text) files.
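A sketch of that low-level style, assuming the POSIX open()/read()/write() interface (the file names are arbitrary); note that the caller picks the chunk size, since no library buffering is done on its behalf:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* "in.dat" and "out.dat" are arbitrary example names. */
    int in = open("in.dat", O_RDONLY);
    int out = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in == -1 || out == -1)
        return 1;

    char buf[4096];            /* the program, not the library, picks this size */
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        if (write(out, buf, (size_t)n) != n)
            break;

    close(in);
    close(out);
    return 0;
}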

Related

What buffering type do streams in C use?

Recently I started to learn I/O in C and came across the idea of 'streams' and 'buffering'. To my understanding, a stream is a purposeful abstraction of an interface used for input and output on a device. Normally an opaque type known as FILE is used to create 'streams'; this type is typedef'd to a struct that contains data about the input/output channels of a specific device, enabling low-level functions in the OS to deal with input/output in an easier manner.
Furthermore, 'buffering', I learned, is a way for streams to function more efficiently: streams use a segment of memory known as a 'buffer', which is very fast to write to and extract data from, rather than going straight to the underlying file or device. When input is taken through a stream it is first stored in the buffer, and when the input process is done it is extracted from the buffer and written to the file or standard output medium at hand.
This is all well and good, but I have one question. Buffering can be changed, and the programmer is able to change the buffering type at any time; the types are:
No buffering
Line buffering
Full buffering
My question is, what is the default buffering that streams in C use?
You can assume that by "streams" I mean the standard streams (stdin, stdout, stderr) as well as streams created with the FILE type. What buffering do they all use?
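For reference, whatever the defaults turn out to be, the buffering mode of a stream can be chosen explicitly with setvbuf() before the first I/O operation on it; a minimal sketch (the file name is arbitrary):

#include <stdio.h>

int main(void)
{
    static char buf[8192];
    FILE *fp = fopen("example.txt", "w");   /* arbitrary example name */
    if (fp == NULL)
        return 1;

    /* Pick exactly one of the three modes, before any other I/O on fp: */
    setvbuf(fp, buf, _IOFBF, sizeof buf);     /* fully buffered */
    /* setvbuf(fp, NULL, _IOLBF, BUFSIZ); */  /* line buffered  */
    /* setvbuf(fp, NULL, _IONBF, 0);      */  /* unbuffered     */

    fputs("hello\n", fp);
    fclose(fp);
    return 0;
}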

Buffering and file I/O in C

Many years ago I noticed, when reading a large binary file with BlockRead() in Delphi 7, that the speed is much lower when the file is read byte by byte than when it is read in chunks of, say, 16384 bytes. It obviously meant that Delphi 7 didn't use an internal buffer (at least, by default) and each BlockRead() call read directly from the disc.
What about fread() in C? Should the developer manage buffering herself/himself, or will the C library take care of it? I know that text file I/O is buffered by default in C and, as far as I can remember, it is possible to change the size of the internal buffer.
UPDATE: I think that it is possible that Delphi 7 did use an internal buffer for an opened file but its default size was small.
According to the book C: In a Nutshell (2005) by T. Crawford and P. Prinz
When you open an ordinary file by calling fopen( ), the new stream is fully buffered. ... After you have opened a file, and before you perform the first input or output operation on it, you can change the buffering mode using the setbuf( ) or setvbuf( ) function.
It seems that this is about files in general, not only text files.
I will update this answer soon with the result of some tests.
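As a concrete illustration of the quoted passage, here is a sketch of enlarging the stdio buffer before reading a binary file with fread(); the file name and buffer sizes are arbitrary choices, not requirements:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("large-file.bin", "rb");   /* arbitrary example name */
    if (fp == NULL)
        return 1;

    /* Replace the default (BUFSIZ-sized) stdio buffer with a larger one;
       setvbuf() must be called before the first read. */
    static char iobuf[1 << 20];                 /* 1 MiB */
    setvbuf(fp, iobuf, _IOFBF, sizeof iobuf);

    unsigned char chunk[16384];
    size_t n, total = 0;
    while ((n = fread(chunk, 1, sizeof chunk, fp)) > 0)
        total += n;

    printf("read %zu bytes\n", total);
    fclose(fp);
    return 0;
}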

What is exactly a stream in C language?

I can't understand the meaning of "stream" in the C language. Is it an abstraction (just a name describing many operations)? Is it an object (monitor, keyboard, file on a hard drive) that a program exchanges data with? Or is it a memory space in RAM temporarily holding the exchanged data?
Thanks for the help.
A stream is an abstraction of an I/O channel. It can map to a physical device such as a terminal or tape drive or a printer, or it can map to a file in a file system, or a network socket, or something else completely. How that mapping is accomplished is not exposed to you, the programmer.
From the perspective of your code, a stream is simply a source (input stream) or sink (output stream) of characters (text stream) or bytes (binary stream). Streams are managed through FILE objects and the stdio routines.
As far as your code is concerned, all streams behave the same way, regardless of what they are mapped to. It's a uniform interface to operations that can have wildly different implementations.
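A small sketch of that uniformity: the same function works whether the stream is standard input, an fopen'ed disk file, or a pipe, because the code only ever sees the FILE * interface.

#include <stdio.h>

/* Count newline characters on any input stream. */
static long count_lines(FILE *fp)
{
    long lines = 0;
    int c;
    while ((c = getc(fp)) != EOF)
        if (c == '\n')
            lines++;
    return lines;
}

int main(void)
{
    /* Behaves the same whether stdin is a terminal, a redirected file,
       or the read end of a pipe. */
    printf("%ld lines\n", count_lines(stdin));
    return 0;
}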
A stream is just a sequence of data available over time. It is distinct from a file, for example, because you can't set the position. Examples: data coming/going through RS232, USB, Ethernet, IP networks, etc.
but my question is, what exactly is a stream at the machine level?
Nothing special. The machine level does not know anything about streams.
What is exactly a stream in C language?
The same: the C language itself does not know anything about streams.
In C when we use the term stream, we indicate any input source or output destination.
Some example may be:
stdin (standard input which is the keyboard by default)
stdout (standard output which by default is the screen)
stderr (standard error which is the screen by default)
Functions such as printf, scanf, gets, puts and getchar are functions that use the keyboard as the input stream and the screen as the output stream.
But we can create streams to files too!
The stdio.h library supports two types of files, text files and binary files. Within a text file, the bytes represent characters, which makes it possible for a human to read what the file contains. By contrast, in a binary file, bytes do not necessarily represent characters. In summary, text files have two things that binary files do not: Text files are divided into lines, and each line ends with one or two special characters. The code obviously depends on the operating system. In addition, text files can contain the file terminator (END OF FILE).
Streams are specific to the running program as well. Let me explain this further.
When you run a program through the terminal (Unix-like/Windows), what it essentially does is:
The terminal forks into a child process and runs your specified program (./name_of_program).
All printf output is given to the stdout of the parent process that forked. The same goes for scanf, but with the stdin of the parent process that forked.
The operating system handles the characteristics of the streams, i.e. how many bytes can be streamed to stdin/out at once. Generally in Unix it is 4096 bytes. (Hint: Use pipes to overcome this issue).
There are three buffering types for streams in C (or any programming language): fully buffered, line-buffered, and unbuffered. (Hint: insert a delay, e.g. a sleep() call, between each printf() call to see what this means.)
Now, read and write access to files is handled by another OS facility, the file descriptor. File descriptors are small non-negative integers used by the OS to keep track of open files and ports (like a serial port).

Is there a guaranteed and safe way to truncate a file from ANSI C FILE pointer?

I know ANSI C defines fopen, fwrite, fread, fclose to modify a file's content. However, when it comes to truncating a file, we have to turn to OS-specific functions, e.g. truncate() on Linux, _chsize_s() on Windows. But before we can call those OS-specific functions, we have to obtain the file handle from the FILE pointer by calling fileno, also a non-ANSI-C function.
My question is: Is it reliable to continue using FILE* after truncating the file? I mean, the ANSI C FILE layer has its own buffer and does not know the file has been truncated from beneath. In case the buffered bytes are beyond the truncation point, will the buffered content be flushed to the file when doing fclose()?
If there is no guarantee, what is the best practice for using file I/O functions together with a truncate operation when writing a Windows-Linux portable program?
Similar question: When querying the file size from a file handle returned by fileno, is it the accurate size when I later call fclose() -- without further fwrite()?
[EDIT 2012-12-11]
According to Joshua's suggestion, I conclude that the current best practice is: set the stream to unbuffered mode by calling setbuf(stream, NULL); then truncate() or _chsize_s() can work peacefully with the stream.
Anyhow, no official document seems to explicitly confirm this behavior, whether for the Microsoft CRT or GNU glibc.
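A hedged sketch of that pattern for a Windows-Linux portable program (the wrapper name is mine, and since neither CRT documents the stream/truncation interaction explicitly, treat it as an assumption): flush or unbuffer the stream, then call the platform truncation function on the descriptor from fileno()/_fileno().

#include <stdio.h>

#ifdef _WIN32
#include <io.h>        /* _chsize_s(), _fileno() */
#else
#include <unistd.h>    /* ftruncate() */
#endif

/* Truncate an open, writable stream to 'length' bytes; returns 0 on success. */
static int truncate_stream(FILE *fp, long long length)
{
    fflush(fp);        /* push any buffered output to the OS first */
#ifdef _WIN32
    return _chsize_s(_fileno(fp), length);
#else
    return ftruncate(fileno(fp), (off_t)length) == -1 ? -1 : 0;
#endif
}

Calling truncate_stream(fp, ftell(fp)) would truncate at the current stream position, similar in spirit to the POSIX example in the answer below.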
The POSIX way....
ftruncate() is what you're looking for, and it's been in POSIX base specifications since 2001, so it should be in every modern POSIX-compatible system by now.
Note that ftruncate() operates on a POSIX file descriptor (despite its potentially misleading name), not a STDIO stream FILE handle. Note also that mixing operations on the STDIO stream and on the underlying OS calls which operate on the file descriptor for the open stream can confuse the internal runtime state of the STDIO library.
So, to use ftruncate() safely with STDIO it may be necessary to first flush any STDIO buffers (with fflush()) if your program may have already written to the stream in question. This will avoid STDIO trying to flush the otherwise unwritten buffer to the file after the truncation has been done.
You can then use fileno() on the STDIO stream's FILE handle to find the underlying file descriptor for the open STDIO stream, and you would then use that file descriptor with ftruncate(). You might consider putting the call to fileno() right in the parameter list for the ftruncate() call so that you don't keep the file descriptor around and accidentally use it in yet other ways which might further confuse the internal state of STDIO. Perhaps like this (say to truncate a file to the current STDIO stream offset):
/*
 * NOTE: fflush() is not needed here if there have been no calls to fseek() since
 * the last fwrite(), assuming it extended the length of the stream --
 * ftello() will account for any unwritten buffers
 */
if (ftruncate(fileno(stdout), ftello(stdout)) == -1) {
    fprintf(stderr, "%s: ftruncate(stdout) failed: %s\n", argv[0], strerror(errno));
    exit(1);
}
/* fseek() is not necessary here since we truncated at the current offset */
Note also that the POSIX definition of ftruncate() says "The value of the seek pointer shall not be modified by a call to ftruncate()", so this means you may also need to use fseek() to set the STDIO layer (and thus indirectly the file descriptor) either to the new end of the file, or perhaps back to the beginning of the file, or somewhere still within the boundaries of the file, as desired. (Note that the fseek() should not be necessary if the truncation point is found using ftello().)
You should not have to make the STDIO stream unbuffered if you follow the procedure above, though of course doing so could be an alternative to using fflush() (but not fseek()).
Without POSIX....
If you need to stick to strict ISO Standard C, say C99, then you have no portable way to truncate a file to a given length other than zero (0) length. The latest draft of C11 that I have says this in Section 7.21.3 (paragraph 2):
Binary files are not truncated, except as defined in 7.21.5.3. Whether a write on a text stream causes the associated file to be truncated beyond that point is implementation-defined.
(and 7.21.5.3 describes the flags to fopen() which allow a file to be truncated to a length of zero)
The caveat about text files is there because on silly systems that have both text and binary files (as opposed to just plain POSIX-style content-agnostic files) it is often possible to write a value to the file which will be stored in the file at the position written and which will be treated as an EOF indicator when the file is next read.
Other types of systems may have different underlying file I/O interfaces that are not compatible with POSIX while still providing a compatible ISO C STDIO library. In theory, if such a system offers something similar to fileno() and ftruncate(), then a similar procedure could be used with them as well, provided that one took the same care to avoid confusing the internal runtime state of the STDIO library.
With regard to querying file size....
You also asked whether the file size found by querying the file descriptor returned by fileno() would be an accurate representation of the file size after a successful call to fclose(), even without any further calls to fwrite().
The answer is: Don't do that!
As I mentioned above, the POSIX file descriptor for a file opened as a STDIO stream must be used very carefully if you don't want to confuse the internal runtime state of the STDIO library. We can add here that it is important not to confuse yourself with it either.
The most correct way to find the current size of a file opened as a STDIO stream is to seek to the end of it and then ask where the stream pointer is by using only STDIO functions.
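A sketch of that approach using only stdio calls (ftell() returns a long, so very large files would need ftello() or the platform's 64-bit variant):

#include <stdio.h>

/* Return the size of an open binary stream, or -1L on error.
   The original position is restored before returning. */
static long stream_size(FILE *fp)
{
    long here = ftell(fp);
    if (here == -1L || fseek(fp, 0L, SEEK_END) != 0)
        return -1L;
    long size = ftell(fp);
    fseek(fp, here, SEEK_SET);   /* go back to where we were */
    return size;
}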
Isn't an unbuffered write of zero bytes supposed to truncate the file at that point?
See this question for how to set unbuffered: Unbuffered I/O in ANSI C

Is there any ordinary reason to use open() instead of fopen()?

I'm doing a small project in C after quite a long time away from it. It happens to include some file handling. I noticed in various documentation that there are functions which return FILE * handles and others which return (small integer) descriptors. Both sets of functions offer the same basic services I need, so it really does not matter which I use.
But I'm curious about the collective wisdom: is it better to use fopen() and friends, or open() and friends?
Edit Since someone mentioned buffered vs unbuffered and accessing devices, I should add that one part of this small project will be writing a userspace filesystem driver under FUSE. So the file level access could as easily be on a device (e.g. a CDROM or a SCSI drive) as on a "file" (i.e. an image).
It is better to use open() if you are sticking to Unix-like systems and you might like to:
Have more fine-grained control over unix permission bits on file creation.
Use the lower-level functions such as read/write/mmap as opposed to the C buffered stream I/O functions.
Use file descriptor (fd) based IO scheduling (poll, select, etc.) You can of course obtain an fd from a FILE * using fileno(), but care must be taken not to mix FILE * based stream functions with fd based functions.
Open any special device (not a regular file)
It is better to use fopen/fread/fwrite for maximum portability, as these are standard C functions; the functions I've mentioned above aren't.
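For instance, a sketch of the first point in the list above: open() lets you state the creation permission bits (and extra flags) directly, which fopen() cannot express; the file name is just an example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Create "secret.txt" readable and writable by the owner only;
       fopen("secret.txt", "w") would give you 0666 modified by the umask,
       with no portable way to say otherwise. */
    int fd = open("secret.txt", O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd == -1)
        return 1;

    if (write(fd, "token\n", 6) == -1)
        perror("write");

    close(fd);
    return 0;
}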
The objection that "fopen" is portable and "open" isn't is bogus.
fopen is part of libc, open is a POSIX system call.
Each is as portable as the place they come from.
I/O to fopen'ed files is (you must assume it may be, and for practical purposes, it is) buffered by libc; file descriptors that are open()'ed are not buffered by libc (they may well be, and usually are, buffered in the filesystem -- but not everything you open() is a file on a filesystem).
What's the point of fopen'ing, for example, a device node like /dev/sg0, say, or /dev/tty0... What are you going to do? You're going to do an ioctl on a FILE *? Good luck with that.
Maybe you want to open with some flags like O_DIRECT -- makes no sense with fopen().
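A sketch of the device-node point, assuming a Linux-style terminal and the TIOCGWINSZ ioctl: the request wants a raw descriptor, which open() hands you directly.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/tty", O_RDWR);    /* the controlling terminal */
    if (fd == -1)
        return 1;

    struct winsize ws;
    if (ioctl(fd, TIOCGWINSZ, &ws) == 0)  /* ask the driver for the window size */
        printf("%d rows x %d cols\n", ws.ws_row, ws.ws_col);

    close(fd);
    return 0;
}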
fopen works at a higher level than open ....
fopen returns you a pointer to FILE stream which is similar to the stream abstraction that you read in C++
open returns you a file descriptor for the file opened ... It does not provide you a stream abstraction and you are responsible for handling the bits and bytes yourself ... This is at a lower level as compared to fopen
Stdio streams are buffered, while open() file descriptors are not. Depends on what you need. You can also create one from the other:
int fileno(FILE *stream) returns the file descriptor for a FILE *; FILE *fdopen(int fildes, const char *mode) creates a FILE * from a file descriptor.
Be careful when intermixing buffered and non-buffered IO, since you'll lose what's in your buffer when you don't flush it with fflush().
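A short sketch of going in both directions, including the flush mentioned above (the file name is an arbitrary example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* fd -> FILE *: wrap a raw descriptor in a buffered stream. */
    int fd = open("notes.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1)
        return 1;
    FILE *fp = fdopen(fd, "a");
    if (fp == NULL)
        return 1;

    fprintf(fp, "buffered line\n");

    /* FILE * -> fd: flush first, or the write() below may reach the file
       before the buffered fprintf() output does. */
    fflush(fp);
    if (write(fileno(fp), "unbuffered line\n", 16) == -1)
        perror("write");

    fclose(fp);   /* also closes the underlying descriptor */
    return 0;
}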
Yes. When you need a low-level handle.
On UNIX operating systems, you can generally exchange file handles and sockets.
Also, low-level handles make for better ABI compatibility than FILE pointers.
read() & write() use unbuffered I/O. (fd: integer file descriptor)
fread() & fwrite() use buffered I/O. (FILE* structure pointer)
Binary data written to a pipe with write() may not be readable with fread(), because of byte alignments, variable sizes, etc. It's a crap-shoot.
Most low-level device driver code uses unbuffered I/O calls.
Most application level I/O uses buffered.
Use of the FILE* and its associated functions is OK on a machine-by-machine basis, but portability of binary data read and written that way is lost across architectures.
fwrite() is buffered I/O and can lead to unreliable results if code written for a 64-bit architecture is run on a 32-bit one, or moved between Windows and Linux. Most OSs have compatibility macros within their own code to prevent this.
For low-level binary I/O portability, read() and write() guarantee the same binary reads and writes when compiled on differing architectures.
The basic thing is to pick one way or the other and be consistent about it throughout the binary suite.
<stdio.h> // mostly FILE *, with some fd input/output parameters for compatibility
// gives you a lot of helper functions:
List of Functions
Function Description
───────────────────────────────────────────────────────────────────
clearerr check and reset stream status
fclose close a stream
fdopen stream open functions // (fd argument, returns FILE*)
feof check and reset stream status
ferror check and reset stream status
fflush flush a stream
fgetc get next character or word from input stream
fgetpos reposition a stream
fgets get a line from a stream
fileno get file descriptor // (FILE* argument, returns fd)
fopen stream open functions
fprintf formatted output conversion
fpurge flush a stream
fputc output a character or word to a stream
fputs output a line to a stream
fread binary stream input/output
freopen stream open functions
fscanf input format conversion
fseek reposition a stream
fsetpos reposition a stream
ftell reposition a stream
fwrite binary stream input/output
getc get next character or word from input stream
getchar get next character or word from input stream
gets get a line from a stream
getw get next character or word from input stream
mktemp make temporary filename (unique)
perror system error messages
printf formatted output conversion
putc output a character or word to a stream
putchar output a character or word to a stream
puts output a line to a stream
putw output a character or word to a stream
remove remove directory entry
rewind reposition a stream
scanf input format conversion
setbuf stream buffering operations
setbuffer stream buffering operations
setlinebuf stream buffering operations
setvbuf stream buffering operations
sprintf formatted output conversion
sscanf input format conversion
strerror system error messages
sys_errlist system error messages
sys_nerr system error messages
tempnam temporary file routines
tmpfile temporary file routines
tmpnam temporary file routines
ungetc un-get character from input stream
vfprintf formatted output conversion
vfscanf input format conversion
vprintf formatted output conversion
vscanf input format conversion
vsprintf formatted output conversion
vsscanf input format conversion
So for basic use I would personally use the above without mixing idioms too much.
By contrast,
<unistd.h>      write(), lseek(), close(), pipe()
<sys/types.h>
<sys/stat.h>
<fcntl.h>       open(), creat(), fcntl()
all use file descriptors.
These provide fine-grained control over reading and writing bytes (recommended for special devices and FIFOs (pipes)).
So again, use what you need, but keep consistent in your idioms and interfaces.
If most of your code base uses one mode, use that too, unless there is
a real reason not to. Both sets of I/O library functions are extremely reliable
and used millions of times a day.
Note: if you are interfacing C I/O with another language (Perl, Python, Java, C#, Lua ...), check out what the developers of those languages recommend before you write your C code and save yourself some trouble.
Usually, you should favor using the standard library (fopen). However, there are occasions where you will need to use open directly.
One example that comes to mind is to work around a bug in an older version of Solaris which made fopen fail after 256 files were open. This was because they erroneously used an unsigned char for the fd field in their struct FILE implementation instead of an int. But this was a very specific case.
fopen and its cousins are buffered. open, read, and write are not buffered. Your application may or may not care.
fprintf and scanf have a richer API that allows you to read and write formatted text files. read and write use fundamental arrays of bytes. Conversions and formatting must be hand crafted.
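A sketch of that contrast, assuming the POSIX write() call for the descriptor side: the stream call formats the number for you, while with a descriptor you format the bytes yourself before writing them.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int answer = 42;

    /* Stream: conversion and formatting are done by the library. */
    fprintf(stdout, "answer=%d\n", answer);

    /* Descriptor: you format the bytes yourself before writing them. */
    char buf[32];
    int len = snprintf(buf, sizeof buf, "answer=%d\n", answer);
    if (len > 0 && write(STDOUT_FILENO, buf, (size_t)len) == -1)
        perror("write");

    return 0;
}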
The difference between file descriptors and (FILE *) is really inconsequential.
Randy
