Initialize a `FILE *` variable in C?

I've got several oldish code bases that initialize variables (to be able to redirect input/output at will) like
FILE *usrin = stdin, *usrout = stdout;
However, gcc (8.1.1, glibc 2.27.9000 on Fedora rawhide) balks at this (initializer is not a compile time constant). Rummaging in /usr/include/stdio.h I see:
extern FILE *stdin;
/* C89/C99 say they're macros. Make them happy */
#define stdin stdin
First, it makes no sense to me that you can't initialize variables this (rather natural) way for such use. Sure, you can do it in later code, but it is a nuisance.
Second, why is the macro expansion not a constant?
Third, what is the rationale for having them be macros?

First, it makes no sense to me that you can't initialize variables this (rather natural) way for such use.
Second, why is the macro expansion not a constant?
stdin, stdout, and stderr are pointers which are initialized during C library startup, possibly as the result of a memory allocation. Their values aren't known at compile time -- depending on how your C library works, they might not even be constants. (For instance, if they're pointers to statically allocated structures, their values will be affected by ASLR.)
Third, what is the rationale for having them be macros?
It guarantees that #ifdef stdin will be true. This might have been added for compatibility with some very old programs which needed to handle systems which lacked support for stdio.
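For what it's worth, that guarantee can be observed directly; a trivial sketch (the check is of course redundant on any hosted implementation):

#include <stdio.h>

#ifdef stdin
/* Always taken on a conforming hosted implementation: the standard
   requires stdin, stdout and stderr to be macros in <stdio.h>. */
int have_stdio = 1;
#endif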

Classically, the values for stdin, stdout and stderr were variations on the theme of:
#define stdin (&__iob[0])
#define stdout (&__iob[1])
#define stderr (&__iob[2])
These are address constants and can be used in initializers for variables at file scope:
static FILE *def_out = stdout;
However, the C standard does not guarantee that the values are address constants that can be used like that. C11 §7.21 Input/output <stdio.h> says:
stderr, stdin, stdout
which are expressions of type ''pointer to FILE'' that point to the FILE objects associated, respectively, with the standard error, input, and output streams.
Sometime a decade or more ago, the GNU C Library changed their definitions so that you could no longer use stdin, stdout or stderr as initializers for variables at file scope, or static variables with function scope (though you can use them to initialize automatic variables in a function). So, old code that had worked for ages on many systems stopped working on Linux.
The macro expansion of stdin etc is either a simple identity expansion (#define stdin stdin) or equivalent (on macOS, #define stdout __stdoutp). These are variables, not address constants, so you can't copy the value of the variable in the file scope initializer. It is a nuisance, but the standard doesn't say they're address constants, so it is legitimate.
They're required to be macros because they always were macros, so it retains that much backwards compatibility with the dawn of the standard I/O library (circa 1978, long before there was a standard C library per se).
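The usual workaround, implied above, is to leave the file-scope pointers unset and assign the streams at run time; a minimal sketch (the variable names are just for illustration):

#include <stdio.h>

static FILE *usrin;   /* cannot be initialized to stdin at file scope with glibc */
static FILE *usrout;

int main(void)
{
    usrin = stdin;            /* fine: plain assignment at run time */
    usrout = stdout;

    FILE *diag = stderr;      /* also fine: initializer of an automatic variable */

    fprintf(usrout, "redirectable output\n");
    fputs("diagnostics\n", diag);
    return 0;
}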

Why is assert a macro and not a function?

My lecturer asked me that in class, and I was wondering: why is it a macro instead of a function?
The simple explanation would be that the standard requires assert to be a macro. If we look at the draft C99 standard (as far as I can tell the sections are the same in the draft C11 standard as well), section 7.2 Diagnostics paragraph 2 says:
The assert macro shall be implemented as a macro, not as an actual
function. If the macro definition is suppressed in order to access an
actual function, the behavior is undefined.
Why does it require this? The rationale given in the Rationale for International Standard—Programming Languages—C is:
It can be difficult or impossible to make assert a true function, so it is restricted to macro
form.
which is not very informative, but we can see why from the other requirements. Going back to section 7.2, paragraph 1 says:
[...] If NDEBUG is defined as a macro name at the point in the source file where <assert.h> is included, the assert macro is defined simply as
#define assert(ignore) ((void)0)
The assert macro is redefined according to the current state of NDEBUG
each time that <assert.h> is included.
This is important since it allows us an easy way to turn off assertions in release mode, where you may not want to pay the cost of potentially expensive checks.
and the second important requirement is that it is required to use the macros __FILE__, __LINE__ and __func__, which is covered in section 7.2.1.1 The assert macro which says:
[...] the assert macro writes information about the particular call that failed [...] the latter are respectively the values of the preprocessing macros __FILE__ and __LINE__ and of the identifier __func__) on the standard error stream in an implementation-defined format.165) It then calls the abort function.
where footnote 165 says:
The message written might be of the form:
Assertion failed: expression, function abc, file xyz, line nnn.
Having it as a macro allows __FILE__ etc. to be evaluated at the call site, and, as Joachim points out, being a macro allows it to insert the original expression (via the # stringizing operator) into the message it generates.
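To make that concrete, here is a minimal sketch of how an assert-like macro can be written (not glibc's actual definition; my_assert and the exact message format are my own):

#include <stdio.h>
#include <stdlib.h>

#ifdef NDEBUG
#define my_assert(expr) ((void)0)
#else
#define my_assert(expr) \
    ((expr) ? (void)0 \
            : (fprintf(stderr, "Assertion failed: %s, function %s, file %s, line %d.\n", \
                       #expr, __func__, __FILE__, __LINE__), \
               abort()))
#endif

int main(void)
{
    int x = 1;
    my_assert(x == 1);   /* passes silently */
    my_assert(x == 2);   /* prints the stringized expression and the caller's location, then aborts */
    return 0;
}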
The draft C++ standard requires that the contents of the cassert header are the same as the assert.h header from the Standard C library:
The contents are the same as the Standard C library header <assert.h>.
See also: ISO C 7.2.
Why (void)0?
Why use (void)0 as opposed to some other expression that does nothing? We can come up with a few reasons, first this is how the assert synopsis looks in section 7.2.1.1:
void assert(scalar expression);
and it says (emphasis mine):
The assert macro puts diagnostic tests into programs; it expands to a void expression.
the expression (void)0 is consistent with the need to end up with a void expression.
Assuming we did not have that requirement, other possible expressions could have undesirable effects, such as allowing uses of assert in release mode that would not be allowed in debug mode. For example, using plain 0 would allow us to use assert in an assignment, and when used correctly would likely generate an "expression result unused" warning. As for using a compound statement, as a comment suggests, we can see from C multi-line macro: do/while(0) vs scope block that they can have undesirable effects in some cases.
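A quick illustration of that point (a contrived sketch; no real library defines assert as plain 0):

/* Hypothetical release-mode definitions: */
#define assert_plain(ignore) 0           /* plain 0: still usable as a value */
#define assert_void(ignore)  ((void)0)   /* void expression, as the standard requires */

int main(void)
{
    int y = assert_plain(anything);   /* compiles here, but would not compile in debug
                                         mode, where assert expands to a void expression */
    /* int z = assert_void(anything);    rejected in both modes, which is consistent */
    (void)y;
    return 0;
}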
It allows capturing the file (through __FILE__) and line number (through __LINE__)
It allows the assert to be substituted for a valid expression which does nothing (i.e. ((void)0)) when building in release mode
This macro is disabled if, at the moment of including <assert.h>, a macro with the name NDEBUG has already been defined. This allows a coder to include as many assert calls as needed in the source code while debugging the program and then disable all of them for the production version by simply including a line like:
#define NDEBUG
at the beginning of their code, before the inclusion of <assert.h>.
Therefore, this macro is designed to capture programming errors, not user or run-time errors, since it is generally disabled after a program exits its debugging phase.
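In code, the toggle described above looks like this (the assertion itself is an arbitrary example):

#define NDEBUG            /* comment this line out to re-enable the assertion */
#include <assert.h>

int main(void)
{
    assert(1 + 1 == 3);   /* with NDEBUG defined this expands to ((void)0) and the
                             program runs to completion; without NDEBUG it fails and
                             calls abort() */
    return 0;
}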
Making it a function would add function-call overhead, and you could not disable all such asserts in release mode.
If you used a function, then __FILE__, __LINE__ and __func__ would report the location inside that assert function's own code, not the file, line and function of the caller.
Some assertions can be expensive to call. You've just written a high performance matrix inversion routine, and you add a sanity check
assert(is_identity(matrix * inverse))
to the end. Well, your matrices are pretty big, and if assert is a function, it would take a lot of time to do the computation before passing it into assert. Time you really don't want to spend if you're not doing debugging.
Or maybe the assertion is relatively cheap, but it's contained in a very short function that will get called in an inner loop. Or other similar circumstances.
By making assert a macro instead, you can eliminate the calculation entirely when assertions are turned off.
Why is assert a macro and not a function?
Because it should be compiled in DEBUG mode and should not be compiled in RELEASE mode.

How to get file size in ANSI C without fseek and ftell?

While looking for ways to find the size of a file given a FILE*, I came across this article advising against it. Instead, it seems to encourage using file descriptors and fstat.
However I was under the impression that fstat, open and file descriptors in general are not as portable (After a bit of searching, I've found something to this effect).
Is there a way to get the size of a file in ANSI C while keeping in line with the warnings in the article?
In standard C, the fseek/ftell dance is pretty much the only game in town. Anything else you'd do depends at least in some way on the specific environment your program runs in. Unfortunately said dance also has its problems as described in the articles you've linked.
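For reference, the fseek/ftell dance mentioned here looks roughly like this (a sketch only; as the rest of this thread discusses, its behaviour for binary streams is not fully guaranteed by the standard):

#include <stdio.h>

long file_size_via_fseek(FILE *fp)
{
    long size = -1L;
    if (fseek(fp, 0L, SEEK_END) == 0) {   /* SEEK_END need not be meaningfully supported */
        size = ftell(fp);                 /* returns -1L on failure */
        fseek(fp, 0L, SEEK_SET);          /* rewind so the caller can still read the file */
    }
    return size;
}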
I guess you could always read everything out of the file until EOF and keep track along the way - with fread() for example.
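That fallback could look something like this (a minimal sketch; it simply reads the whole stream once and counts the bytes):

#include <stdio.h>

unsigned long long count_bytes(FILE *fp)
{
    unsigned long long total = 0;
    char buf[4096];
    size_t n;

    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        total += n;        /* loop ends at end-of-file or on a read error */

    return total;
}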
The article claims fseek(stream, 0, SEEK_END) is undefined behaviour by citing an out-of-context footnote.
The footnote appears in text dealing with wide-oriented streams, i.e. streams on which the first operation performed is a wide-character operation.
This undefined behaviour stems from the combination of two paragraphs. First §7.19.2/5 says that:
— Binary wide-oriented streams have the file-positioning restrictions ascribed to both text and binary streams.
And the restrictions for file-positioning with text streams (§7.19.9.2/4) are:
For a text stream, either offset shall be zero, or offset shall be a value returned by an earlier successful call to the ftell function on a stream associated with the same file and whence shall be SEEK_SET.
This makes fseek(stream, 0, SEEK_END) undefined behaviour for wide-oriented streams. There is no such rule like §7.19.2/5 for byte-oriented streams.
Furthermore, when the standard says:
A binary stream need not meaningfully support fseek calls with a whence value of SEEK_END.
It doesn't mean it's undefined behaviour to do so. But if the stream supports it, it's ok.
Apparently this exists to allow binary files to have coarse size granularity, i.e. for the size to be a number of disk sectors rather than a number of bytes, and as such it allows an unspecified number of zeros to magically appear at the end of binary files. SEEK_END cannot be meaningfully supported in this case. Other examples include pipes or infinite files like /dev/zero. However, the C standard provides no way to distinguish between such cases, so you're stuck with system-dependent calls if you want to consider that.
Use fstat. It requires a file descriptor, which you can get from the FILE * with fileno; the size is then in your grasp along with other details. I.e.
fstat(fileno(filePointer), &buf);
where filePointer is the FILE * and buf is a
struct stat {
    dev_t     st_dev;      /* ID of device containing file */
    ino_t     st_ino;      /* inode number */
    mode_t    st_mode;     /* protection */
    nlink_t   st_nlink;    /* number of hard links */
    uid_t     st_uid;      /* user ID of owner */
    gid_t     st_gid;      /* group ID of owner */
    dev_t     st_rdev;     /* device ID (if special file) */
    off_t     st_size;     /* total size, in bytes */
    blksize_t st_blksize;  /* blocksize for file system I/O */
    blkcnt_t  st_blocks;   /* number of 512B blocks allocated */
    time_t    st_atime;    /* time of last access */
    time_t    st_mtime;    /* time of last modification */
    time_t    st_ctime;    /* time of last status change */
};
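Putting it together, a minimal POSIX-only sketch (fstat and fileno are POSIX, not ANSI C, which is exactly the portability caveat raised in the question):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    FILE *fp = fopen("file.txt", "rb");
    if (fp == NULL)
        return 1;

    struct stat buf;
    if (fstat(fileno(fp), &buf) == 0)
        printf("size: %lld bytes\n", (long long)buf.st_size);

    fclose(fp);
    return 0;
}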
The executive summary is that you must use fseek/ftell because there is no alternative (even the implementation specific ones) that is better.
The underlying issue is that the "size" of a file in bytes is not always the same as the length of the data in the file and that, in some circumstances, the length of the data is not available.
A POSIX example is what happens when you write data to a device; the operating system only knows the size of the device. Once the data has been written and the (FILE*) closed there is no record of the length of the data written. If the device is opened for read the fseek/ftell approach will either fail or give you the size of the whole device.
When the ANSI C committee was sitting at the end of the 1980s, a number of operating systems the members remembered simply did not store the length of the data in a file; rather, they stored the disk blocks of the file and assumed that something in the data terminated it. The 'text' stream represents this. Opening a 'binary' stream on those files shows not only the magic terminator byte, but also any bytes beyond it that were never written but happen to be in the same disk block.
Consequently the C-90 standard was written so that it is valid to use the fseek trick; the result is a conformant program, but the result may not be what you expect. The behavior of that program is not 'undefined' in the C-90 definition and it is not 'implementation-defined' (because on UN*X it varies with the file). Neither is it 'invalid'. Rather you get a number you can't completely rely on or, maybe, depending on the parameters to fseek, -1 and an errno.
In practice if the trick succeeds you get a number that includes at least all the data, and this is probably what you want, and if the trick fails it is almost certainly someone else's fault.
John Bowler
Different OSes provide different APIs for this. For example, on Windows we have:
GetFileSizeEx() (or GetFileAttributesEx())
On macOS we have:
[[[NSFileManager defaultManager] attributesOfItemAtPath:someFilePath error:nil] fileSize];
But the only method in plain C is via fseek/ftell (or reading the file with fread):
How can I get a file's size in C?
You can't always avoid writing platform-specific code, especially when you have to deal with things that are a function of the platform. File sizes are a function of the file system, so as a rule I'd use the native filesystem API to get that information over the fseek/ftell dance. I'd create my own generic wrapper around it, so as to not pollute application logic with platform-specific details and make the code easier to port.
The article has a little problem of logic.
It (correctly) identifies that a certain usage of C functions has behavior which is not defined by ISO C. But then, to avoid this undefined behavior, the article proposes a solution: replace that usage with platform-specific functions. Unfortunately, the use of platform-specific functions is also undefined according to ISO C. Therefore, the advice does not solve the problem of undefined behavior.
The quote in my copy of the 1999 standard confirms that the alleged behavior is indeed undefined:
A binary stream need not meaningfully support fseek calls with a whence value of SEEK_END. [ISO 9899:1999 7.19.9.2 paragraph 3]
But undefined behavior does not mean "bad behavior"; it is simply behavior for which the ISO C standard gives no definition. Not all undefined behaviors are the same.
Some undefined behaviors are areas in the language where meaningful extensions can be provided. The platform fills the gap by defining a behavior.
Providing a working fseek which can seek from SEEK_END is an example of an extension in place of undefined behavior. It is possible to confirm whether or not a given platform supports fseek from SEEK_END, and if this is provisioned, then it is fine to use it.
Providing a separate function like lseek is also an extension in place of undefined behavior (the undefined behavior of calling a function which is not in ISO C and not defined in the C program). It is fine to use that, if available.
Note that those platforms which have functions like the POSIX lseek will also likely have an ISO C fseek which works from SEEK_END. Also note that on platforms where fseek on a binary file cannot seek from SEEK_END, the likely reason is that this is impossible to do (no API can be provided to do it and that is why the C library function fseek is not able to support it).
So, if fseek does provide the desired behavior on the given platform, then nothing has to be done to the program; it is a waste of effort to change it to use that platform's special function. On the other hand, if fseek does not provide the behavior, then likely nothing does, anyway.
Note that even including a nonstandard header (one not defined by ISO C and not part of the program) is undefined behavior. (By omission of the definition of behavior.) For instance if the following appears in a C program:
#include <unistd.h>
the behavior is not defined after that. [See References below.] The behavior of the preprocessing directive #include is defined, of course. But this creates two possibilities: either the header <unistd.h> does not exist, in which case a diagnostic is required. Or the header does exist. But in that case, the contents are not known (as far as ISO C is concerned; no such header is documented for the Library). In this case, the include directive brings in an unknown chunk of code, incorporating it into the translation unit. It is impossible to define the behavior of an unknown chunk of code.
#include <platform-specific-header.h> is one of the escape hatches in the language for doing anything whatsoever on a given platform.
In point form:
Undefined behavior is not inherently "bad" and not inherently a security flaw (though of course it can be! E.g. buffer overruns linked to the undefined behaviors in the area of pointer arithmetic and dereferencing.)
Replacing one undefined behavior with another, only for the purpose of avoiding undefined behavior, is pointless.
Undefined behavior is just a special term used in ISO C to denote things that are outside of the scope of ISO C's definition. It does not mean "not defined by anyone in the world" and doesn't imply something is defective.
Relying on some undefined behaviors is necessary for making most real-world, useful programs, because many extensions are provided through undefined behavior, including platform-specific headers and functions.
Undefined behavior can be supplanted by definitions of behavior from outside of ISO C. For instance the POSIX.1 (IEEE 1003.1) series of standards defines the behavior of including <unistd.h>. An undefined ISO C program can be a well defined POSIX C program.
Some problems cannot be solved in C without relying on some kind of undefined behavior. An example of this is a program that wants to seek so many bytes backwards from the end of a file.
References:
Dan Pop in comp.std.c, Dec. 2002: http://groups.google.com/group/comp.std.c/msg/534ab15a7bc4e27e?dmode=source
Chris Torek, comp.std.c, on the subject of nonstandard functions being undefined behavior, Feb. 2002: http://groups.google.com/group/comp.lang.c/msg/2fddb081336543f1?dmode=source
Chris Engebretson, comp.lang.c, April 1997: http://groups.google.com/group/comp.lang.c/msg/3a3812dbcf31de24?dmode=source
Ben Pfaff, comp.lang.c, Dec 1998 [Jestful answer citing undefinedness of the inclusion of nonstandard headers]: http://groups.google.com/group/comp.lang.c/msg/73b26e6892a1ba4f?dmode=source
Lawrence Kirby, comp.lang.c, Sep 1998 [Explains effects of nonstandard headers]: http://groups.google.com/group/comp.lang.c/msg/c85a519fc63bd388?dmode=source
Christian Bau, comp.lang.c, Sep 1997 [Explains how the undefined behavior of #include <pascal.h> can bring in a pascal keyword for linkage.] http://groups.google.com/group/comp.lang.c/msg/e2762cfa9888d5c6?dmode=source

Where is stdin defined in the C standard library?

I found this line in stdio.h :
extern struct _IO_FILE *stdin;
Based on the 'extern' keyword, I assume this is just a declaration. I wonder: where is stdin defined and initialized?
It's defined in the source code of your C library. You typically only need the headers for compilation, but you can find the source code for many open-source standard libraries (like glibc).
In glibc, it's defined in libio/stdio.c like this:
_IO_FILE *stdin = (FILE *) &_IO_2_1_stdin_;
Which is in turn defined using a macro in libio/stdfiles.c like this:
DEF_STDFILE(_IO_2_1_stdin_, 0, 0, _IO_NO_WRITES);
The definition of the DEF_STDFILE macro varies depending on a few things, but it more or less sets up an appropriate FILE struct using the file descriptor 0 (which is standard input on Unix).
The definition may (and of course does) vary depending on your C library, and certainly by platform. If you want, you can continue the goose chase around the various parts of your standard library's I/O component.
The C standard explicitly states that stdin is a macro defined in stdio.h. It is not allowed to be defined anywhere else.
C11 7.21.1
"The header <stdio.h> defines several macros, ..." /--/
"The macros are..." /--/
stderr
stdin
stdout
which are expressions of type ‘‘pointer to FILE’’ that point to the FILE objects
associated, respectively, with the standard error, input, and output streams.
This macro can of course point at implementation details that are implemented elsewhere, such as in a "stdio.c" or wherever the compiler library chose to put it.
I believe it's defined in stdio.c, which is compiled into libc (on GNU-based systems).
The definition will be implementation dependent, as will where you find it. For me, on OSX 10.6, it's defined in stdio.h, as a FILE (a struct).
stdin is a pointer to _IO_FILE, a struct which is clearly defined somewhere, probably in stdio.h. If not, check the header files included by stdio.h for a definition of _IO_FILE.
In the standard library code, where else? On a Linux machine around here it's in libc.a:stdio.o, found using nm -o /usr/lib/libc.a | grep stdin | grep D. If you want to read some code, see the GNU C Library.

Is main() a pre-defined function in C? [duplicate]

Possible Duplicate:
main() in C, C++, Java, C#
I'm new to programming in general, and C in particular. Every example I've looked at has a "main" function - is this pre-defined in some way, such that the name takes on a special meaning to the compiler or runtime... or is it merely a common idiom among C programmers (like using "foo" and "bar" for arbitrary variable names).
No, you need to define main in your program. Since it's called from the run-time, however, the interface your main must provide is pre-defined (must return an int, must take either zero arguments or two, the first an int, and the second a char ** or, equivalently, char *[]). The C and C++ standards do specify that a function with external linkage named main acts as the entry point for a program.[1]
At least as the term is normally used, a predefined function would be one such as sin or printf that's in the standard library so you can use it without having to write it yourself.
[1] If you want to get technical, that's only true for a "hosted" implementation -- i.e., the kind most of us use most of the time that produces programs that run on an operating system. A "free-standing" implementation (one that produces programs that run directly on the "bare metal" with no operating system under it) is free to define the entry point(s) as it sees fit. A free-standing implementation can also leave out most of the normal run-time library, providing only a handful of headers (e.g., <stddef.h>) and virtually no standard library functions.
Yes, main is a predefined function in the general sense of the word "defined". In other words, the C language standard specifies that the function called at program startup shall be named main. It is not merely a convention used by programmers as we have with foo or bar.
The fine print: from the perspective of the technical meaning of the word "defined" in the context of C programming, no the main function is not "predefined" -- the compiler or C library do not supply a predefined function named main. You need to define your own implementation of the main function (and, obviously, you should name it main).
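For completeness, a hosted program's entry point therefore looks like one of the two standard forms, which you define yourself; for example:

int main(int argc, char *argv[])
{
    /* The other standard form is simply: int main(void) */
    (void)argc;   /* argc and argv are supplied by the run-time startup code */
    (void)argv;
    return 0;     /* the int return value is handed back to the host environment */
}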
There is typically a piece of code that normal C programs are linked to which does:
extern int main(int argc, char * argv[], char * envp[]);
FILE * stdin;
FILE * stdout;
FILE * stderr;
/* ** setup argv ** */
...
/* ** setup envp ** */
...
/* ** setup stdio ** */
stdin = fdopen(0, "r");
stdout = fdopen(1, "w");
stderr = fdopen(2, "w");
int rc;
rc = main(argc, argv, envp); // envp may not be present here with some systems
exit(rc);
Note that this code is C, not C++, so it expects main to be a C function.
Also note that my code does no error checking and leaves out a lot of other system-dependent stuff that probably happens. It also ignores some things that happen with C++, Objective-C, and various other languages that may be linked to it (notably constructor and destructor calling, and possibly having main be within a C++ try/catch block).
Anyway, this code knows that main is a function and takes arguments. If your main looks like:
int main(void) {
Then it still gets passed arguments, but they are ignored.
This code is specially linked so that it is called when the program starts up.
You are completely free to write your own version of this code on many architectures, but it relies on intimate knowledge of how the operating system starts a new program as well as the C (and C++ and possibly Objective-C) run time. It is likely to require some assembly programming and/or use of compiler extensions to properly build.
The C compiler driver (the command you usually call when you call the compiler) passes the object file containing all of this (often called crt0.o, for C Run Time ...) along with the rest of your program to the linker, unless told not to.
When building an operating system kernel or an embedded program you often do not want to use the standard crt*.o file. You also may not want to use it if you are building a normal application in another programming language, or have some other non-standard requirements.
No, or you couldn't define one.
It's not predefined, but its meaning as an entry point, if it is present, is defined.

Why does C's "fopen" take a "const char *" as its second argument?

It has always struck me as strange that the C function "fopen" takes a "const char *" as the second argument. I would think it would be easier to both read your code and implement the library's code if there were bit masks defined in stdio.h, like "IO_READ" and such, so you could do things like:
FILE* myFile = fopen("file.txt", IO_READ | IO_WRITE);
Is there a programmatic reason for the way it actually is, or is it just historic? (i.e. "That's just the way it is.")
I believe that one of the advantages of the character string instead of a simple bit-mask is that it allows for platform-specific extensions which are not bit-settings. Purely hypothetically:
FILE *fp = fopen("/dev/something-weird", "r+,bs=4096");
For this gizmo, the open() call needs to be told the block size, and different calls can use radically different sizes, etc. Granted, I/O has been organized pretty well now (such was not the case originally — devices were enormously diverse and the access mechanisms far from unified), so it seldom seems to be necessary. But the string-valued open mode argument allows for that extensibility far better.
On IBM's mainframe MVS o/s, the fopen() function does indeed take extra arguments along the general lines described here — as noted by Andrew Henle (thank you!). The manual page includes the example call (slightly reformatted):
FILE *fp = fopen("myfile2.dat", "rb+, lrecl=80, blksize=240, recfm=fb, type=record");
The underlying open() has to be augmented by the ioctl() (I/O control) call or fcntl() (file control) or functions hiding them to achieve similar effects.
One word: legacy. Unfortunately we have to live with it.
Just speculation: maybe at the time a "const char *" seemed a more flexible solution, because it is not limited in any way. A bit mask could only have 32 different values. Looks like YAGNI to me now.
More speculation: dudes were lazy, and writing "rb" requires less typing than MASK_THIS | MASK_THAT :)
Dennis Ritchie (in 1993) wrote an article about the history of C, and how it evolved gradually from B. Some of the design decisions were motivated by avoiding source changes to existing code written in B or embryonic versions of C.
In particular, Lesk wrote a 'portable I/O package' [Lesk 72] that was later reworked to become the C 'standard I/O' routines
The C preprocessor wasn't introduced until 1972/3, so Lesk's I/O package was written without it! (In very early not-yet-C, pointers fit in integers on the platforms being used, and it was totally normal to assign an implicit-int return value to a pointer.)
Many other changes occurred around 1972-3, but the most important was the introduction of the preprocessor, partly at the urging of Alan Snyder [Snyder 74]
Without #include and #define, an expression like IO_READ | IO_WRITE wasn't an option.
The options in 1972 for what fopen calls could look like in typical source without CPP are:
FILE *fp = fopen("file.txt", 1); // magic constant integer literals
FILE *fp = fopen("file.txt", 'r'); // character literals
FILE *fp = fopen("file.txt", "r"); // string literals
Magic integer literals are obviously horrible, so unfortunately the obviously most efficient option (which Unix later adopted for open(2)) was ruled out by lack of a preprocessor.
A character literal is obviously not extensible; presumably that was obvious to API designers even back then. But it would have been sufficient (and more efficient) for early implementations of fopen: they only supported single-character strings, checking for *mode being r, w, or a. (See Keith Thompson's answer.) Apparently r+ for read+write (without truncating) came later. (See fopen(3) for the modern version.)
C did have a character data type (added to B in 1971 as one of the first steps in producing embryonic C, so it was still new in 1972. Original B didn't have char, having been written for machines that pack multiple characters into a word, so char() was a function that indexed a string! See Ritchie's history article.)
Using a single-byte string is effectively passing a char by const-reference, with all the extra overhead of memory accesses, because library functions can't inline. (And primitive compilers probably weren't inlining anything, even trivial functions (unlike fopen) in the same compilation unit where inlining would shrink total code size; modern-style tiny helper functions rely on modern compilers to inline them.)
PS: Steve Jessop's answer with the same quote inspired me to write this.
Possibly related: strcpy() return value. strcpy was probably written pretty early, too.
I must say that I am grateful for it. I know to type "r" instead of IO_OPEN_FLAG_R (or was it IOFLAG_R, or SYSFLAGS_OPEN_RMODE, or whatever).
I'd speculate that it's one or more of the following (unfortunately, I was unable to quickly find any kind of supporting references, so this'll probably remain speculation):
Kernighan or Ritchie (or whoever came up with the interface for fopen()) just happened to like the idea of specifying the mode using a string instead of a bitmap
They may have wanted the interface to be similar to yet noticeably different from the Unix open() system call interface, so it would be at once familiar yet not mistakenly compile with constants defined for Unix instead of by the C library
For example, let's say that the mythical C standard fopen() that took a bitmapped mode parameter used the identifier OPENMODE_READONLY to specify what today is specified by the mode string "r". Now, suppose someone made the following call in a program compiled on a Unix platform (and that the header that defines O_RDONLY has been included):
fopen( "myfile", O_RDONLY);
There would be no compiler error, but unless OPENMODE_READONLY and O_RDONLY were defined to be the same bit you'd get unexpected behavior. Of course it would make sense for the C standard names to be defined the same as the Unix names, but maybe they wanted to preclude requiring this kind of coupling.
Then again, this might not have crossed their minds at all...
The earliest reference to fopen that I've found is in the first edition of Kernighan & Ritchie's "The C Programming Language" (K&R1), published in 1978.
It shows a sample implementation of fopen, which is presumably a simplified version of the code in the C standard library implementation of the time. Here's an abbreviated version of the code from the book:
FILE *fopen(name, mode)
register char *name, *mode;
{
    /* ... */
    if (*mode != 'r' && *mode != 'w' && *mode != 'a') {
        fprintf(stderr, "illegal mode %s opening %s\n",
                mode, name);
        exit(1);
    }
    /* ... */
}
Looking at the code, the mode was expected to be a 1-character string (no "rb", no distinction between text and binary). If you passed a longer string, any characters past the first were silently ignored. If you passed an invalid mode, the function would print an error message and terminate your program rather than returning a null pointer (I'm guessing the actual library version didn't do that). The book emphasized simple code over error checking.
It's hard to be certain, especially given that the book doesn't spend a lot of time explaining the mode parameter, but it looks like it was defined as a string just for convenience. A single character would have worked as well, but a string at least makes future expansion possible (something that the book doesn't mention).
Dennis Ritchie has this to say, from http://cm.bell-labs.com/cm/cs/who/dmr/chist.html
In particular, Lesk wrote a 'portable I/O package' [Lesk 72] that was later reworked to become the C 'standard I/O' routines
So I say ask Mike Lesk, post the result here as an answer to your own question, and earn stacks of points for it. Although you might want to make the question sound a bit less like criticism ;-)
The reason is simple: to allow the modes to be extended by the C implementation as it sees fit. An argument of type int would not do that. The C99 Rationale V5.10, §7.19.5.3 The fopen function, says e.g. that
Other specifications for files, such as record length and block size, are not specified in the Standard due to their widely varying characteristics in different operating environments.
Changes to file access modes and buffer sizes may be specified using the setvbuf function
(see §7.19.5.6).
An implementation may choose to allow additional file specifications as part of the mode string argument. For instance,
file1 = fopen(file1name, "wb,reclen=80");
might be a reasonable extension on a system that provides record-oriented binary files and allows
a programmer to specify record length.
Similar text exists in the C89 Rationale 4.9.5.3
Naturally, if OR-ed (|) enum flags were used, then these kinds of extensions would not be possible.
One example of an fopen implementation using these parameters is on z/OS. An example there has the following excerpt:
/* The following call opens:
     the file myfile2.dat,
     a binary file for reading and writing,
     whose record length is 80 bytes,
     and maximum length of a physical block is 240 bytes,
     fixed-length, blocked record format
     for sequential record I/O.
*/
if ( (stream = fopen("myfile2.dat", "rb+, lrecl=80,\
blksize=240, recfm=fb, type=record")) == NULL )
    printf("Could not open data file for read update\n");
Now, imagine if you had to squeeze all this information into one argument of type int!!
As Tuomas Pelkonen says, it's legacy.
Personally, I wonder if some misguided saps conceived of it as being better due to fewer characters typed? In the olden days programmers' time was valued more highly than it is today, since it was less accessible and compilers weren't as great and all that.
This is just speculation, but I can see why some people would favor saving a few characters here and there (note the lack of verbosity in any of the standard library function names... I present string.h's "strstr" and "strchr" as probably the best examples of unnecessary brevity).
