Are there features or semantics introduced, or removed, in C99 which would make a well-defined program written in C89 either
invalid (i.e. not compiling anymore, according to the C99 standard), or
compiling, but having different semantics?
My findings so far, concerning plainly invalid programs:
implicit int (C89 §3.5.2)
implicit function declaration (C89 §3.3.2.2)
not returning from a function expecting a return value (C89 §3.6.6.4)
using new keywords as identifier (for example restrict, inline, etc)
hacks involving //, which is now treated as starting a comment; however, such code is nearly never encountered in production.
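A hedged sketch combining several of these (a hypothetical program; bar would have to be defined in another translation unit):

/* Valid C89; every commented line is rejected or newly
   diagnosed under C99. */
static flag = 1;                 /* implicit int */

main()                           /* implicit int return type */
{
    int restrict = 2;            /* 'restrict' is a C99 keyword */
    return bar(flag + restrict); /* implicit function declaration */
}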
Subtle changes that give the same code different semantics:
Integer division has been made well defined, for example -3 / 2 now has to truncate towards zero (C99 §6.5.5/6), instead of being implementation defined (C89 §3.3.5/6)
strtod gained the ability to parse hexadecimal numbers in C99 (inputs beginning with 0x or 0X)
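For instance, a minimal sketch of the comment change and the division change (the C89 division result is implementation-defined):

#include <stdio.h>

int main(void)
{
    int a = 10 //* hypothetical comment hack */ 2
        ;
    /* C89 tokenizes the line above as "10 / 2" (the block comment
       disappears), so a == 5; in C99, // begins a line comment,
       so a == 10. */
    printf("%d\n", a);

    /* C99 requires truncation toward zero, so -3 / 2 == -1;
       C89 also allowed rounding toward minus infinity (-2). */
    printf("%d\n", -3 / 2);
    return 0;
}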
What have I missed?
There are a lot of programs which would have been considered valid under C89, prior to the publication of C99, which some people insist were never valid. C89 includes a rule that requires that an object of any type may only be accessed using a pointer of that type, a related type, or a character type. Prior to the publication of C99, this rule was generally interpreted as applying only to "named" objects (variables of static or automatic duration which are accessed directly by name), and only in situations where the object in question didn't have its address taken immediately before it was used as a different pointer type. Such interpretation was motivated by a number of factors:
1. One of the stated goals of the Standard was to fit with what existing compilers and programs were doing, and while it would have been rare for existing programs to access discrete named variables using pointers of different types other than in cases where the variable's address was taken immediately before such use, many other usages of pointer type punning were quite common.
2. The rationale for the Standard includes as its sole example a function which receives a pointer of one primitive type to write a global variable of another primitive type in such a way that a compiler would have no particular reason to expect aliasing. Being able to keep global variables in registers is clearly a useful optimization, and the stated purpose of the rule is to allow such optimizations in cases where a compiler would have no reason to expect aliasing to occur. Outlawing constructs like *(int*)&foo = 23; does nothing to aid such optimizations, since the fact that code is taking foo's address and dereferencing it should make it abundantly clear to any compiler that isn't being deliberately obtuse that the code is going to modify foo.
3. There are many kinds of code which semantically require the ability to use memory bits as various types, and nothing in the Standard indicates that the rules were intended to make programmers jump through hoops (e.g. by using memcpy) to achieve semantics that could have been easily obtained in the absence of the rules, especially considering that using memcpy would prevent the compiler from keeping global variables in registers across the pointer accesses (thus defeating the purpose for which the rules were written in the first place).
4. If structure types V and W have a common initial sequence, U is any union type containing both, and p is a V* which identifies the V within a U, then (W*)(U*)p may be used to access those common members, and will be equivalent to (W*)p (see the sketch after this list). Unless a compiler could show that p couldn't possibly be a pointer to a member of some union containing W, it would be required to allow (W*)p to access the common members; it was more helpful to simply treat such common member access as being legitimate regardless of whether or where U might exist than to search for excuses to deny it.
5. Nothing in the C89 rules makes clear how the "type" of a region of allocated storage is defined, or how storage which holds things of one type that are no longer needed might be re-purposed to hold things of another.
6. Keeping track of registers allocated to named variables was easier than keeping track of registers allocated to other pointer expressions, and code which was interested in minimizing the number of loads and stores via pointers would often copy things to named variables and work on them there.
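A sketch of the pattern item 4 describes, using hypothetical types V and W:

struct V { int tag; int v_data; };
struct W { int tag; double w_data; };
union U { struct V v; struct W w; };

/* p identifies the V member inside some union U object. */
int read_tag(struct V *p)
{
    struct W *q = (struct W *)(union U *)p;  /* equivalent to (struct W *)p */
    return q->tag;                           /* common initial member */
}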
C99 added "effective type" rules which are explicitly applicable to allocated storage. Some people insist those were merely "clarifications" of rules which already existed in C89, but for the above reasons I find that viewpoint untenable. It's fashionable to claim that the only reasons compilers didn't apply aliasing rules to unnamed objects are #5 and #6, but objections #1-#4 are equally significant (and continue to apply to C99 just as much as C89). Still, since C99 added the effective type rules, many constructs which would have been treated as legitimate by most common interpretations of the C89 rules are clearly forbidden.
As an element of contrast and comparison, the git/git codebase strictly conforms to C89 and does not use C99 initializers or features from newer C standards.
This is detailed in Git 2.23 (Q3 2019) in Git Coding Guidelines.
This answer illustrates post-C89 features that might be compatible with C89.
See commit cc0c429 (16 Jul 2019) by Junio C Hamano (gitster).
(Merged by Junio C Hamano -- gitster -- in commit fe9dc6b, 25 Jul 2019)
CodingGuidelines: spell out post-C89 rules
Even though we have been sticking to C89, there are a few handy features we borrow from more recent C language in our codebase after trying them as weather balloons and seeing that nobody screamed.
Spell them out.
While at it, extend the existing variable declaration rule a bit to
read better with the newly spelled out rule for the for loop.
The coding guidelines now include:
You should not use features from newer C standard, even if your compiler groks them.
There are a few exceptions to this guideline:
since early 2012 with e1327023ea (Git v1.7.9.2), we have been using an enum definition whose last element is followed by a comma.
This, like an array initializer that ends with a trailing comma, can be used to reduce the patch noise when adding a new identifier at the end.
since mid 2017 with cbc0f81d (Git v2.15.0-rc0), we have been using designated
initializers for struct (e.g. "struct t v = { .val = 'a' };")
There are certain C99 features that might be nice to use in our code base, but we've hesitated to do so in order to avoid breaking compatibility with older compilers.
But we don't actually know if people are even using pre-C99 compilers these days.
If this patch can survive a few releases without complaint, then we can feel more confident that designated initializers are widely supported by our user base.
It also is an indication that other C99 features may be supported, but not a guarantee (e.g., gcc had designated initializers before C99 existed).
since mid 2017 with 512f41cf (Git v2.15.0-rc0), we have been using designated initializers for array (e.g. "int array[10] = { [5] = 2 }").
This is another test balloon to see if we get complaints from people
whose compilers do not support designated initializer for arrays.
These used to be forbidden, but we have not heard any breakage report, and they are assumed to be safe.
Variables have to be declared at the beginning of the block, before the first statement (i.e. -Wdeclaration-after-statement).
Declaring a variable in the for loop "for (int i = 0; i < 10; i++)" is still not allowed in this codebase.
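Taken together, a hypothetical snippet that stays within the quoted rules might look like this:

enum colors {
    COLOR_RED,
    COLOR_GREEN,
    COLOR_BLUE,                 /* trailing comma: allowed */
};

struct t { char val; };
static struct t v = { .val = 'a' };   /* struct designated initializer */
static int array[10] = { [5] = 2 };   /* array designated initializer */

void iterate(void)
{
    int i;                      /* declared at the top of the block */
    for (i = 0; i < 10; i++)    /* not: for (int i = 0; ...) */
        array[i] += v.val;
}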
Related
First, I apologize if this appears to be a duplicate, but I couldn't find exactly this question elsewhere.
I was reading through N1570, specifically §6.5¶7, which reads:
An object shall have its stored value accessed only by an lvalue expression that has one of the following types:
— a type compatible with the effective type of the object,
— a qualified version of a type compatible with the effective type of the object,
— a type that is the signed or unsigned type corresponding to the effective type of the object,
— a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object,
— an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
— a character type.
This reminded me of a common idiom I had seen in (BSD-like) socket programming, especially in the connect() call. Though the second argument to connect() is a struct sockaddr *, I have often seen passed to it a struct sockaddr_in *, which appears to work because they share a similar initial element. My question is:
To which contingency detailed in the above rule does this situation apply and why, or is it now undefined behavior that's an artifact of previous standard(s)?
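For reference, here is a minimal sketch of the idiom in question (the usual BSD socket headers are assumed; error handling omitted):

#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int connect_example(int fd)
{
    struct sockaddr_in in;
    memset(&in, 0, sizeof in);
    in.sin_family = AF_INET;
    in.sin_port = htons(80);
    in.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    /* A struct sockaddr_in * is passed where a struct sockaddr *
       is expected, relying on the shared initial member(s). */
    return connect(fd, (struct sockaddr *)&in, sizeof in);
}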
This behavior is not defined by the C standard.
The behavior is defined by The Single Unix Specification and/or other documents relating to the software you are using, albeit in part implicitly.
The phrasing that “An object shall have its stored value accessed only by…” is misleading. The C standard cannot compel you to do anything; you are not obligated to obey its “shall” requirements. In terms of the C standard, the only consequence of not obeying its requirements is that the C standard does not define the behavior. This does not prohibit other documents from defining the behavior.
In the netinet/in.h documentation, we see "The sockaddr_in structure is used to store addresses for the Internet protocol family. Values of this type must be cast to struct sockaddr for use with the socket interfaces defined in this document." So the documentation tells us not only that we should, but that we must, convert a sockaddr_in to a sockaddr. The fact that we must do so implies that the software supports it and that it will work. (Note that the phrasing is imprecise here; we do not actually cast a sockaddr_in to a sockaddr; rather, we convert the pointer, causing the sockaddr_in object in memory to be treated as a sockaddr.)
Thus there is an implied promise that the operating system, libraries, and developer tools provided for a Unix implementation support this.
This is an extension to the C language: Where behavior is not defined by the C standard, other documents may provide definitions and allow you to write software that cannot be written using the C standard alone. Behavior that the C standard says is undefined is not behavior that is prohibited but rather is an empty space that may be filled in by other specifications.
The rules about common initial sequences go back to 1974. The earliest rules about "strict aliasing" only go back to 1989. The intention of the latter was not that they trump everything else, but merely that compilers be allowed to perform optimizations that their customers would find useful without being branded non-conforming. The Standard makes clear that in situations where one part of the Standard and/or an implementation's documentation would describe the behavior of some action but another part of the Standard would characterize it as Undefined Behavior, implementations may opt to give priority to the first, and the Rationale makes clear that the authors thought "the marketplace" would be better placed than the Committee to determine when implementations should do so.
Under a sufficiently pedantic reading of the N1570 6.5p7 constraints, almost all programs violate them, but in ways that won't matter unless an implementation is being sufficiently obtuse. The Standard makes no attempt to list all the situations in which an object of one type may be accessed by an lvalue of another, but rather lists those where a compiler must allow for an object of one type to be accessed by a seemingly unrelated lvalue of another. Given the code sequence:
struct S { int intMember; } someStruct;  /* hypothetical declaration */
int x;
int *p[10];

p[2] = &someStruct.intMember;
...
*p[2] = 23;                    /* writes someStruct.intMember via p[2] */
x = someStruct.intMember;      /* must observe the write above */
In the absence of the rules in 6.5p7, unless a compiler kept track of where p[2] came from, it would have no reason to recognize that the read of someStruct.intMember might be targeting storage that was just written using *p[2]. On the other hand, given the code:
struct S { int intMember; } someStruct;  /* hypothetical declaration */
int x;
int *p[10];
...
someStruct.intMember = 12;
p[2] = &someStruct.intMember;  /* the address is visibly taken here */
x = *p[2];
Here, there is no rule that would actually allow the storage associated with a structure to be accessed by an lvalue of that member type, but unless a compiler is being deliberately blind, it would be able to see that after the first assignment to someStruct.intMember, the address of that member is being taken, and should either:
Account for all actions that will ever be done with the resulting pointer, if it is able to do so, or
Refrain from assuming that the structure's storage will not be accessed between the previous and succeeding actions using the structure's type.
I don't think it ever occurred to the people who were writing the rules that would later be renumbered as N1570 6.5p7 that they would be construed so as to disallow common patterns that exploited the Common Initial Sequence rule. As noted, most programs violate the constraints of 6.5p7, but do so in ways that would be processed predictably by any compiler that isn't being obtuse; those using the Common Initial Sequence guarantees would have fallen into that category. Since the authors of the Standard recognized the possibility of a "conforming" compiler that was only capable of meaningfully processing one contrived and useless program, the fact that an obtuse compiler could abuse the "aliasing rules" wasn't seen as a defect.
Is it OK to do something like this?
struct MyStruct {
int x;
const char y; // notice the const
unsigned short z;
};
struct MyStruct AStruct;
fread(&AStruct, sizeof AStruct, 1,
SomeFileThatWasDefinedEarlierButIsntIncludedInThisCodeSnippet);
I am changing the constant struct member by writing to the entire struct from a file. How is that supposed to be handled? Is this undefined behavior, to write to a non-constant struct, if one or more of the struct members is constant? If so, what is the accepted practice to handle constant struct members?
It's undefined behavior.
The C11 draft n1570 says:
6.7.3 Type qualifiers
...
...
If an attempt is made to modify an object defined with a const-qualified type through use of an lvalue with non-const-qualified type, the behavior is undefined.
My interpretation of this is: To be compliant with the standard, you are only allowed to set the value of the const member during object creation (aka initialization) like:
struct MyStruct AStruct = {1, 'a', 2}; // Fine
Doing
AStruct.y = 'b'; // Error
should give a compiler error.
You can trick the compiler with code like:
memcpy(&AStruct, &AnotherStruct, sizeof AStruct);
It will probably work fine on most systems but it's still undefined behavior according to the C11 standard.
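A complete sketch of that trick (it compiles, but per C11 6.7.3 the behavior remains undefined):

#include <string.h>

struct MyStruct { int x; const char y; unsigned short z; };

int main(void)
{
    struct MyStruct a = {1, 'a', 2};
    struct MyStruct b = {3, 'b', 4};
    memcpy(&a, &b, sizeof a);   /* bypasses the const on y: UB */
    return 0;
}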
Also see memcpy with destination pointer to const data
How are constant struct members handled in C?
Read the C11 standard n1570 and its §6.7.3 related to the const qualifier.
If so, what is the accepted practice to handle constant struct members?
It depends if you care more about strict conformance to the C standard, or about practical implementations. See this draft report (work in progress in June 2020) discussing these concerns. Such considerations depend on the development efforts allocated on your project, and on portability of your software (to other platforms).
It is likely that you won't spend the same efforts on the embedded software of a Covid respirator (or inside some ICBM) and on the web server (like lighttpd or a library such as libonion or some FastCGI application) inside a cheap consumer appliance or running on some cheap rented Linux VPS.
Consider also using static analysis tools such as Frama-C or the Clang static analyzer on your code.
Regarding undefined behavior, be sure to read this blog.
See also this answer to a related question.
I am changing the constant struct member by writing to the entire struct from a file.
Then endianness issues and file system issues are important. Consider perhaps using libraries related to JSON or YAML, perhaps mixed with sqlite or PostgreSQL or TokyoCabinet (and the source code of all these open source libraries, or of the Linux kernel, could be inspirational).
The Standard is a bit sloppy in its definition and use of the term "object". For a statement like "All X must be Y" or "No X may be Z" to be meaningful, the definition of X must have criteria that are not only satisfied by all X, but that would unambiguously exclude all objects that aren't required to be Y or are allowed to be Z.
The definition of "object", however, is simply "region of data storage in the execution environment, the contents of which can represent values". Such a definition, however, fails to make clear whether every possible range of consecutive addresses is always an "object", or when various possible ranges of addresses are subject to the constraints that apply to "objects" and when they are not.
In order for the Standard to unambiguously classify a corner case as defined or undefined, the Committee would have to reach a consensus as to whether it should be defined or undefined. If the Committee members fundamentally disagree about whether some cases should be defined or undefined, the only way to pass a rule by consensus is for the rule to be written ambiguously, in a way that allows people with contradictory views about what should be defined to each think the rule supports their viewpoint. While I don't think the Committee members explicitly wanted to make their rules ambiguous, I don't think the Committee could have reached consensus on rules that weren't.
Given that situation, many actions, including updating structures that have constant members, most likely fall in the realm of actions which the Standard doesn't require implementations to process meaningfully, but which the authors of the Standard would have expected implementations to process meaningfully anyhow.
A few years ago, before standardization of C, it was allowed to use struct selectors on addresses. For example, the following code was allowed and frequently used.
#define PTR 0xAA000
struct { int integ; };
func() {
int i;
i = PTR->integ; /* here, c is set to the first int at PTR */
return c;
}
Maybe it wasn't very neat, but I like it. In my opinion, the power and the versatility of this language relies also on its lack of constraints. Nowadays, compilers just dump an error. I'd like to know if it is possible to remove this restraint in the GNU C compiler.
PS: similar code was used on the UNIX kernel by the inventors of C. (in V6, some dummy structures have been declared in param.h)
'A few years ago' is actually a very, very long time ago. AFAICR, the C in 7th Edition UNIX™ (1979, a decade before the C89 standard was defined) didn't support that notation any more (but see below).
The code shown in the question only worked when all structure members of all structures shared the same name space. That meant that structure.integ or pointer->integ always referred to an int at the start of a structure because there was only one possible structure member integ across the entire program.
Note that in 'modern' C (1978 onwards), you cannot reference the structure type; there's neither a structure tag nor a typedef for it — the type is useless. The original code also references an undefined variable c.
To make it work, you'd need something like:
#define PTR 0xAA000
struct integ { int integ; };
int func(void)
{
struct integ *ptr = (struct integ *)PTR;
return ptr->integ;
}
C for 7th Edition UNIX
I suggested that the C with 7th Edition UNIX supported separate namespaces for separate structure types. However, the C Reference Manual published with the UNIX Programmer's Manual Vol 2 mentions in §8.5 Structures:
The names of structure members and structure tags may be the same as ordinary variables, since a distinction can
be made by context. However, names of tags and members must be distinct. The same member name can appear in
different structures only if the two members are of the same type and if their origin with respect to their structure is
the same; thus separate structures can share a common initial segment.
However, that same manual also mentions the notations (see also What does =+ mean in C):
§7.14.2 lvalue =+ expression
§7.14.3 lvalue =- expression
§7.14.4 lvalue =* expression
§7.14.5 lvalue =/ expression
§7.14.6 lvalue =% expression
§7.14.7 lvalue =>> expression
§7.14.8 lvalue =<< expression
§7.14.9 lvalue =& expression
§7.14.10 lvalue =^ expression
§7.14.11 lvalue =| expression
The behavior of an expression of the form "E1 =op E2" may be inferred by taking it as equivalent to "E1 = E1 op E2"; however, E1 is evaluated only once. Moreover, expressions like "i =+ p" in which a pointer is added to an integer, are forbidden.
AFAICR, that was not supported in the first C compilers I used (1983 — I'm ancient, but not quite that ancient); only the modern += notations were allowed. In other words, I don't think the C described by that reference manual was fully current when the product was released. (I've not checked my 1st Edition of K&R — does anyone have one on hand to check?) You can find the UNIX 7th Edition manuals online at http://cm.bell-labs.com/7thEdMan/.
By giving the structure a type name and adjusting your macro slightly you can achieve the same effect in your code:
typedef struct { int integ; } PTR_t;
#define PTR ((PTR_t*)0xAA000)
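With that in place, the original function (as a sketch) reduces to:

int func(void)
{
    return PTR->integ;   /* reads the int at address 0xAA000 */
}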
I'd like to know if it is possible to remove this restraint in the GNU C compiler.
I'm reasonably sure the answer is no -- that is, unless you rewrite gcc to support the older version of the language.
The gcc manual documents the -traditional command-line option:
-traditional
-traditional-cpp
Formerly, these options caused GCC to attempt to emulate a pre-standard C compiler. They are now only supported with the -E switch. The preprocessor continues to support a pre-standard mode. See the GNU CPP manual for details.
This implies that modern gcc (the quote is from the 4.8.0 manual) no longer supports pre-ANSI C.
The particular feature you're referring to isn't just pre-ANSI, it's very pre-ANSI. The ANSI standard was published in 1989. The first edition of K&R was published in 1978, and as I recall the language it described didn't support the feature you're looking for. The initial release of gcc was in 1987, so it's very likely that no version of gcc has ever supported that feature.
Furthermore, enabling such a feature would break existing code which may depend on the ability to use the same member name in different structures. (Traces of the old rules survive in the standard C library, where for example the members of type struct tm all have names starting with tm_; in modern C that would not be necessary.)
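For instance, the first few members of struct tm, as commonly declared in <time.h>:

/* The tm_ prefixes are a relic of early C's single shared
   namespace for structure members. */
struct tm {
    int tm_sec;   /* seconds after the minute */
    int tm_min;   /* minutes after the hour */
    int tm_hour;  /* hours since midnight */
    /* ... */
};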
You might be able to find sources for an ancient C compiler that works the way you want. The late Dennis Ritchie's home page would be a good starting point for that. It's not at all obvious that you'd be able to get such a compiler working on any modern system without a great deal of work. And the result would be a compiler that doesn't support a number of newer features of C that you might find useful, such as the long, signed, and unsigned keywords, the ability to pass structures by value, function prototypes, and diagnostics for attempts to mix pointers and integers.
C is better now than it was then. There are a few dangerous things that are slightly more difficult than they were, but I'm not aware that any actual expressive power has been lost.
Why is it sensible for a language to allow implicit declarations of functions and typeless variables? I get that C is old, but allowing declarations to be omitted, defaulting to int() (or int in the case of variables), doesn't seem so sane to me, even back then.
So, why was it originally introduced? Was it ever really useful? Is it actually (still) used?
Note: I realise that modern compilers give you warnings (depending on which flags you pass them), and you can suppress this feature. That's not the question!
Example:
int main() {
static bar = 7; // defaults to "int bar"
return foo(bar); // defaults to a "int foo()"
}
int foo(int i) {
return i;
}
See Dennis Ritchie's "The Development of the C Language": http://web.archive.org/web/20080902003601/http://cm.bell-labs.com/who/dmr/chist.html
For instance,
In contrast to the pervasive syntax variation that occurred during the
creation of B, the core semantic content of BCPL—its type structure
and expression evaluation rules—remained intact. Both languages are
typeless, or rather have a single data type, the 'word', or 'cell', a
fixed-length bit pattern. Memory in these languages consists of a
linear array of such cells, and the meaning of the contents of a cell
depends on the operation applied. The + operator, for example, simply
adds its operands using the machine's integer add instruction, and the
other arithmetic operations are equally unconscious of the actual
meaning of their operands. Because memory is a linear array, it is
possible to interpret the value in a cell as an index in this array,
and BCPL supplies an operator for this purpose. In the original
language it was spelled rv, and later !, while B uses the unary *.
Thus, if p is a cell containing the index of (or address of, or
pointer to) another cell, *p refers to the contents of the pointed-to
cell, either as a value in an expression or as the target of an
assignment.
This typelessness persisted in C until the authors started porting it to machines with different word lengths:
The language changes during this period, especially around 1977, were largely focused on considerations of portability and type safety,
in an effort to cope with the problems we foresaw and observed in
moving a considerable body of code to the new Interdata platform. C at
that time still manifested strong signs of its typeless origins.
Pointers, for example, were barely distinguished from integral memory
indices in early language manuals or extant code; the similarity of
the arithmetic properties of character pointers and unsigned integers
made it hard to resist the temptation to identify them. The unsigned
types were added to make unsigned arithmetic available without
confusing it with pointer manipulation. Similarly, the early language
condoned assignments between integers and pointers, but this practice
began to be discouraged; a notation for type conversions (called
`casts' from the example of Algol 68) was invented to specify type
conversions more explicitly. Beguiled by the example of PL/I, early C
did not tie structure pointers firmly to the structures they pointed
to, and permitted programmers to write pointer->member almost without
regard to the type of pointer; such an expression was taken
uncritically as a reference to a region of memory designated by the
pointer, while the member name specified only an offset and a type.
Programming languages evolve as programming practices change. In modern C and the modern programming environment, where many programmers have never written assembly language, the notion that ints and pointers are interchangeable may seem nearly unfathomable and unjustifiable.
It's the usual story — hysterical raisins (aka 'historical reasons').
In the beginning, the big computers that C ran on (DEC PDP-11) had 64 KiB for data and code (later 64 KiB for each). There was a limit to how complex you could make the compiler and still have it run. Indeed, there was scepticism that you could write an O/S using a high-level language such as C, rather than needing to use assembler. So, there were size constraints. Also, we are talking a long time ago, in the early to mid 1970s. Computing in general was not as mature a discipline as it is now (and compilers specifically were much less well understood). Also, the languages from which C was derived (B and BCPL) were typeless. All these were factors.
The language has evolved since then (thank goodness). As has been extensively noted in comments and down-voted answers, in strict C99, implicit int for variables and implicit function declarations have both been made obsolete. However, most compilers still recognize the old syntax and permit its use, with more or less warnings, to retain backwards compatibility, so that old source code continues to compile and run as it always did. C89 largely standardized the language as it was, warts (gets()) and all. This was necessary to make the C89 standard acceptable.
There is still old code around using the old notations — I spend quite a lot of time working on an ancient code base (circa 1982 for the oldest parts) which still hasn't been fully converted to prototypes everywhere (and that annoys me intensely, but there's only so much one person can do on a code base with multiple millions of lines of code). Very little of it still has 'implicit int' for variables; there are too many places where functions are not declared before use, and a few places where the return type of a function is still implicitly int. If you don't have to work with such messes, be grateful to those who have gone before you.
Probably the best explanation for "why" comes from here:
Two ideas are most characteristic of C among languages of its class: the relationship between arrays and pointers, and the way in which declaration syntax mimics expression syntax. They are also among its most frequently criticized features, and often serve as stumbling blocks to the beginner. In both cases, historical accidents or mistakes have exacerbated their difficulty. The most important of these has been the tolerance of C compilers to errors in type. As should be clear from the history above, C evolved from typeless languages. It did not suddenly appear to its earliest users and developers as an entirely new language with its own rules; instead we continually had to adapt existing programs as the language developed, and make allowance for an existing body of code. (Later, the ANSI X3J11 committee standardizing C would face the same problem.)
Systems programming languages don't necessarily need types; you're mucking around with bytes and words, not floats and ints and structs and strings. The type system was grafted onto it in bits and pieces, rather than being part of the language from the very beginning. As C has moved from being primarily a systems programming language to a general-purpose programming language, it has become more rigorous in how it handles types. But, even though paradigms come and go, legacy code is forever. There's still a lot of code out there that relies on that implicit int, and the standards committee is reluctant to break anything that's working. That's why it took almost 30 years to get rid of it.
A long, long time ago, back in the K&R, pre-ANSI days, functions looked quite different than they do today.
add_numbers(x, y)
{
return x + y;
}
int ansi_add_numbers(int x, int y); // modern, ANSI C
When you call a function like add_numbers, there is an important difference in the calling conventions: all types are "promoted" when the function is called. So if you do this:
// no prototype for add_numbers
short x = 3;
short y = 5;
short z = add_numbers(x, y);
What happens is x is promoted to int, y is promoted to int, and the return type is assumed to be int by default. Likewise, if you pass a float it is promoted to double. These rules ensured that prototypes weren't necessary, as long as you got the right return type, and as long as you passed the right number and type of arguments.
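As a hedged sketch of those rules in action, here is a hypothetical old-style definition written to match the promoted types:

/* With no prototype in scope, the caller promotes each short
   argument to int, so the old-style definition declares ints. */
int add_shorts(a, b)
int a, b;
{
    return a + b;
}

A caller can then pass short values, as in the earlier snippet, and the promoted arguments line up with the definition.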
Note that the syntax for prototypes is different:
// K&R style function
// number of parameters is UNKNOWN, but fixed
// return type is known (int is default)
add_numbers();
// ANSI style function
// number of parameters is known, types are fixed
// return type is known
int ansi_add_numbers(int x, int y);
A common practice back in the old days was to avoid header files for the most part, and just stick the prototypes directly in your code:
void *malloc();
char *buf = malloc(1024);
if (!buf) abort();
Header files are accepted as a necessary evil in C these days, but just as modern C derivatives (Java, C#, etc.) have gotten rid of header files, old-timers didn't really like using header files either.
Type safety
From what I understand about the old old days of pre-C, there wasn't always much of a static typing system. Everything was an int, including pointers. In this old language, the only point of function prototypes would be to catch arity errors.
So if we hypothesize that functions were added to the language first, and then a static type system was added later, this theory explains why prototypes are optional. This theory also explains why arrays decay to pointers when used as function arguments -- since in this proto-C, arrays were nothing more than pointers which get automatically initialized to point to some space on the stack. For example, something like the following may have been possible:
function()
{
auto x[7];
x += 1;
}
Citations
The Development of the C Language, Dennis M. Ritchie
On typelessness:
Both languages [B and BCPL] are typeless, or rather have a single data type, the 'word,' or 'cell,' a fixed-length bit pattern.
On the equivalence of integers and pointers:
Thus, if p is a cell containing the index of (or address of, or pointer to) another cell, *p refers to the contents of the pointed-to cell, either as a value in an expression or as the target of an assignment.
Evidence for the theory that prototypes were omitted due to size constraints:
During development, he continually struggled against memory limitations: each language addition inflated the compiler so it could barely fit, but each rewrite taking advantage of the feature reduced its size.
Some food for thought. (It's not an answer; we actually know the answer — it's permitted for backward compatibility.)
And people should look at a COBOL code base or f66 libraries before asking why it hasn't been cleaned up in 30 years or so!
gcc with its default settings does not emit any warnings.
With -Wall and -std=c99, gcc does emit the expected warnings:
main.c:2: warning: type defaults to ‘int’ in declaration of ‘bar’
main.c:3: warning: implicit declaration of function ‘foo’
The lint functionality built into modern gcc is showing its colors.
Interestingly, the modern clone of lint, the secure lint — I mean splint — gives only one warning by default.
main.c:3:10: Unrecognized identifier: foo
Identifier used in code has not been declared. (Use -unrecog to inhibit
warning)
The LLVM C compiler clang, which like gcc has a static analyser built in, emits both warnings by default.
main.c:2:10: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
static bar = 7; // defaults to "int bar"
~~~~~~ ^
main.c:3:10: warning: implicit declaration of function 'foo' is invalid in C99
[-Wimplicit-function-declaration]
return foo(bar); // defaults to a "int foo()"
^
People used to think we wouldn't need backward compatibility for 80's stuff, that all the code would be cleaned up or replaced. But it turns out that's not the case. A lot of production code remains stuck in prehistoric, non-standard times.
EDIT:
I didn't look through other answers before posting mine. I may have misunderstood the intention of the poster. But the thing is, there was a time when you hand-compiled your code and used toggle switches to put the binary pattern into memory. People didn't need a "type system" then. Nor did the PDP machine in front of which Ritchie and Thompson posed in the famous photograph.
Don't look at the beard, look at the "toggles", which I heard were used to bootstrap the machine.
And also look at how they used to boot UNIX, as described in this paper from the Unix 7th Edition manual:
http://wolfram.schneider.org/bsd/7thEdManVol2/setup/setup.html
The point is that they didn't need so many software layers to manage a machine with KB-sized memory. Knuth's MIX has 4000 words. You don't need all these types to program a MIX computer. You can happily compare an integer with a pointer on a machine like this.
I thought the reason they did this was quite self-evident, so I focused on how much is left to be cleaned up.
The Python documentation claims that the following does not work on "some platforms or compilers":
int foo(int); // Defined in another translation unit.
struct X { int (*fptr)(int); } x = {&foo};
Specifically, the Python docs say:
We’d like to just assign this to the tp_new slot, but we can’t, for portability sake. On some platforms or compilers, we can’t statically initialize a structure member with a function defined in another C module, so, instead, we’ll assign the tp_new slot in the module initialization function just before calling PyType_Ready(). --http://docs.python.org/extending/newtypes.html
Is the above standard C89 and/or C99? What compilers specifically cannot handle the above?
That kind of initialization has been permitted since at least C90.
From C90 6.5.7 "Initialization"
All the expressions in an initializer for an object that has static storage duration or in an initializer list for an object that has aggregate or union type shall be constant expressions.
And 6.4 "Constant expressions":
An address constant is a pointer to an lvalue designating an object of static storage duration, or to a function designator; it shall be created explicitly, using the unary & operator...
But it's certainly possible that some implementations might have trouble with the construct - I'd guess that wouldn't be true for modern implementations.
According to n1570 6.6 paragraph 9, the address of a function is an address constant; according to 6.7.9, this means that it can be used to initialize global variables. I am almost certain this is also valid C89.
However,
On sane platforms, the value of a function pointer (or any pointer, other than NULL) is only known at runtime. This means that the initialization of your structure can't take place until runtime. This doesn't always apply to executables but it almost always applies to shared objects such as Python extensions. I recommend reading Ulrich Drepper's essay on the subject (link).
I am not aware of which platforms this is broken on, but if the Python developers mention it, it's almost certainly because one of them got bitten by it. If you're really curious, try looking at an old Python extension and seeing if there's an appropriate message in the commit logs.
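If a platform did reject the static form, the workaround the Python docs describe amounts to a plain runtime assignment, e.g. (a sketch with hypothetical names):

int foo(int);                  /* defined in another translation unit */

struct X { int (*fptr)(int); };
static struct X x;             /* zero-initialized; filled in at startup */

void module_init(void)
{
    x.fptr = &foo;             /* a runtime store always works */
}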
Edit: It looks like most Python modules just do the normal thing and initialize type structures statically, e.g., static type obj = { function_ptr ... };. For example, look at the mmap module, which is loaded dynamically.
The example is definitely conforming to C99, and AFAIR also to C89.
If some particular (oldish) compiler has a problem with it, I don't think that the proposed solution is the way to go. Don't impose dynamic initialization on platforms that behave well. Instead, special-case the weirdos that need special treatment, and try to phase them out as quickly as you can.