Technical legality of incompatible pointer assignments in C

The C11 standard ISO/IEC 9899:2011 (E) states the following constraints for simple assignments in §6.5.16.1/1:
One of the following shall hold:
the left operand has atomic, qualified, or unqualified arithmetic type, and the right has arithmetic type;
the left operand has an atomic, qualified, or unqualified version of a structure or union type compatible with the type of the right;
the left operand has atomic, qualified, or unqualified pointer type, and (considering the type the left operand would have after lvalue conversion) both operands are pointers to qualified or unqualified versions of compatible types, and the type pointed to by the left has all the qualifiers of the type pointed to by the right;
the left operand has atomic, qualified, or unqualified pointer type, and (considering the type the left operand would have after lvalue conversion) one operand is a pointer to an object type, and the other is a pointer to a qualified or unqualified version of void, and the type pointed to by the left has all the qualifiers of the type pointed to by the right;
the left operand is an atomic, qualified, or unqualified pointer, and the right is a null pointer constant; or
the left operand has type atomic, qualified, or unqualified _Bool, and the right is a pointer.
I am interested in the case in which both sides are pointers to incompatible types different from void. If I understand correctly, this should at the very least invoke UB, as it violates this constraint. One example of incompatible types should be (according to §6.2.7 and §6.7.2) int and double.
Therefore the following program should be in violation:
int main(void) {
    int a = 17;
    double* p;
    p = &a;
    (void)p;
}
Both gcc and clang warn about "-Wincompatible-pointer-types", but do not abort compilation (compilation with -std=c11 -Wall -Wextra -pedantic).
Similarly, the following program only leads to a "-Wint-conversion" warning, while compiling just fine.
int main(void) {
    int a;
    double* p;
    p = a;
    (void)p;
}
Coming from C++, I expected that either of those test cases would require a cast to compile. Is there any reason why either of the programs would be standards-legal? Or, are there at least significant historic reasons for supporting this code style even when disabling the entertaining GNU C extensions by explicitly using -std=c11 instead of -std=gnu11?
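For comparison, the versions with explicit casts (roughly what I would have written in C++) do not violate the quoted constraint, though the conversions themselves remain questionable:
int main(void) {
    int a = 17;
    double* p;
    p = (double*)&a;  /* explicit cast: no constraint violation
                         (alignment and aliasing caveats still apply) */
    p = (double*)a;   /* integer-to-pointer cast: implementation-defined result (6.3.2.3) */
    (void)p;
}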

Is there any reason why either of the programs would be standards-legal?
These programs are not "standards-legal". They contain constraint violations and you already quoted the right text from the standard.
The compilers conform to the standard by producing a diagnostic for constraint violation. The standard does not require compilation to abort in the case of a constraint violation or other erroneous program.
The standard doesn't say so in as many words, but the only reasonable conclusion is that any executable generated from a program containing a constraint violation has completely undefined behaviour. (I have seen people try to argue otherwise, though.)
Speculation follows: C (and C++) are used for many purposes; sometimes people want "high level assembler" for their machine and don't care about portability or standards. Presumably the compiler vendors set the defaults to what they think their target audience would prefer.

The compiler flag (both gcc and clang) to request checks for strict standards conformance and to refuse to compile nonconformant code is -pedantic-errors:
$ gcc -std=c11 -pedantic-errors x.c
x.c: In function ‘main’:
x.c:3:15: error: initialization from incompatible pointer type [-Wincompatible-pointer-types]
double* p = &a;
^
Clang:
$ clang -std=c11 -pedantic-errors x.c
x.c:3:11: error: incompatible pointer types initializing 'double *' with an
expression of type 'int *' [-Werror,-Wincompatible-pointer-types]
double* p = &a;
^ ~~
1 error generated.
A significant proportion (to say the least) of typical C code in the wild is nonconformant, so -pedantic-errors would cause most C programs and libraries to fail to compile.

Your code example and your citation of the standard do not match. The example is initialization, and 6.5.16 talks about assignment.
Confusingly, the matching-type requirement is in a constraints section (6.5.16) for assignment, but "only" in the semantics section (6.7.9) for initialization. So the compilers have the "right" not to issue a diagnostic for initialization.
In C, constraint violations only require a "diagnostic"; the compiler may well continue compilation, but there is no guarantee that the resulting executable is valid.
On my platform (Debian testing), both compilers give me a diagnostic without any options, so I guess your installation must be quite old and obsolete.
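To make the distinction concrete, here is a minimal pair (my illustration, not from the question): the first pointer is initialized, the second is assigned.
void example(void) {
    int a = 17;
    double* p1 = &a;  /* initialization: governed by 6.7.9 */
    double* p2;
    p2 = &a;          /* simple assignment: the 6.5.16.1 constraint quoted in the question */
    (void)p1;
    (void)p2;
}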

No, Yes
It's really very simple.
No, not standards legal. Yes, significant historical reasons.
C did not originally even have casts. As a systems programming language, it was used as an ultra-powerful glorified assembler; back in the day that was not only reasonable, it was best practice and really the only practice.
A key piece of information should shine a light on things: it really is not the compiler's job to either implement or enforce the specification. The specification is, actually, literally, only a suggestion. The compiler's actual job is to compile all the C code that was ever written, including pre-C11, pre-C99, pre-C89, and even pre-K&R. This is why there are so many optional restrictions. Projects with modern code styles turn on strictly conforming modes.
There is the way C is defined in the standards, and there is the way C is used in practice. So you can see, the compiler simply can't refuse to build the code.
Over decades, developers have been shifting to portable, strictly conforming code, but when C first appeared, it was used a bit like a really amazingly powerful assembler, and it was kind of open season on address arithmetic and type punning. Programs in those days were mostly written for one architecture at a time.

Related

IOCCC 1987/westley.c - lvalue issues with GCC [duplicate]

This question already has answers here:
1998 vintage C code now fails to compile under gcc
This line-palindromic entry from the 1987 IOCCC:
https://www.ioccc.org/years.html#1987_westley
...is causing TCC 0.9.27 no issues during default compilation and works as intended.
However, GCC 9.3.0, even in -std=c89 mode, complains that the following instances of (int) (tni) are not lvalues:
for (; (int) (tni);)
(int) (tni) = reviled;
^
(lvalue required as left operand of assignment)
...
for ((int) (tni)++, ++reviled; reviled * *deliver; deliver++, ++(int) (tni))
^~
(lvalue required as increment operand)
(code beautified for better context)
My current thoughts:
In the = case, I suspect that the use of (int) (tni) as a condition in the for loop is disqualifying it as an lvalue, but I am not sure.
In the ++ case, I can see later in that code how its palindromic nature forces the author to use a -- operator between (int) and (tni), which is not flagged as an issue. So GCC seems to require the ++ operator just before the variable, not before the cast, but hints at this requirement with an lvalue complaint.
Is there a definitive answer to these GCC complaints? Is TCC too lax in letting these off the hook?
Thanks in advance!
EDIT: I was kindly pointed towards a similar question which answers the casting issue here - please see my comment below for the solution!
TCC is, as is well known, not a conforming C implementation - it tries to be a small and fast compiler that compiles correct C code, and it often does not produce the diagnostics that the standard would require. As is known even more widely, the first C standard came into being in 1989, and it is most widely known of all that the year 1987 preceded 1989.
C11 6.5.4p5:
Preceding an expression by a parenthesized type name converts the value of the expression to the named type. This construction is called a cast. 104) A cast that specifies no conversion has no effect on the type or value of an expression.
The footnote 104 notes that:
A cast does not yield an lvalue. Thus, a cast to a qualified type has the same effect as a cast to the unqualified version of the type.
For assignment operator, 6.5.16p2 says:
An assignment operator shall have a modifiable lvalue as its left operand.
6.5.16p2 is in constraint section, so violations must be diagnosed.
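A reduced illustration of the two constraint violations GCC reports, with plain identifiers (my own example; it is expected to be rejected, which is the point):
int tni = 0, reviled = 1;

void example(void) {
    (int)(tni) = reviled;  /* constraint violation: a cast does not yield an lvalue (6.5.16p2) */
    ++(int)(tni);          /* likewise: ++ requires a modifiable lvalue (6.5.3.1) */
    tni = reviled;         /* fine: the left operand is the object itself */
    ++tni;                 /* fine */
}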

Why is this claimed dereferencing type-punned pointer warning compiler-specific?

I've read various posts on Stack Overflow RE: the dereferencing type-punned pointer error. My understanding is that the error is essentially the compiler warning of the danger of accessing an object through a pointer of a different type (though an exception appears to be made for char*), which is an understandable and reasonable warning.
My question is specific to the code below: why does casting the address of a pointer to a void** qualify for this warning (promoted to error via -Werror)?
Moreover, this code is compiled for multiple target architectures, only one of which generates the warning/error - might this imply that it is legitimately a compiler version-specific deficiency?
// main.c
#include <stdlib.h>

typedef struct Foo
{
    int i;
} Foo;

void freeFunc( void** obj )
{
    if ( obj && *obj )
    {
        free( *obj );
        *obj = NULL;
    }
}

int main( int argc, char* argv[] )
{
    Foo* f = calloc( 1, sizeof( Foo ) );
    freeFunc( (void**)(&f) );
    return 0;
}
If my understanding, stated above, is correct, then since a void** is still just a pointer, this should be a safe cast.
Is there a workaround, not using temporary lvalues, that would pacify this compiler-specific warning/error? I.e. I understand that (and why) the following resolves the issue, but I would like to avoid this approach because I want to take advantage of freeFunc() NULLing an intended out-arg:
void* tmp = f;
freeFunc( &tmp );
f = NULL;
Problem compiler (one of one):
user@8d63f499ed92:/build$ /usr/local/crosstool/x86-fc3/bin/i686-fc3-linux-gnu-gcc --version && /usr/local/crosstool/x86-fc3/bin/i686-fc3-linux-gnu-gcc -Wall -O2 -Werror ./main.c
i686-fc3-linux-gnu-gcc (GCC) 3.4.5
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
./main.c: In function `main':
./main.c:21: warning: dereferencing type-punned pointer will break strict-aliasing rules
user@8d63f499ed92:/build$
Not-complaining compiler (one of many):
user@8d63f499ed92:/build$ /usr/local/crosstool/x86-rh73/bin/i686-rh73-linux-gnu-gcc --version && /usr/local/crosstool/x86-rh73/bin/i686-rh73-linux-gnu-gcc -Wall -O2 -Werror ./main.c
i686-rh73-linux-gnu-gcc (GCC) 3.2.3
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
user@8d63f499ed92:/build$
Update: I've further discovered that the warning appears to be generated specifically when compiling with -O2 (still with the noted "problem compiler" only).
A value of type void** is a pointer to an object of type void*. An object of type Foo* is not an object of type void*.
There is an implicit conversion between values of type Foo* and void*. This conversion may change the representation of the value. Similarly, you can write int n = 3; double x = n; and this has the well-defined behavior of setting x to the value 3.0, but double *p = (double*)&n; has undefined behavior (and in practice will not set p to a “pointer to 3.0” on any common architecture).
Architectures where different types of pointers to objects have different representations are rare nowadays, but they are permitted by the C standard. There are (rare) old machines with word pointers which are addresses of a word in memory and byte pointers which are addresses of a word together with a byte offset in this word; Foo* would be a word pointer and void* would be a byte pointer on such architectures. There are (rare) machines with fat pointers which contain information not only about the address of the object, but also about its type, its size and its access control lists; a pointer to a definite type might have a different representation from a void* which needs additional type information at runtime.
Such machines are rare, but permitted by the C standard. And some C compilers take advantage of the permission to treat type-punned pointers as distinct to optimize code. The risk of pointers aliasing is a major limitation to a compiler's ability to optimize code, so compilers tend to take advantage of such permissions.
A compiler is free to tell you that you're doing something wrong, or to quietly do what you didn't want, or to quietly do what you wanted. Undefined behavior allows any of these.
You can make freeFunc a macro:
#define FREE_SINGLE_REFERENCE(p) (free(p), (p) = NULL)
This comes with the usual limitations of macros: lack of type safety, p is evaluated twice. Note that this only gives you the safety of not leaving dangling pointers around if p was the single pointer to the freed object.
A void * is treated specially by the C standard in part because it references an incomplete type. This treatment does not extend to void ** as it does point to a complete type, specifically void *.
The strict aliasing rules say you can't convert a pointer of one type to a pointer of another type and subsequently dereference that pointer because doing so means reinterpreting the bytes of one type as another. The only exception is when converting to a character type which allows you to read the representation of an object.
You can get around this limitation by using a function-like macro instead of a function:
#define freeFunc(obj) (free(obj), (obj) = NULL)
Which you can call like this:
freeFunc(f);
This does have a limitation however, because the above macro will evaluate obj twice. If you're using GCC, this can be avoided with some extensions, specifically the typeof keyword and statement expressions:
#define freeFunc(obj) ({ typeof (&(obj)) ptr = &(obj); free(*ptr); *ptr = NULL; })
Dereferencing a type punned pointer is UB and you can't count on what will happen.
Different compilers generate different warnings, and for this purpose different versions of the same compiler can be considered as different compilers. This seems a better explanation for the variance you see than a dependence on the architecture.
A case which may help you understand why type punning can be bad here is that your function won't work on an architecture for which sizeof(Foo*) != sizeof(void*). That is authorized by the standard, although I don't know of any current architecture for which it is true.
A workaround would be to use a macro instead of a function.
Note that free accepts null pointers.
This code is invalid per the C Standard, so it might work in some cases, but is not necessarily portable.
The "strict aliasing rule" for accessing a value via a pointer that has been cast to a different pointer type is found in 6.5 paragraph 7:
An object shall have its stored value accessed only by an lvalue expression that has one of the following types:
a type compatible with the effective type of the object,
a qualified version of a type compatible with the effective type of the object,
a type that is the signed or unsigned type corresponding to the effective type of the object,
a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object,
an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
a character type.
In your *obj = NULL; statement, the object has effective type Foo* but is accessed by the lvalue expression *obj with type void*.
In 6.7.5.1 paragraph 2, we have
For two pointer types to be compatible, both shall be identically qualified and both shall be pointers to compatible types.
So void* and Foo* are not compatible types or compatible types with qualifiers added, and certainly don't fit any of the other options of the strict aliasing rule.
Although not the technical reason the code is invalid, it's also relevant to note section 6.2.5 paragraph 26:
A pointer to void shall have the same representation and alignment requirements as a pointer to a character type. Similarly, pointers to qualified or unqualified versions of compatible types shall have the same representation and alignment requirements. All pointers to structure types shall have the same representation and alignment requirements as each other. All pointers to union types shall have the same representation and alignment requirements as each other. Pointers to other types need not have the same representation or alignment requirements.
As for the differences in warnings, this is not a case where the Standard requires a diagnostic message, so it's just a matter of how good the compiler or its version is at noticing potential issues and pointing them out in a helpful way. You noticed optimization settings can make a difference. This is often because more information is internally generated about how various pieces of the program actually fit together in practice, and that extra information is therefore also available for warning checks.
On top of what the other answers have said, this is a classic anti-pattern in C, and one which should be burned with fire. It appears in:
1. Free-and-null-out functions like the one you've found the warning in.
2. Allocation functions that shun the standard C idiom of returning void * (which doesn't suffer from this issue because it involves a value conversion instead of type punning), instead returning an error flag and storing the result via a pointer-to-pointer.
For another example of (1), there was a longstanding infamous case in ffmpeg/libavcodec's av_free function. I believe it was eventually fixed with a macro or some other trick, but I'm not sure.
For (2), both cudaMalloc and posix_memalign are examples.
In neither case does the interface inherently require invalid usage, but it strongly encourages it, and admits correct usage only with an extra temporary object of type void * that defeats the purpose of the free-and-null-out functionality, and makes allocation awkward.
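To make the "extra temporary object of type void *" concrete, here is what correct usage of a case-(2) interface looks like; posix_memalign serves as the example (assuming a POSIX system), and the wrapper function is my own sketch:
#include <stdlib.h>

typedef struct Foo { int i; } Foo;

Foo* allocFooAligned( void )
{
    void* tmp = NULL;  /* temporary with the exact type the interface stores through */
    if ( posix_memalign( &tmp, 64, sizeof( Foo ) ) != 0 )
        return NULL;
    return tmp;        /* value conversion from void*, no type punning */
}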
Although C was designed for machines which use the same representation for all pointers, the authors of the Standard wanted to make the language usable on machines that use different representations for pointers to different types of objects. Therefore, they did not require that machines which use different pointer representations for different kinds of pointers support a "pointer to any kind of pointer" type, even though many machines could do so at zero cost.
Before the Standard was written, implementations for platforms that used the same representation for all pointer types would unanimously allow a void** to be used, at least with suitable casting, as a "pointer to any pointer". The authors of the Standard almost certainly recognized that this would be useful on platforms that supported it, but since it couldn't be universally supported they declined to mandate it. Instead, they expected that quality implementations would process such constructs as what the Rationale would describe as a "popular extension", in cases where doing so would make sense.

Assigning pointers to atomic type to pointers to non atomic type

Is the behavior of this code well-defined?
#include <stdio.h>
#include <stdatomic.h>

const int test = 42;
const int * _Atomic atomic_int_ptr;

int main(void) {
    atomic_init(&atomic_int_ptr, &test);
    const int ** int_ptr_ptr = &atomic_int_ptr;
    printf("int = %d\n", **int_ptr_ptr); //prints int = 42
}
I assigned a pointer to atomic type to a pointer to non-atomic type (the types are the same). Here are my thoughts on this example:
The Standard explicitly specifies the distinction between the const, volatile and restrict qualifiers and the _Atomic qualifier in 6.2.5(p27):
this Standard explicitly uses the phrase ‘‘atomic, qualified or
unqualified type’’ whenever the atomic version of a type is permitted
along with the other qualified versions of a type. The phrase
‘‘qualified or unqualified type’’, without specific mention of atomic,
does not include the atomic types.
Also the compatibility of qualified types is defined as 6.7.3(p10):
For two qualified types to be compatible, both shall have the
identically qualified version of a compatible type; the order of
type qualifiers within a list of specifiers or qualifiers does
not affect the specified type.
Combining the quotes cited above, I concluded that atomic and non-atomic types are compatible types. So, applying the rule for simple assignment, 6.5.16.1(p1) (emp. mine):
the left operand has atomic, qualified, or unqualified pointer
type, and (considering the type the left operand would have
after lvalue conversion) both operands are pointers to qualified
or unqualified versions of compatible types, and the type pointed to by
the left has all the qualifiers of the type pointed to by the right;
So I concluded that the behavior is well defined (even in spite of assigning an atomic type to a non-atomic type).
The problem with all that is that, applying the rules above, we can also conclude that simple assignment of a non-atomic type to an atomic type is well defined, which is obviously not true since we have a dedicated generic atomic_store function for that.
6.2.5p27:
Further, there is the _Atomic qualifier. The presence of the _Atomic
qualifier designates an atomic type. The size, representation, and
alignment of an atomic type need not be the same as those of the
corresponding unqualified type. Therefore, this Standard explicitly
uses the phrase ''atomic, qualified or unqualified type'' whenever the
atomic version of a type is permitted along with the other qualified
versions of a type. The phrase ''qualified or unqualified type'',
without specific mention of atomic, does not include the atomic types.
I think this should make it clear that atomic-qualified types are not deemed compatible with qualified or unqualified versions of the types they're based on.
C11 allows _Atomic T to have a different size and layout than T, e.g. if it's not lock-free. (See @PSkocik's answer.)
For example, the implementation could choose to put a mutex inside each atomic object, and put it first. (Most implementations instead use the address as an index into a table of locks: Where is the lock for a std::atomic? instead of bloating each instance of an _Atomic or std::atomic<T> object that isn't guaranteed lock-free at compile time).
Therefore _Atomic T* is not compatible with T* even in a single-threaded program.
Merely assigning a pointer might not be UB (sorry I didn't put on my language lawyer hat), but dereferencing certainly can be.
I'm not sure if it's strictly UB on implementations where _Atomic T and T do share the same layout and alignment. Probably it violates strict aliasing, if _Atomic T and T are considered different types regardless of whether or not they share the same layout.
alignof(T) might be different from alignof(_Atomic T), but other than an intentionally perverse implementation (Deathstation 9000), _Atomic T will be at least as aligned as plain T, so that's not an issue for casting pointers to objects that already exist. An object that's more aligned than it needs to be is not a problem, just a possible missed-optimization if it stops a compiler from using a single wider load.
Fun fact: creating an under-aligned pointer is UB in ISO C, even without dereference. (Most implementations don't complain, and Intel's _mm_loadu_si128 intrinsic even requires compilers to support doing so.)
In practice on real implementations, _Atomic T* and T* use the same layout / object representation and alignof(_Atomic T) >= alignof(T). A single-threaded or mutex-guarded part of a program could do non-atomic access to an _Atomic object, if you can work around the strict-aliasing UB. Maybe with memcpy.
On real implementations, _Atomic may increase the alignment requirement, e.g. a struct {int a,b;} on most ABIs for most 64-bit ISAs would typically only have 4-byte alignment (max of the members), but _Atomic would give it natural alignment = 8 to allow loading/storing it with a single aligned 64-bit load/store. This of course doesn't change the layout or alignment of the members relative to the start of the object, just the alignment of the object as a whole.
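A quick way to see that alignment effect (my own sketch; the numbers are ABI-dependent, but a typical 64-bit ABI gives 4 for the plain struct and 8 for the _Atomic version):
#include <stdalign.h>
#include <stdio.h>

struct pair { int a, b; };

int main(void) {
    /* alignment of the whole object, with and without _Atomic */
    printf("%zu %zu\n", alignof(struct pair), alignof(_Atomic struct pair));
}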
The problem with all that is that, applying the rules above, we can also conclude that simple assignment of a non-atomic type to an atomic type is well defined, which is obviously not true since we have a dedicated generic atomic_store function for that.
No, that reasoning is flawed.
atomic_store(&my_atomic, 1) is equivalent to my_atomic=1;. In the C abstract machine, they both do an atomic store with memory_order_seq_cst.
You can also see this from looking at the code-gen for real compilers on any ISA; e.g. x86 compilers will use an xchg instruction, or mov+mfence. Similarly, shared_var++ compiles to an atomic RMW (with mo_seq_cst).
IDK why there's an atomic_store generic function. Maybe just for contrast / consistency with atomic_store_explicit, which lets you do atomic_store_explicit(&shared_var, 1, memory_order_release) or memory_order_relaxed to do a release or relaxed store instead of a sequentially consistent one. (On x86, just a plain store. Or on weakly-ordered ISAs, some fencing, but not a full barrier.)
For types that are lock-free, where the object representation of _Atomic T and T are identical, there's no problem in practice accessing an atomic object through a non-atomic pointer in a single-threaded program. I suspect it's still UB, though.
C++20 is planning to introduce std::atomic_ref<T> which will let you do atomic operations on a non-atomic variable. (With no UB as long as no threads are potentially doing non-atomic access to it during the time window of being written.) This is basically a wrapper around the __atomic_* builtins in GCC for example, that std::atomic<T> is implemented on top of.
(This presents some problems, like if atomic<T> needs more alignment than T, e.g. for long long or double on i386 System V. Or a struct of 2x int on most 64-bit ISAs. You should use alignas(_Atomic T) T foo when declaring non-atomic objects you want to be able to do atomic operations on.)
Anyway, I'm not aware of any standards-compliant way to do similar things in portable ISO C11, but it's worth mentioning that real C compilers very much do support doing atomic operations on objects declared without _Atomic, but only using things like the GNU C atomic builtins:
See Casting pointers to _Atomic pointers and _Atomic sizes : apparently casting a T* to _Atomic T* is not recommended even in GNU C. Although we don't have a definitive answer that it's actually UB.
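For illustration, this is the sort of thing those builtins allow (a sketch; GCC/Clang extension, not ISO C, and the identifiers are mine):
/* Atomic operations on an object declared without _Atomic. */
int shared;  /* plain int */

void publish(int v) {
    __atomic_store_n(&shared, v, __ATOMIC_RELEASE);
}

int read_acquire(void) {
    return __atomic_load_n(&shared, __ATOMIC_ACQUIRE);
}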

Idea behind "[...] makes pointer from integer without a cast"

I always wondered why warnings like "passing argument 1 of 'foo' makes pointer from integer without a cast" and the like are only warnings and not errors.
Actually, these warnings almost always indicate real errors.
Does somebody know what's the idea behind this?
Is it mostly to allow prehistoric code to be compiled without errors?
Or just to comply with the standard? Then the latter maybe needs some fixing.
Example:
int foo(int *bar)
{
    *bar = 42;
}

void bar()
{
    int n = 0;
    foo(n); // this is obviously an error
    ...
}
Per 6.5.2.2 Function Calls, ¶ 7:
If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the corresponding parameters, taking the type of each parameter to be the unqualified version of its declared type
The relevant text in 6.5.16.1 Simple Assignment is:
Constraints
One of the following shall hold:
the left operand has atomic, qualified, or unqualified arithmetic type, and the right has arithmetic type;
the left operand has an atomic, qualified, or unqualified version of a structure or union type compatible with the type of the right;
the left operand has atomic, qualified, or unqualified pointer type, and (considering the type the left operand would have after lvalue conversion) both operands are pointers to qualified or unqualified versions of compatible types, and the type pointed to by the left has all the qualifiers of the type pointed to by the right;
the left operand has atomic, qualified, or unqualified pointer type, and (considering the type the left operand would have after lvalue conversion) one operand is a pointer to an object type, and the other is a pointer to a qualified or unqualified version of void, and the type pointed to by the left has all the qualifiers of the type pointed to by the right;
the left operand is an atomic, qualified, or unqualified pointer, and the right is a null pointer constant; or
the left operand has type atomic, qualified, or unqualified _Bool, and the right is a pointer.
None of these allow the left operand as a pointer and the right operand as an integer. Thus, such an assignment (and by the first quoted text above, the function call) is a constraint violation. This means the compiler is required by the standard to "diagnose" it. However it's up to the compiler what it does beyond that. Yes, an error would be highly preferable, but just printing a warning is a low-quality way to satisfy the requirement to "diagnose" constraint violations like this.
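For completeness, the call that does satisfy the constraint, using the question's foo (my sketch):
void bar(void)
{
    int n = 0;
    foo(&n);  /* int * argument matches the int * parameter, so the constraint is satisfied */
}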
Does somebody know what's the idea behind this?
Is it mostly to allow prehistoric code to be compiled without errors?
Or just to comply with the standard? Then the latter maybe needs some fixing.
It is to comply with the standard in the sense that the standard requires conforming implementations to diagnose such issues, as @R.. describes in his answer. Implementations are not required to reject programs on account of such issues, however. As for why some compilers instead accept such programs, that would need to be evaluated on a per-implementation basis, but this quotation from the first edition of K&R may shed a bit of light:
5.6 Pointers are not Integers
You may notice in older C programs a rather cavalier attitude toward
copying pointers. It has generally been true that on most machines a
pointer may be assigned to an integer and back again; no scaling or
conversion takes place, and no bits are lost. Regrettably, this has
led to the taking of liberties with routines that return pointers
which are then merely passed to other routines -- the requisite
pointer declarations are often left out.
(Kernighan & Ritchie, The C Programming Language, 1st ed., 1978)
Notice in the first place that this long predates even C89. I'm a bit amused today that the authors were then talking about "older" C programs. But note too that even at that time, the C language as defined by K&R did not formally permit implicit conversion between pointers and integers (though it did permit casting between them).
Nevertheless, there were programs that relied on implicit conversion anyway, apparently because it happened to work on the targeted implementations. It was attractive, by some people's standards at the time, in conjunction with primordial C's implicit typing rules. One could let a variable or function intended to return or store a pointer default to type int by omitting its declaration altogether, and as long as it was interpreted as a pointer wherever it ultimately was used, everything usually happened to work as intended.
I'm inclined to guess that everything continuing to work as intended, thereby supporting backwards compatibility, was a consideration for compiler developers in continuing to accept implicit conversions, so that is "allow[ing] prehistoric code to be compiled." I note, however, that code with implicit conversions of this kind is much less likely to work as intended nowadays than it used to be, for many machines today have 64-bit pointers but only 32-bit ints.
Assigning an arithmetic type to a pointer is not well formed according to the C standard. (See the answer provided by R.. for the relevant sections.)
Your compiler (or the settings you're using) has decided to treat that as a warning.
Compilers have default settings and often support language extensions and those may be quite liberal.
Note that for anything outside the language specification it's up to the implementers of the compiler to decide what counts as an error, or whether they're going to interpret it as a language extension and (hopefully) issue a warning that the code is off the official piste.
I agree that's not ideal. My recommendation would be to treat it as an error, because it almost certainly is one, and casting the int to a pointer is the standard-supported way of being explicit and getting the same result (e.g. (int *)n).
I think you're using GCC and it's notorious for "helpfully" compiling things that it could better serve you by rejecting and making you use standard constructs.
Enable all warnings (-Wall on the gcc command-line) and make sure you understand and address them all appropriately.
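For example, a sketch of a command line that promotes just these diagnostics to errors (using GCC's -Werror=<name> mechanism and the warning names mentioned earlier in this thread):
gcc -std=c11 -Wall -Wextra -Werror=int-conversion -Werror=incompatible-pointer-types main.c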

can gcc warn on assignment of void pointer to more specific pointer?

How can I make gcc warn when a void* is assigned or passed as a parameter to a type that is a more specific kind of pointer, like my_struct* without a cast? I would like to make sure all casting is explicit.
Update: The scope of this question has been extended to non-gcc linters as well.
Update2: Minuses everywhere? I'm flummoxed by the amount of controversy that a simple, purely technical question can generate.
How can I make gcc warn when a void* is assigned or passed as a
parameter to a type that is a more specific kind of pointer
Being able to assign a void * to a more-specific type without a cast is a required part of the C programming language. Per Paragraph 1 of 6.3.2.3 Pointers of the C Standard:
A pointer to void may be converted to or from a pointer to any
object type. A pointer to any object type may be converted to a
pointer to void and back again; the result shall compare equal
to the original pointer.
You're asking to be warned about a required part of C. It's not far removed from asking for a warning when 5 is assigned to an int.
As noted by @MarcGlisse, though, GCC does provide the -Wc++-compat warning option:
-Wc++-compat (C and Objective-C only)
Warn about ISO C constructs that are outside of the common subset of ISO C and ISO C++, e.g. request for implicit conversion from void * to a pointer to non-void type.
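A minimal program that triggers this particular warning (my own example): malloc returns void *, and the implicit conversion to int * is exactly the construct described above.
#include <stdlib.h>

int main(void) {
    int* p = malloc(sizeof *p);  /* valid C, but -Wc++-compat warns: C++ would require a cast */
    free(p);
    return 0;
}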
As noted by @MarcGlisse, gcc provides the -Wc++-compat warning option. Among other non-C++-compatible constructs, it warns about silent conversion of void*.
@AndrewHenle linked to an answer to another question which indicates that requiring an explicit cast has the downside of making an incompatible conversion more likely, for example if the programmer accidentally casts a numeric value.
I think that's significantly less of a concern, since with an explicit cast the programmer certifies that they know what they're doing. Nevertheless, even that small drawback can be addressed by using the following macro in conjunction with -Wc++-compat:
#define VOID_CAST(T, x) ({ __typeof__(x) void_cast_x __attribute__((__unused__)) = ((__typeof__(x))((void*)(x))); ((T*)(x)); })
With luck, the "useless" assignment will be optimized out and the benefit of using VOID_CAST is that it will generate an error or warning if x is not a void* to begin with.
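A usage sketch for the macro (my own example; struct my_struct and the allocation are made up for illustration):
#include <stdlib.h>

struct my_struct { int x; };  /* hypothetical type */

void demo(void)
{
    void* raw = malloc(sizeof(struct my_struct));
    struct my_struct* s = VOID_CAST(struct my_struct, raw);  /* expands to an explicit cast */
    free(s);
}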
