I am currently working on a C project that needs to be fairly portable among different building environments. The project targets POSIX-compliant systems on a hosted C environment.
One way to achieve a good degree of portability is to code under conformance to a chosen standard, but it is difficult to determine whether a given translation unit is strict-conformant to ISO C. For example, it might violate some translation limits, or it might be relying on an undefined behavior, without any diagnostic message from the compilation environment. I am not even sure whether it is possible to check for strict conformance of large projects.
With that in mind, is there any compiler, tool or method to test for strict ISO C conformance under a given standard (for example, C89 or C99) of a translation unit?
Any help is appreciated.
It is not possible in general to find undefined run-time behavior. For example, consider
void foo(int *p, int *q)
{
    *p = (*q)++;
    ...
which is undefined if p == q. Whether that can happen can't be determined ahead of time without solving the halting problem.
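A minimal sketch (the harness names are mine) of why this is a run-time property: the body of foo is fixed, but whether it is defined depends entirely on its callers, which may live in other translation units:

```c
/* Same function as above: undefined when p and q alias. */
void foo(int *p, int *q)
{
    *p = (*q)++;   /* unsequenced write to *p and modification of *q */
}

void callers(void)
{
    int a = 0, b = 41;
    foo(&a, &b);   /* well-defined: a becomes 41, b becomes 42 */
    /* foo(&a, &a) would be undefined behavior; deciding whether such a
       call is ever reached is, in general, the halting problem. */
}
```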
(Edited to fix mistake caf pointed out. Thanks, caf.)
Not really. The C standard doesn't set any absolute minimum limits on translation units that must be accepted. As such, a perfectly accurate checker would be trivial to write, but utterly useless in practice:
#include <stdio.h>

int main(int argc, char **argv) {
    int i;
    for (i = 1; i < argc; i++)
        fprintf(stderr, "`%s`: Translation limit (potentially) exceeded.\n", argv[i]);
    return 0;
}
Yes, this rejects everything, no matter how trivial. That is in accordance with the standard. As I said, it's utterly useless in practice. Unfortunately, you can't really do a whole lot better -- when you decide to port to a different implementation, you could run into some oddball resource limit you've never seen before, so any code you write (up to and including "hello world") could potentially exceed a resource limit despite being allowed by dozens or even hundreds of compilers on/for much smaller systems.
Edit:
Why a "hello world" program isn't strictly conforming
First, it's worth re-stating the definition of "strictly conforming": "A strictly conforming program shall use only those features of the language and library specified in this International Standard. It shall not produce output dependent on any unspecified, undefined, or implementation-defined behavior, and shall not exceed any minimum implementation limit."
There are actually a number of reasons "Hello, World" isn't strictly conforming. First, as implied above, the minimum requirements for implementation limits are completely meaningless -- although there has to be some program that meets certain limits that will be accepted, no other program has to be accepted, even if it doesn't even come close to any of those limits. Given the way the requirement is stated, it's open to question (at best) whether there is any such thing as a program that doesn't exceed any minimum implementation limit, because the standard doesn't really define any minimum implementation limits.
Second, during phase 1 of translation: "Physical source file multibyte characters are mapped, in an implementation defined manner, to the source character set ... " (§5.1.1.2/1). Since "Hello, World!" (or whatever variant you prefer) is supplied as a string literal in the source file, it can be (is) mapped in an implementation-defined manner to the source character set. An implementation is free to decide that (for an idiotic example) string literals will be ROT13 encoded, and as long as that fact is properly documented, it's perfectly legitimate.
Third, the output is normally written via stdout. stdout is a text stream. According to the standard: "Characters may have to be added, altered, or deleted on input and output to conform to differing conventions for representing text in the host environment. Thus, there need not be a one-to-one correspondence between the characters in a stream and those in the external representation." (§7.19.2/2) As such, an implementation could (for example) do Huffman compression on the output (on Monday, Wednesday, or Friday).
So, we have (at least) three distinct points at which the output from a "Hello, World!" depends on implementation-defined characteristics -- any one of which would prevent it from fitting the definition of a strictly conforming program.
gcc has warning levels that will attempt to pin down various aspects of ANSI conformance. But that's only a starting point.
You might start with gcc -std=c99, or gcc -ansi -pedantic.
Good luck with that. Try to avoid signed integers, because:

int f(int x)
{
    return -x;
}

can invoke undefined behavior when x == INT_MIN, since -INT_MIN is not representable as an int.
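A defensive variant (safe_negate is my name, not a standard function) that sketches how to avoid the one undefined case:

```c
#include <limits.h>

/* Negation of INT_MIN is undefined: on two's-complement systems,
   -INT_MIN exceeds INT_MAX. Report failure instead of invoking UB. */
int safe_negate(int x, int *out)
{
    if (x == INT_MIN)
        return 0;   /* would overflow */
    *out = -x;
    return 1;
}
```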
I can understand that:
One of the origins of the UB is a performance increase (e.g. by removing never-executed code, such as if (i+1 < i) { /* never_executed_code */ }; UPD: assuming i is a signed integer).
UB can be triggered at compile time because C does not clearly distinguish between compile time and run time. The whole language is based on the (rather unhelpful) concept of an "abstract machine" (link).
However, I cannot understand yet why the C preprocessor is subject to undefined behavior. It is known that preprocessing directives are executed at translation time.
Consider C11, 6.10.3.3 The ## operator, 3:
If the result is not a valid preprocessing token, the behavior is undefined.
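As a concrete illustration (the macro name PASTE is mine), a paste that forms a single valid token is fine, while one that does not is undefined:

```c
#define PASTE(a, b) a##b

int ab12 = 7;               /* PASTE(ab, 12) forms the valid token "ab12" */
int *ok = &PASTE(ab, 12);   /* well-defined: expands to &ab12 */

/* PASTE(x, +) would form "x+", which is not a single preprocessing
   token, so the behavior is undefined: a conforming implementation may
   reject it, quietly re-tokenize it, or do anything else. */
```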
Why not make it a constraint? For example:
The result shall be a valid preprocessing token.
The same question goes for all the other "the behavior is undefined" in 6.10 Preprocessing directives.
Why is the C preprocessor subject to undefined behavior?
When the C standard was created, there were some existing C preprocessors and there was some imaginary ideal C preprocessor in the minds of standardization committee members.
So there were these gray areas, where committee members weren't completely sure what they would want to do, and/or where existing C preprocessor implementations differed from each other in behavior.

So these cases were left as undefined behavior: the committee members were not completely sure what the behavior actually should be, so the standard places no requirement on what it should be.
One of the origins of the UB
Yes, one of.
UB may exist to ease implementation of the language. For example, in the case of the preprocessor, the preprocessor writers don't have to care about what happens when ## produces an invalid preprocessing token.
Or UB may exist to reconcile existing implementations with different behaviors or as a point for extensions. So a preprocessor that segfaults in case of UB, a preprocessor that accepts and works in case of UB, and a preprocessor that formats your hard drive in case of UB, all can be standard conformant (but I wouldn't want to work on that one that formats your drive).
Suppose a file which is read in via include directive ends with the partial line:
#define foo bar
Depending upon the design of the preprocessor, it's possible that the partial token bar might be concatenated to whatever appears at the start of the line following the #include directive, or that whatever appears on that line will behave as though it were placed on the line with the #define directive, but with a whitespace separating it from the token bar, and it would hardly be inconceivable that a build script might rely upon such behaviors. It's also possible that implementations might behave as though a newline were inserted at the end of the included file, or might ignore the last partial line of such a file.
Any code which relied upon one of the former behaviors would clearly have been non-portable, but if code exploited such behavior to do something that would otherwise not be practical, such code would hardly be "erroneous", and the authors of the Standard would not have wanted to forbid an implementation that would process it usefully from continuing to do so.
When the Standard uses the phrase "non-portable or erroneous", that does not mean "non-portable, therefore erroneous". Prior to the publication of C89, C implementations defined many useful constructs, but none of them were defined by "the C Standard" since there wasn't one. If an implementation defined the behavior of some construct, some didn't, and the Standard left the construct as "Undefined", that would simply preserve the status quo where implementations that chose to define a useful behavior would do so, those that chose not to wouldn't, and programs that relied upon such behaviors would be "non-portable", working correctly on implementations that supported the behaviors, but not on those that didn't.
Without getting into specifics, my guess is, there exist several preprocessor implementations which have bugs, but the Standard doesn't want to declare them non-conforming, for compatibility reasons.
In human language: if you write a program which has X in it, preprocessor does weird stuff.
In standardese: the behavior of program with X is undefined.
If the standard says something like "The result shall be a valid preprocessing token", it might be unclear what "shall" means in this context.
The programmer shall write the program so this condition holds? If so, the wording with "undefined behavior" is clearer and more uniform (it appears in other places too)
The preprocessor shall make sure this condition holds? If so, this requires dedicated logic which checks the condition; may be impractical to implement.
ISO/IEC 9899:202x (E) working draft — December 11, 2020 N2596, footnote 9:
... an implementation is free to produce any number of diagnostic messages, often referred to as warnings, as long as a valid program is still correctly translated. It can also successfully translate an invalid program.
Searching for a definition of "valid / invalid program" across the standard gives no results. In fact, footnote 9 is the only place where "valid / invalid program" is mentioned.
Note: yes, this is only a note, and:
In ISO standards, notes are without exception non-normative.
Source: https://www.iso.org/schema/isosts/v1.0/doc/n-6ew0.html.
However, people do frequently use the term "valid / invalid program".
Can someone please help to suggest / deduce the definition (relative to the standard) of the term "valid program"?
The question may look silly at first glance. However, there are cases when people have different understandings of the term "valid program"; hence, misinterpretations occur.
My guess: valid program -- a program which does not violate any syntax rule or constraint.
Note: "semantics rule" is intentionally not included in this definition because per Rice's theorem "non-trivial semantic properties of programs are undecidable".
Is such a definition appropriate? If not, then what is the appropriate definition?
At least in older versions of the Standard, a Conforming C Program is any source text which is accepted by at least one Conforming C Implementation somewhere in the universe. Given that conforming implementations are allowed to extend the language to accept almost any arbitrary source text, including programs that contain constraint violations, provided that they only accept the latter after having issued at least one diagnostic, the question of whether any particular source text is a Conforming C Program is determined by the existence or non-existence of implementations that accept it, rather than by any trait of the source text itself.
Your assumption that a valid program may not violate any constraints is correct. And so is your assumption that correctness is, in general, impossible to prove via static analysis; it can only be attested for a specific execution.
It's the definition of the "invalid program" which is fuzzy. A program can still be valid for a limited set of inputs, so you can't label the program invalid entirely. Only programs which are invalid for every possible input are invalid as a whole. Likewise, only a program which is valid for every possible input is "truly valid". In reality, there is hardly any non-trivial program which would not have edge cases where it's still invalid.
To sum that up into a formal definition:
A program is valid if there is at least a single possible input for which no constraints are violated.
A program is invalid only if it violates constraints for all possible inputs.
And please don't confuse valid/invalid with correct/incorrect. The criterion for the latter is correctness for all possible inputs.
Follow-up question for: If "shall / shall not" requirement is violated, then does it matter in which section (e.g. Semantics, Constraints) such requirement is located?.
ISO/IEC 9899:202x (E) working draft — December 11, 2020 N2596, 5.1.1.3 Diagnostics, 1:
A conforming implementation shall produce at least one diagnostic message (identified in an implementation-defined manner) if a preprocessing translation unit or translation unit contains a violation of any syntax rule or constraint, even if the behavior is also explicitly specified as undefined or implementation-defined. Diagnostic messages need not be produced in other circumstances.
Consequence: semantics violation does not require diagnostics.
Question: what is the (possible) rationale for "semantics violation does not require diagnostics"?
A possible rationale is given by Rice's theorem: non-trivial semantic properties of programs are undecidable.
For example, division by zero is a semantics violation; and you cannot decide, by static analysis alone of the C source code, that it won't happen...
A standard cannot require total detection of such undefined behavior, even if of course some tools (e.g. Frama-C) are sometimes capable of detecting them.
See also the halting problem. You should not expect a C compiler to solve it!
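A sketch of why (the helper name is mine): whether the division below is ever evaluated with d == 0 depends on run-time data, which no translation-time check can decide in general:

```c
/* Guarded division: the Standard cannot require a diagnostic for the
   potential division by zero, because reachability is a run-time fact. */
int divide_checked(int n, int d)
{
    if (d == 0)
        return 0;    /* n / 0 would be undefined behavior */
    return n / d;    /* reached only when d != 0, so well-defined */
}
```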
The C99 rationale v5.10 gives this explanation:
5.1.1.3 Diagnostics
By mandating some form of diagnostic message for any program containing a syntax error or constraint violation, the Standard performs two important services. First, it gives teeth to the concept of erroneous program, since a conforming implementation must distinguish such a program from a valid one. Second, it severely constrains the nature of extensions permissible to a conforming implementation.

The Standard says nothing about the nature of the diagnostic message, which could simply be "syntax error", with no hint of where the error occurs. (An implementation must, of course, describe what translator output constitutes a diagnostic message, so that the user can recognize it as such.) The C89 Committee ultimately decided that any diagnostic activity beyond this level is an issue of quality of implementation, and that market forces would encourage more useful diagnostics. Nevertheless, the C89 Committee felt that at least some significant class of errors must be diagnosed, and the class specified should be recognizable by all translators.
This happens because the grammar of the C language is context-sensitive, and for all languages defined with context-free or more complex grammars on the Chomsky hierarchy, one must make a tradeoff between the semantics of the language and its power.

C's designers chose to allow much power for the language, and this is why the problem of undecidability is omnipresent in C.

There are languages like Coq that try to cut out the undecidable situations by restricting the semantics of recursive functions (they allow only primitive recursion).
The question of whether an implementation provides any useful diagnostics in any particular situation is a Quality of Implementation issue outside the Standard's jurisdiction. If an implementation were to unconditionally output "Warning: this program does not output any useful diagnostics" or even "Warning: water is wet", such output would fully satisfy all of the Standard's requirements with regard to diagnostics even if the implementation didn't output any other diagnostics.
Further, the authors of the Standard characterized as "Undefined Behavior" many actions which they expected would be processed in a meaningful and useful fashion by many if not most implementations. According to the published Rationale document, Undefined Behavior among other things "identifies areas of conforming language extension", since implementations are allowed to specify how they will behave in cases that are not defined by the Standard.
Having implementations issue warnings about constructs which were non-portable, but which they would process in a useful fashion would have been annoying.
Prior to the Standard, some implementations would usefully accept constructs like:
struct foo {
    int *p;
    char pad[4 - sizeof(int *)];
    int q, r;
};
for all sizes of pointer up to four bytes (8-byte pointers weren't a thing back then), rather than squawking if pointers were exactly four bytes, but some people on the Committee were opposed to the idea of accepting declarations for zero-sized arrays. Thus, a compromise was reached where compilers would squawk about such things, programmers would ignore the useless warnings, and the useful constructs would remain usable on implementations that supported them.
While there was a vague attempt to distinguish between constructs that should produce warnings that programmers could ignore, versus constructs that might be used so much that warnings would be annoying, the fact that issuance of useful diagnostics was a Quality of Implementation issue outside the Standard's jurisdiction meant there was no real need to worry too much about such distinctions.
Does the following code have defined behavior:
#include <stdio.h>

int main() {
    int x;
    if (scanf("%x", &x) == 1) {
        printf("decimal: %d\n", x);
    }
    return 0;
}
clang compiles it without any warnings even with all warnings enabled, including -pedantic. The C Standard seems unambiguous about this:
C17 7.21.6.2 The fscanf function
...
... the result of the conversion is placed in the object pointed to by the first argument following the format argument that has not already received a conversion result. If this object does not have an appropriate type, or if the result of the conversion cannot be represented in the object, the behavior is undefined.
...
The conversion specifiers and their meanings are:
...
x Matches an optionally signed hexadecimal integer, whose format is the same as expected for the subject sequence of the strtoul function with the value 16 for the base argument. The corresponding argument shall be a pointer to unsigned integer.
On two's complement architectures, converting -1 with %x seems to work, but it would not on ancient sign/magnitude or ones' complement systems.
Is there any provision to make this behavior defined or at least implementation defined?
This falls in the category of behaviors which quality implementations should support unless they document a good reason for doing otherwise, but which the Standard does not mandate. The authors of the Standard seem to have refrained from trying to list all such behaviors, and there are at least three good reasons for that:
Doing so would have made the Standard longer, and spending ink describing obvious behaviors that readers would expect anyway would distract from the places where the Standard needed to call readers' attention to things that they might not otherwise expect.
The authors of the Standard may not have wanted to preclude the possibility that an implementation might have a good reason for doing something unusual. I don't know whether that was a consideration in your particular case, but it could have been.
Consider, for example, a (likely theoretical) environment whose calling convention requires passing information about argument types fed to variadic functions, and that supplies a scanf function that validates those argument types and squawks if int* is passed to a %X argument. The authors of the Standard were almost certainly not aware of any such environment [I doubt any ever existed], and thus would be in no position to weigh the benefits of using the environment's scanf routine versus the benefits of supporting the common behavior. Thus, it would make sense to leave such judgment up to people who would be in a better position to assess the costs and benefits of each approach.
It would be extremely difficult for the authors of the Standard to ensure that they exhaustively enumerated all such cases without missing any, and the more exhaustively they were to attempt to enumerate such cases, the more likely it would be that accidental omissions would be misconstrued as deliberate.
In practice, some compiler writers seem to regard most situations where the Standard fails to mandate the behavior of some action as an invitation to assume code will never attempt it, even if all implementations prior to the Standard had behaved consistently and it's unlikely there would ever be any good reason for an implementation to do otherwise. Consequently, using %X to read an int falls in the category of behaviors that will be reliable on implementations that make any effort to be compatible with common idioms, but could fail on implementations whose designers place a higher value on being able to process useless programs more efficiently, or on implementations that are designed to squawk when given programs that could be undermined by such implementations.
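For code that must stay within what the Standard actually guarantees, one approach (the helper name parse_hex is mine) is to read into the unsigned type that %x requires and convert explicitly afterward:

```c
#include <stdio.h>

/* The sscanf call is now well-defined; only the final conversion is
   implementation-defined when the value exceeds INT_MAX (C17 6.3.1.3). */
int parse_hex(const char *s, int *out)
{
    unsigned int u;
    if (sscanf(s, "%x", &u) != 1)
        return 0;              /* no hexadecimal integer found */
    *out = (int)u;             /* implementation-defined if u > INT_MAX */
    return 1;
}
```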
The Wikipedia article on ANSI C says:
One of the aims of the ANSI C standardization process was to produce a superset of K&R C (the first published standard), incorporating many of the unofficial features subsequently introduced. However, the standards committee also included several new features, such as function prototypes (borrowed from the C++ programming language), and a more capable preprocessor. The syntax for parameter declarations was also changed to reflect the C++ style.
That makes me think that there are differences. However, I didn't see a comparison between K&R C and ANSI C. Is there such a document? If not, what are the major differences?
EDIT: I believe the K&R book says "ANSI C" on the cover. At least I believe the version that I have at home does. So perhaps there isn't a difference anymore?
There may be some confusion here about what "K&R C" is. The term refers to the language as documented in the first edition of "The C Programming Language." Roughly speaking: the input language of the Bell Labs C compiler circa 1978.
Kernighan and Ritchie were involved in the ANSI standardization process. The "ANSI C" dialect superseded "K&R C" and subsequent editions of "The C Programming Language" adopt the ANSI conventions. "K&R C" is a "dead language," except to the extent that some compilers still accept legacy code.
Function prototypes were the most obvious change between K&R C and C89, but there were plenty of others. A lot of important work went into standardizing the C library, too. Even though the standard C library was a codification of existing practice, it codified multiple existing practices, which made it more difficult. P.J. Plauger's book, The Standard C Library, is a great reference, and also tells some of the behind-the-scenes details of why the library ended up the way it did.
The ANSI/ISO standard C is very similar to K&R C in most ways. It was intended that most existing C code should build on ANSI compilers without many changes. Crucially, though, in the pre-standard era, the semantics of the language were open to interpretation by each compiler vendor. ANSI C brought in a common description of language semantics which put all the compilers on an equal footing. It's easy to take this for granted now, some 20 years later, but this was a significant achievement.
For the most part, if you don't have a pre-standard C codebase to maintain, you should be glad you don't have to worry about it. If you do--or worse yet, if you're trying to bring an old program up to more modern standards--then you have my sympathies.
There are some minor differences, but I think later editions of K&R are for ANSI C, so there's no real difference anymore.
"C Classic", for lack of a better term, had a slightly different way of defining functions, i.e.

int f( p, q, r )
int p;
float q;
double r;
{
    /* code goes here */
}

I believe the other difference was function prototypes. Pre-ANSI declarations didn't have to - in fact they couldn't - take a list of arguments or types. In ANSI C they do.
Function prototypes.
const and volatile qualifiers.
Wide character support and internationalization.
Function pointers may be used without dereferencing.
Another difference is that function return types and parameter types did not need to be declared. They would be assumed to be int.
f(x)
{
return x + 1;
}
and
int f(x)
int x;
{
return x + 1;
}
are identical.
The major differences between ANSI C and K&R C are as follows:
function prototyping
support of the const and volatile data type qualifiers
support wide characters and internationalization
permit function pointers to be used without dereferencing
ANSI C adopts the C++ function prototype technique, where function definitions and declarations include function names, argument data types, and return value data types. Function prototypes enable an ANSI C compiler to check for function calls in user programs that pass an invalid number of arguments or incompatible argument data types. These fix a major weakness of the K&R C compiler.

Example: to declare a function foo that takes two arguments:

unsigned long foo (char* fmt, double data)
{
    /* body of foo */
}
FUNCTION PROTOTYPING: ANSI C adopts the C++ function prototype technique, where function definitions and declarations include function names, argument data types, and return value data types. Function prototypes enable ANSI C compilers to check for function calls in user programs that pass an invalid number of arguments or incompatible argument data types. These fix a major weakness of the K&R C compilers: invalid calls in user programs often passed compilation but caused programs to crash when they were executed.
The difference is:
Function prototypes
Wide character support and internationalisation
Support for the const and volatile keywords
Function pointers may be used without dereferencing
A major difference nobody has yet mentioned is that before ANSI, C was defined largely by precedent rather than specification; in cases where certain operations would have predictable consequences on some platforms but not others (e.g. using relational operators on two unrelated pointers), precedent strongly favored making platform guarantees available to the programmer. For example:
On platforms which define a natural ranking among all pointers to all objects, application of the relational operators to arbitrary pointers could be relied upon to yield that ranking.
On platforms where the natural means of testing whether one pointer is "greater than" another never has any side-effect other than yielding a true or false value, application of the relational operators to arbitrary pointers could likewise be relied upon never to have any side-effects other than yielding a true or false value.
On platforms where two or more integer types shared the same size and representation, a pointer to any such integer type could be relied upon to read or write information of any other type with the same representation.
On two's-complement platforms where integer overflows naturally wrap silently, an operation involving unsigned values smaller than "int" could be relied upon to behave as though the values were unsigned in cases where the result would be between INT_MAX+1u and UINT_MAX and was not promoted to a larger type, nor used as the left operand of >>, nor as either operand of /, %, or any comparison operator. Incidentally, the rationale for the Standard gives this as one of the reasons small unsigned types promote to signed.
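The last point is the classic trap. A sketch (the function name is mine), assuming 16-bit unsigned short and 32-bit int: without the cast, u promotes to signed int, so u * u can overflow, which is undefined in standard C even though pre-standard two's-complement compilers wrapped silently:

```c
/* Casting before multiplying keeps the arithmetic unsigned, reproducing
   the wrap-around behavior that pre-standard code took for granted. */
unsigned int square_mod32(unsigned short u)
{
    return (unsigned int)u * u;   /* wraps modulo UINT_MAX + 1 */
}
```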
Prior to C89, it was unclear to what lengths compilers for platforms where the above assumptions wouldn't naturally hold might be expected to go to uphold those assumptions anyway, but there was little doubt that compilers for platforms which could easily and cheaply uphold such assumptions should do so. The authors of the C89 Standard didn't bother to expressly say that because:
Compilers whose writers weren't being deliberately obtuse would continue doing such things when practical without having to be told (the rationale given for promoting small unsigned values to signed strongly reinforces this view).
The Standard only required implementations to be capable of running one possibly-contrived program without a stack overflow; it recognized that an obtuse implementation could treat every other program as invoking Undefined Behavior, but didn't think it worth worrying about obtuse compiler writers producing implementations that were "conforming" but useless.
Although "C89" was interpreted contemporaneously as meaning "the language defined by C89, plus whatever additional features and guarantees the platform provides", the authors of gcc have been pushing an interpretation which excludes any features and guarantees beyond those mandated by C89.
The biggest single difference, I think, is function prototyping and the syntax for describing the types of function arguments.
Despite all the claims to the contrary, K&R C was and is quite capable of providing any sort of stuff, from low down close to the hardware on up.

The problem now is to find a compiler (preferably free) that can give a clean compile on a couple of million lines of K&R C without having to mess with it, and that runs on something like an AMD multi-core processor.

As far as I can see, having looked at the source of the GCC 4.x.x series, there is no simple hack to reactivate the -traditional and -cpp-traditional flag functionality to their previous working state without more effort than I am prepared to put in. It would be simpler to build a K&R pre-ANSI compiler from scratch.