Unspecified behaviour about "object having more than one object representation" - c

Still struggling with C (C99) undefined and unspecified behaviours.
This time it is the following Unspecified Behaviour (Annex J.1):
The representation used when storing a value in an object that has
more than one object representation for that value (6.2.6.1).
The corresponding section 6.2.6.1 states:
Where an operator is applied to a value that has more than one object
representation, which object representation is used shall not affect
the value of the result (43). Where a value is stored in an object using
a type that has more than one object representation for that value, it
is unspecified which representation is used, but a trap representation
shall not be generated.
with the following note 43:
It is possible for objects x and y with the same effective type T to
have the same value when they are accessed as objects of type T, but
to have different values in other contexts. In particular, if == is
defined for type T, then x == y does not imply that memcmp(&x, &y, sizeof(T)) == 0. Furthermore, x == y does not necessarily imply that x
and y have the same value; other operations on values of type T may
distinguish between them.
I don't even understand what would be a value that has more than one object representation. Is it related for example to a floating point representation of 0 (negative and positive zero) ?

Most of this language is the C standard going well out of its way to allow for continued use on Burroughs B-series mainframes (AFAICT the only surviving ones-complement architecture). Unless you have to work with those, or certain uncommon microcontrollers, or you're seriously into retrocomputing, you can safely assume that the integer types have only one object representation per value, and that they have no padding bits. You can also safely assume that all integer types have no trap representations, except that you must take this line of J.2
[the behavior is undefined if ...] the value of an object with automatic storage duration is used while it is indeterminate
as if it were normative and as if the crossed-out words ("with automatic storage duration") were not present. (This rule is not supported by a close reading of the actual normative text, but it is nonetheless the rule adopted by all of the current generation of optimizing compilers.)
Concrete examples of types that can have more than one object representation for a value on a modern, non-exotic implementation include:
_Bool: the effect of overwriting a _Bool object with the representation of an integer value other than an appropriately sized 0 or 1 is unspecified (a sketch below illustrates this).
pointer types: some architectures ignore the low bits of a pointer to a type whose minimum alignment is greater than 1 (e.g. (int*)0x8000_0000 and (int*)0x8000_0001 might be treated as referring to the same int object; this is an intentional hardware feature, facilitating the use of tagged pointers)
floating point types: IEC 60559 allows all of the many representations of NaN to be treated identically (and possibly squashed together) by the hardware. (Note: +0 and −0 are distinct values in IEEE floating point, not different representations of the same value.)
These are also the scalar types that can have trap representations in modern implementations. In particular, Annex F specifically declares the behavior of signaling NaN to be undefined, even though it's well-defined in an abstract implementation of IEC 60559.
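As a small illustration of the _Bool point above, here is a minimal sketch. It assumes _Bool occupies a single byte (true on mainstream implementations but not guaranteed), and what happens after the memcpy is unspecified, so different compilers and optimization levels may print different things:
#include <stdio.h>
#include <string.h>

int main(void) {
    _Bool b = 1;
    unsigned char raw = 2;   /* not the object representation of 0 or 1 */
    memcpy(&b, &raw, 1);     /* assumes _Bool occupies exactly one byte */
    /* b now holds an object representation the implementation may never
       generate by itself; how it behaves in comparisons is unspecified. */
    printf("(int)b = %d, b == 1: %d, b != 0: %d\n", (int)b, b == 1, b != 0);
    return 0;
}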

I don't even understand what would be a value that has more than one object representation. Is it related for example to a floating point representation of 0 (negative and positive zero) ?
No, negative and positive zero are different values.
In practice, you probably don't need to worry about values with different object representations, but one possible example would involve integer types that include padding bits. For example, suppose your implementation provided a 15-(value-)bit unsigned integer type, whose storage size was 16 bits. Suppose also that the padding bit in the representation of that type were completely ignored for the purpose of evaluating objects (that is, that the type afforded no trap representations). Then each value representable by that type would have two distinct object representations, differing in the value of the padding bit.
The standard says that in such a case, you cannot rely on a particular choice between those value representations to be made under any given circumstances, but also that it doesn't matter when such objects are operands of any C operator. Note 43 clarifies that the difference may nevertheless be felt in other ways.
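There is no portable way to construct such a padded integer type, but you can at least check whether the integer types of your implementation carry padding bits. A minimal sketch (on virtually all modern implementations it reports 0 padding bits):
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Count the value bits of unsigned int by repeatedly halving UINT_MAX;
       any difference from the storage width must be padding bits. */
    unsigned int max = UINT_MAX;
    int value_bits = 0;
    while (max != 0) {
        max >>= 1;
        value_bits++;
    }
    printf("storage bits: %zu, value bits: %d, padding bits: %zu\n",
           sizeof(unsigned int) * CHAR_BIT, value_bits,
           sizeof(unsigned int) * CHAR_BIT - (size_t)value_bits);
    return 0;
}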

As you suspected, -0.0 is a good candidate but only for the last phrase:
Furthermore, x == y does not necessarily imply that x and y have the same value; other operations on values of type T may distinguish between them.
#include <stdio.h>
#include <string.h>

int main(void) {
    double x = 0.0;
    double y = -0.0;
    if (x == y) {
        printf("x and y have the same value\n");
    }
    if (memcmp(&x, &y, sizeof(double)) != 0) {
        printf("x and y have a different representation\n");
    }
    if (1 / x != 1 / y) {
        printf("1/x and 1/y have a different value\n");
    }
    return 0;
}
Another example of a value with more than one possible representation is NaN. 0.0 / 0.0 evaluates to a NaN value, which may have a different representation from the one produced by the macro NAN or another operation producing NaN or even the same expression 0.0 / 0.0 evaluated again. memcmp() may show that the representations differ. This example however does not really illustrate the purpose of the Standard's quote in the question as these values do not match per the == operator.
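A minimal sketch of that observation, assuming an implementation that follows Annex F (so 0.0 / 0.0 yields a NaN rather than undefined behavior); whether the two NaNs share an object representation is up to the implementation:
#include <stdio.h>
#include <string.h>
#include <math.h>

int main(void) {
    double a = NAN;        /* NaN produced by the macro */
    double b = 0.0 / 0.0;  /* NaN produced by an operation (raises FE_INVALID) */
    printf("a == b: %d\n", a == b);                             /* always 0: NaN never compares equal */
    printf("same bytes: %d\n", memcmp(&a, &b, sizeof a) == 0);  /* may print 0 or 1 */
    return 0;
}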
The text you quoted from Annex J seems to specifically address some architectures, rare nowadays, that have padding bits and/or negative-number representations with two different representations for 0. All modern systems use two's complement to represent negative numbers, where all bit patterns represent different values, but four decades ago some fairly common mainframes used ones' complement or sign-and-magnitude, where two different bit patterns could represent the value 0.

Related

Can a type in C have more than one object representation?

The C99 standard, section 6.2.6.1 paragraph 8, states:
When an operator is applied to a value that has more than one object
representation, which object representation is used shall not affect
the value of the result (43). Where a value is stored in an object using a
type that has more than one object representation for that value, it
is unspecified which representation is used, but a trap representation
shall not be generated.
I understood object to mean a location (bytes) in memory and value as the interpretation of those bytes based on the type used to access it. If so, then:
How can a value have more than one object representation?
How can a type have more than one object representation for a value?
The standard adds the below in the footnote:
It is possible for objects x and y with the same effective type T to have the same value when they are accessed as objects of type T, but to have different values in other contexts. In particular, if == is defined for type T, then x == y does not imply that memcmp(&x, &y, sizeof (T)) == 0. Furthermore, x == y does not necessarily imply that x and y have the same value; other operations on values of type T may distinguish between them.
Still, it's not clear to me. Can someone please simplify it for me and explain with examples?
An object is a region of storage (memory) that can contain values of a certain type [C18 3.15].
An object representation is the bytes that make up the contents of an object [C18 6.2.6.1].
Not every possible combination of bytes in an object representation has to correspond to a value of the type (an object representation that doesn't is called a trap representation [C18 3.19.4]).
And not all the bits in an object representation have to participate in representing a value. Consider the following type:
struct A
{
    char c;
    int n;
};
Compilers are allowed to (and generally will) insert padding bytes between the members c and n of this struct to ensure correct alignment of n. These padding bytes are part of an object of type struct A. They are, thus, part of the object representation. But the values of these padding bytes do not have any effect on the logical value of type struct A that is stored in the object.
Let's say we're on a target platform where bytes consist of 8 bits, an int consists of 4 bytes in little endian order, and there are 3 padding bytes between c and n to ensure that n starts at an offset that is a multiple of 4. The value (struct A){42, 1} may be stored in an object as
2A 00 00 00 01 00 00 00
But it may as well be stored in an object as
2A FF FF FF 01 00 00 00
or whatever else the padding bytes may happen to be. Each of these sequences of bytes is a valid object representation of the same logical value of type struct A.
This is also what the footnote is about. If you had two objects x and y that each contained a different object representation of the same value of type struct A, then comparing them member by member (== is not defined for structure types, but x.c == y.c && x.n == y.n holds) shows that they hold the same value, while simply performing a memcmp() may report a difference, since memcmp() just compares the bytes of the object representation without any consideration of what logical value is actually stored in these objects…
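A small sketch of that difference. Whether the padding bytes actually keep the garbage written by memset is unspecified (member assignment need not touch them, but a compiler may overwrite them), so the memcmp line may print either result:
#include <stdio.h>
#include <string.h>

struct A { char c; int n; };

int main(void) {
    struct A x, y;
    memset(&x, 0x00, sizeof x);   /* different garbage in the two objects, */
    memset(&y, 0xFF, sizeof y);   /* so any padding bytes likely differ    */
    x.c = 42; x.n = 1;
    y.c = 42; y.n = 1;
    printf("members equal: %d\n", x.c == y.c && x.n == y.n);      /* 1 */
    printf("memcmp == 0:   %d\n", memcmp(&x, &y, sizeof x) == 0); /* 0 or 1, unspecified */
    return 0;
}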
How can a value have more than one object representation?
How can a type have more than one object representation for a value?
By not having each bit pattern correspond to a different value.
Typically one bit pattern is preferred as the canonical form, and the others are rarely generated by normal means.
The x86 extended precision format contains bit patterns that represent the same value as other bit patterns - even with the same sign. Research the "pseudo denormal" and "unnormal" bit patterns.
A side effect is that this 80-bit encoding does not realize 2^80 different values due to this redundancy (even after accounting for not-a-numbers).
Using 2 double to encode a long double has a similar impact.
Oversimplified example of 2 double representing a long double value:
1000001.0 + 0.0 (canonical form) same value as 1000000.0 + 1.0
Decimal floating point has this issue too.
Because the significand is not normalized, most values with fewer than 16 significant digits have multiple possible representations; 1×10^2 = 0.1×10^3 = 0.01×10^4, etc.
As multiple bit patterns for the same value reduce the gamut of possible numbers, such encodings tend to fall out of favor compared to non-redundant ones. An effect is that we do not see them as much these days.
A reason for their existence in the first place was to facilitate hardware realizations or to be simple and easy to define (let's explicitly encode the most significant digit for our new FP format - using an implied one is so confusing).
@Eric brought up an interesting comment concerning value and operator that hinges on:
Where an operator is applied to a value that has more than one object representation, which object representation is used shall not affect the value of the result. C17 §6.2.6.1 8
x = +0.0 and y = -0.0 have the same numeric value of zero, yet they still qualify as different values, since the operator / distinguishes them, as in 1.0/x != 1.0/y.
Still, the various FP examples above have many other cases where x and y have different bit patterns yet the same value.
explain with examples?
For example, on a compiler that uses decimal floating point according to the IEEE 754-2008 standard to represent the float type, and assuming that the stars are properly aligned - CHAR_BIT == 8, sizeof(int) == 4, floats are 32 bits wide with no padding bits, and the implementation is little endian - the following code (tested with gcc 9.2 with -Dfloat=typeof(1.0df)):
#include <stdio.h>
#include <string.h>

int main(void) {
    float a, b;
    // simulate `a = 314` and `b = 314` with a compiler
    // that chose to use different object representations for the same value
    memcpy(&a, (int[1]){0x3280013a}, 4);
    memcpy(&b, (int[1]){0x32000c44}, 4);
    printf("a = %d, b = %d\n", (int)a, (int)b);
    printf("a %s b and memcmp(&a, &b) %s 0\n",
           a == b ? "==" : "!=",
           memcmp(&a, &b, sizeof(a)) == 0 ? "==" : "!=");
    return 0;
}
should (could) output:
a = 314, b = 314
a == b and memcmp(&a, &b) != 0
A simple example of a value with more than one representation is an IEEE floating point zero. It has a "positive zero" and a "negative zero" representations.
Note: An implementation that conforms to IEC 60559 must distinguish between positive and negative zeros, so in such an implementation they are different values rather than different representations of the same value. However, an implementation doesn't need to conform to IEC 60559. Such implementations are allowed to e.g. always return the same value for signbit of zero, even though the underlying hardware distinguishes +0 and -0.
On a sign-and-magnitude machine, integer zeros also have more than one representation.
On a segmented architecture like the 16-bit 8086, "long" pointers have more than one representation, for example 0x0000:0x0010 and 0x0001:0x0000 are two representations of the same pointer value.
Finally, in any data type with padding, padding bits do not influence the value. Examples include structs with padding holes.

Is operator ≤ UB for floating point comparison? [closed]

There are numerous references on the subject (here or here). However I still fail to understand why the following is not considered UB and properly reported by my favorite compiler (insert clang and/or gcc) with a neat warning:
// f1, f2 and epsilon are defined as double
if ( f1 / f2 <= epsilon )
As per C99:TC3, 5.2.4.2.2 §8, we have:
Except for assignment and cast (which remove all extra range and
precision), the values of operations with floating operands and values
subject to the usual arithmetic conversions and of floating constants
are evaluated to a format whose range and precision may be greater
than required by the type. [...]
With typical compilation, f1 / f2 would be read directly from the FPU. I've tried here using gcc -m32, with gcc 5.2. So f1 / f2 is (over here) in an 80-bit (just a guess, I don't have the exact spec here) floating point register. There is no type promotion here (per the standard).
I've also tested clang 3.5; this compiler seems to cast the result of f1 / f2 back to a normal 64-bit floating point representation (this is an implementation-defined behavior but for my question I prefer the default gcc behavior).
As per my understanding, the comparison will be done between a type whose size we don't know (i.e. a format whose range and precision may be greater) and epsilon, whose size is exactly 64 bits.
What I really find hard to understand is equality comparison between a well-known C type (e.g. a 64-bit double) and something whose range and precision may be greater. I would have assumed that somewhere in the standard some kind of promotion would be required (e.g. that the standard would mandate that epsilon be promoted to a wider floating point type).
So the only legitimate syntaxes should instead be:
if ( (double)(f1 / f2) <= epsilon )
or
double res = f1 / f2;
if ( res <= epsilon )
As a side note, I would have expected the literature to document only the operator <, in my case:
if ( f1 / f2 < epsilon )
Since it is always possible to compare floating point values of different sizes using operator <.
So in which cases would the first expression make sense? In other words, how could the standard define some kind of equality comparison between two floating point representations of different sizes?
EDIT: The whole confusion here was that I assumed it was possible to compare two floats of different sizes, which cannot happen (thanks @DevSolar!).
<= is well-defined for all possible floating point values.
There is one exception though: the case when at least one of the arguments is uninitialised. But that's more to do with reading an uninitialised variable being UB, not with <= itself.
I think you're confusing implementation-defined with undefined behavior. The C language doesn't mandate IEEE 754, so all floating point operations are essentially implementation-defined. But this is different from undefined behavior.
After a bit of chat, it became clear where the miscommunication came from.
The quoted part of the standard explicitly allows an implementation to use wider formats for floating operands in calculations. This includes, but is not limited to, using the long double format for double operands.
The standard section in question also does not call this "type promotion". It merely refers to a format being used.
So, f1 / f2 may be done in some arbitrary internal format, but without making the result any other type than double.
So when the result is compared (by either <= or the problematic ==) to epsilon, there is no promotion of epsilon (because the result of the division never got a different type), but by the same rule that allowed f1 / f2 to happen in some wider format, epsilon is allowed to be evaluated in that format as well. It is up to the implementation to do the right thing here.
The value of FLT_EVAL_METHOD might tell exactly what an implementation is doing (if set to 0, 1, or 2 respectively), or it might have a negative value, which indicates "indeterminable" (-1) or "implementation-defined", which means "look it up in your compiler manual".
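A quick way to see what your implementation does is to print the macro, which <float.h> has provided since C99. A minimal sketch:
#include <stdio.h>
#include <float.h>

int main(void) {
    /* 0, 1 or 2 as described above; a negative value means
       indeterminable (-1) or otherwise implementation-defined. */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}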
This gives an implementation "wiggle room" to do any kind of funny things with floating operands, as long as at least the range / precision of the actual type is preserved. (Some older FPUs had "wobbly" precisions, depending on the kind of floating operation performed. The quoted part of the standard caters for exactly that.)
In no case may any of this lead to undefined behaviour. Implementation-defined, yes. Undefined, no.
The only case where you would get undefined behavior is when a large floating point variable gets demoted to a smaller one which cannot represent the contents. I don't quite see how that applies in this case.
The text you quote is concerned with whether or not floats may be evaluated as doubles etc., as indicated by the text you unfortunately didn't include in the quote:
The use of evaluation formats is characterized by the
implementation-defined value of FLT_EVAL_METHOD:
-1 indeterminable;
0 evaluate all operations and constants just to the range and precision of the type;
1 evaluate operations and constants of type float and double to the range and precision of the double type, evaluate long double operations and constants to the range and precision of the long double type;
2 evaluate all operations and constants to the range and precision of the long double type.
However, I don't believe this macro overrides the behavior of the usual arithmetic conversions. The usual arithmetic conversions guarantee that you can never compare two floating point variables of different sizes. So I don't see how you could run into undefined behavior here. The only possible issue you would have is performance.
In theory, if FLT_EVAL_METHOD == 2, your operands could indeed get evaluated as type long double. But please note that if the compiler allows such implicit promotions to larger types, there will be a reason for it.
According to the text you cited, explicit casting will counter this compiler behavior.
In which case the code if ( (double)(f1 / f2) <= epsilon ) is nonsense. By the time you cast the result of f1 / f2 to double, the calculation is already done and has been carried out in long double. The comparison of the result against epsilon will however be carried out in double, since you forced this with the cast.
To avoid long double entirely, you would have to write the code as:
if ( (double)((double)f1 / (double)f2) <= epsilon )
or to increase readability, preferably:
double div = (double)f1 / (double)f2;
if( (double)div <= (double)epsilon )
But again, code like this only makes sense if you know that there will be implicit promotions, which you wish to avoid to increase performance. In practice, I doubt you'll ever run into that situation, as the compiler is most likely far more capable than the programmer of making such decisions.

Are there any (valid) C implementations where float cannot represent the value 0?

If all floats are represented as x = (-1)^s * 2^e * 1.m, there is no way to store zero without support for special cases.
No, all conforming C implementations must support a floating-point value of 0.0.
The floating-point model is described in section 5.2.4.2.2 of the C standard (the link is to a recent draft). That model does not make the leading 1 in the significand (sometimes called the mantissa) implicit, so it has no problem representing 0.0.
Most implementations of binary floating-point don't store the leading 1, and in fact the formula you cited in the question:
x = (-1)^s * 2^e * 1.m
is typically correct (though the way e is stored can vary).
In such implementations, including IEEE, a special-case bit pattern, typically all-bits-zero, is used to represent 0.0.
Following up on the discussion in the comments, tmyklebu argues that not all numbers defined by the floating-point model in 5.2.4.2.2 are required to be representable. I disagree; if not all such numbers are required to be representable, then the model is nearly useless. But even leaving that argument aside, there is an explicit requirement that 0.0 must be representable. N1570 6.7.9 paragraph 10:
If an object that has static or thread storage duration is not
initialized explicitly, then:
...
if it has arithmetic type, it is initialized to (positive or unsigned) zero;
...
This is a very long-standing requirement. A C reference from 1975 (3 years before the publication of K&R1) says:
The initial value of any externally-defined object not explicitly initialized is guaranteed to be 0.
which implies that there must be a representable 0 value. K&R1 (published in 1978) says, on page 198:
Static and external variables which are not initialized are guaranteed
to start off as 0; automatic and register variables which are not
initialized are guaranteed to start off as garbage.
Interestingly, the 1990 ISO C standard (equivalent to the 1989 ANSI C standard) is slightly less explicit than its predecessors and successors. In 6.5.7, it says:
If an object that has static storage duration is not initialized
explicitly, it is initialized implicitly as if every member that has
arithmetic type were assigned 0 and every member that has pointer type
were assigned a null pointer constant.
If a floating-point type were not required to have an exact representation for 0.0, then the "assigned 0" phrase would imply a conversion from the int value 0 to the floating-point type, yielding a small value close to 0.0. Still, C90 has the same floating-point model as C99 and C11 (but with no mention of subnormal or unnormalized values), and my argument above about model numbers still applies. Furthermore, the C90 standard was officially superseded by C99, which in turn was superseded by C11.
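To make the 6.7.9 argument concrete, here is a minimal sketch; on any conforming implementation the static float must start out as zero, so the program prints 1:
#include <stdio.h>

static float f;   /* not explicitly initialized: 6.7.9p10 requires it to start as zero */

int main(void) {
    printf("%d\n", f == 0.0f);
    return 0;
}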
After searching for a while I found this.
ISO/IEC 9899:201x section 6.2.5, Paragraph 13
Each complex type has the same representation and alignment requirements as an array
type containing exactly two elements of the corresponding real type; the first element is
equal to the real part, and the second element to the imaginary part, of the complex number.
section 6.3.1.7, Paragraph 1
When a value of real type is converted to a complex type, the real part of the complex
result value is determined by the rules of conversion to the corresponding real type and
the imaginary part of the complex result value is a positive zero or an unsigned zero.
So, if I understand this right, any implementation that supports C99 (the first C standard with _Complex types) must support a floating-point value of 0.0.
EDIT
Keith Thompson pointed out that complex types are optional in C99, so this argument is pointless.
I believe the following floating-point system is an example of a conforming floating-point arithmetic without a representation for zero:
A float is a 48-bit number with one sign bit, 15 exponent bits, and 32 significand bits. Every choice of sign bit, exponent bits, and significand bits corresponds to a normal floating-point number with an implied leading 1 bit.
Going through the constraints in section 5.2.4.2.2 of the draft C standard Keith Thompson linked:
This floating-point system plainly conforms to paragraphs 1 and 2 of 5.2.4.2.2 in the draft standard.
We only represent normalised numbers; paragraph 3 merely permits us to go farther.
Paragraph 4 is tricky; it says that zero and "values that are not floating-point numbers" may be signed or unsigned. But paragraph 3 didn't force us to have any values that aren't floating-point numbers, so I can't imagine interpreting paragraph 4 as requiring there to be a zero.
The range of representable values in paragraph 5 is apparently -0x1.ffffffffp+16383 to 0x1.ffffffffp+16383.
Paragraph 6 states that +, -, *, / and the math library have implementation-defined accuracy. Still OK.
Paragraph 7 doesn't really constrain this implementation as long as we can find appropriate values for all the constants.
We can set FLT_ROUNDS to 0; this way, I don't even have to specify what happens when addition or subtraction overflows.
FLT_EVAL_METHOD shall be 0.
We don't have subnormals, so FLT_HAS_SUBNORM shall be zero.
FLT_RADIX shall be 2, FLT_MANT_DIG shall be 32, FLT_DECIMAL_DIG shall be 10, FLT_DIG shall be 9, FLT_MIN_EXP shall be -16383, FLT_MIN_10_EXP shall be -4933, FLT_MAX_EXP shall be 16384, and FLT_MAX_10_EXP shall be 4933.
You can work out FLT_MAX, FLT_EPSILON, FLT_MIN, and FLT_TRUE_MIN.

Is using memcmp on array of int strictly conforming?

Is the following program a strictly conforming program in C? I am interested in c90 and c99 but c11 answers are also acceptable.
#include <stdio.h>
#include <string.h>

struct S { int array[2]; };

int main () {
    struct S a = { { 1, 2 } };
    struct S b;
    b = a;
    if (memcmp(b.array, a.array, sizeof(b.array)) == 0) {
        puts("ok");
    }
    return 0;
}
In comments to my answer in a different question, Eric Postpischil insists that the program output will change depending on the platform, primarily due to the possibility of uninitialized padding bits. I thought the struct assignment would overwrite all bits in b to be the same as in a. But, C99 does not seem to offer such a guarantee. From Section 6.5.16.1 p2:
In simple assignment (=), the value of the right operand is converted to the type of the assignment expression and replaces the value stored in the object designated by the left operand.
What is meant by "converted" and "replaces" in the context of compound types?
Finally, consider the same program, except that the definitions of a and b are made global. Would that program be a strictly conforming program?
Edit: Just wanted to summarize some of the discussion material here, and not add my own answer, since I don't really have one of my own creation.
The program is not strictly conforming. Since the assignment is by value and not by representation, b.array may or may not contain bits set differently from a.array.
a doesn't need to be converted since it is the same type as b, but the replacement is by value, and done member by member.
Even if the definitions of a and b are made global, post assignment, b.array may or may not contain bits set differently from a.array. (There was little discussion about the padding bytes in b, but the posted question was not about structure comparison. C99 lacks a mention of how padding is initialized in static storage, but C11 explicitly states it is zero initialized.)
On a side note, there is agreement that the memcmp is well defined if b was initialized with memcpy from a.
My thanks to all involved in the discussion.
In C99 §6.2.6
§6.2.6.1 General
1 The representations of all types are unspecified except as stated in this subclause.
[...]
4 [..] Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations.
6 When a value is stored in an object of structure or union type, including in a member object, the bytes of the object representation that correspond to any padding bytes take unspecified values.42)
42) Thus, for example, structure assignment need not copy any padding bits.
43) It is possible for objects x and y with the same effective type T to have the same value when they are accessed as objects of type T, but to have different values in other contexts. In particular, if == is defined for type T, then x == y does not imply that memcmp(&x, &y, sizeof (T)) == 0. Furthermore, x == y does not necessarily imply that x and y have the same value; other operations on values of type T may distinguish between them.
§6.2.6.2 Integer Types
[...]
2 For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits;[...]
[...]
5 The values of any padding bits are unspecified.[...]
In J.1 Unspecified Behavior
The value of padding bytes when storing values in structures or unions (6.2.6.1).
[...]
The values of any padding bits in integer representations (6.2.6.2).
Therefore there may be bits in the representation of a and b that differ while not affecting the value. This is the same conclusion as the other answer, but I thought that these quotes from the standard would be good additional context.
If you do a memcpy then the memcmp would always return 0 and the program would be strictly conforming. The memcpy duplicates the object representation of a into b.
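For illustration, the memcpy variant would look like the sketch below; since memcpy duplicates every byte of a, including any padding, the subsequent memcmp cannot report a difference:
#include <stdio.h>
#include <string.h>

struct S { int array[2]; };

int main(void) {
    struct S a = { { 1, 2 } };
    struct S b;
    memcpy(&b, &a, sizeof b);   /* copies the full object representation */
    if (memcmp(b.array, a.array, sizeof(b.array)) == 0) {
        puts("ok");             /* guaranteed: the bytes are identical by construction */
    }
    return 0;
}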
My opinion is that it is strictly conforming. According to 4.5 that Eric Postpischil mentioned:
A strictly conforming program shall use only those features of the
language and library specified in this International Standard. It
shall not produce output dependent on any unspecified, undefined, or
implementation-defined behavior, and shall not exceed any minimum
implementation limit.
The behavior in question is the behavior of memcmp, and this is well-defined, without any unspecified, undefined or implementation-defined aspects. It works on the raw bits of the representation, without knowing anything about the values, padding bits or trap representations. Thus the result (but not the functionality) of memcmp in this specific case depends on the representation of the values stored within these bytes.
Footnote 43) in 6.2.6.2:
It is possible for objects x and y with the same effective type T to
have the same value when they are accessed as objects of type T, but
to have different values in other contexts. In particular, if == is
defined for type T, then x == y does not imply that memcmp(&x, &y,
sizeof (T)) == 0. Furthermore, x == y does not necessarily imply that
x and y have the same value; other operations on values of type T may
distinguish between them.
EDIT:
Thinking it a bit further, I'm not so sure about the strictly conforming anymore because of this:
It shall not produce output dependent on any unspecified [...]
Clearly the result of memcmp depends on the unspecified representation, thereby running afoul of this clause, even though the behavior of memcmp itself is well defined. The clause doesn't say anything about how much well-defined functionality sits between the unspecified value and the output.
So it is not strictly conforming.
EDIT 2:
I'm not so sure that it will become strictly conforming when memcpy is used to copy the struct. According to Annex J, the unspecified behavior happens when a is initialized:
struct S a = { { 1, 2 } };
Even if we assume that the padding bits won't change and memcmp always returns 0, it still uses the padding bits to obtain its result. And it relies on the assumption that they won't change, but there is no guarantee in the standard about this.
We should differentiate between padding bytes in structs, used for alignment, and padding bits in specific native types like int. While we can safely assume that the padding bytes won't change (only because there is no real reason for them to), the same does not apply to the padding bits. The standard mentions a parity flag as an example of a padding bit. This may be a software function of the implementation, but it may as well be a hardware function. Thus there may be other hardware flags used for the padding bits, including one that changes on read accesses for whatever reason.
We will have difficulties finding such an exotic machine and implementation, but I see nothing that forbids this. Correct me if I'm wrong.

Problems casting NAN floats to int

Ignoring why I would want to do this, the IEEE 754 floating-point standard doesn't define the behavior for the following:
float h = NAN;
printf("%x %d\n", (int)h, (int)h);
Gives: 80000000 -2147483648
Basically, regardless of what value of NAN I give, it outputs 80000000 (hex) or -2147483648 (dec). Is there a reason for this and/or is this correct behavior? If so, how come?
The way I'm giving it different values of NaN are here:
How can I manually set the bit value of a float that equates to NaN?
So basically, are there cases where the payload of the NaN affects the output of the cast?
Thanks!
The result of a cast of a floating point number to an integer is undefined/unspecified for values not in the range of the integer variable (±1 for truncation).
Clause 6.3.1.4:
When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.
If the implementation defines __STDC_IEC_559__, then for conversions from a floating-point type to an integer type other than _Bool:
if the floating value is infinite or NaN or if the integral part of the floating value exceeds the range of the integer type, then the "invalid" floating-point exception is raised and the resulting value is unspecified.
(Annex F [normative], point 4.)
If the implementation doesn't define __STDC_IEC_559__, then all bets are off.
There is a reason for this behavior, but it is not something you should usually rely on.
As you note, IEEE-754 does not specify what happens when you convert a floating-point NaN to an integer, except that it should raise an invalid operation exception, which your compiler probably ignores. The C standard says the behavior is undefined, which means not only do you not know what integer result you will get, you do not know what your program will do at all; the standard allows the program to abort or get crazy results or do anything. You probably executed this program on an Intel processor, and your compiler probably did the conversion using one of the built-in instructions. Intel specifies instruction behavior very carefully, and the behavior for converting a floating-point NaN to a 32-bit integer is to return 0x80000000, regardless of the payload of the NaN, which is what you observed.
Because Intel specifies the instruction behavior, you can rely on it if you know the instruction used. However, since the compiler does not provide such guarantees to you, you cannot rely on this instruction being used.
First, a NaN ("not a number") is anything that is not considered a floating point number according to the IEEE standard.
So it can be several things. In the compiler I work with there is NAN and -NAN, so it's not about only one value.
Second, the standard <math.h> header provides the isnan macro to test for this case, so the programmer doesn't have to deal with the bits himself. To summarize, I don't think peeking at the value makes any difference. You might peek at the value to see its IEEE construction - sign, mantissa and exponent - but, again, the library provides the functions to deal with it.
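A minimal example of that test; isnan (from <math.h>, standard since C99) returns nonzero for every NaN, regardless of its payload or sign:
#include <stdio.h>
#include <math.h>

int main(void) {
    float h = NAN;
    printf("isnan(h) = %d\n", isnan(h) != 0);   /* prints 1 for any NaN */
    return 0;
}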
I do have more to say about your testing, however.
float h = NAN;
printf("%x %d\n", (int)h, (int)h);
The cast you did truncates the float to convert it to an int. If you want to get the integer whose bits are the representation of the float, do the following:
printf("%x %d\n", *(int *)&h, *(int *)&h);
That is, you take the address of the float, then refer to it as a pointer to int, and eventually take the int value. This way the bit representation is preserved.
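Note that *(int *)&h technically breaks the effective-type (strict aliasing) rules; here is a sketch of the same idea using memcpy instead, assuming float and the chosen integer type are both 32 bits wide:
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    float h = NAN;
    uint32_t bits;
    memcpy(&bits, &h, sizeof bits);   /* copy the object representation */
    printf("%08x\n", (unsigned)bits); /* e.g. 7fc00000 for a typical default quiet NaN */
    return 0;
}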

Resources