I am currently working on an embedded microcontroller and use a custom printf routine. The toolchain is the GCC Toolchain for the AVR32 architecture.
I have the problem that upon calling vsnprintf or similar for the second time, the CPU enters an exception condition.
From support, I received the answer that:
We could not find any obvious reason for such behavior. However, creating a float overflow condition by writing byte by byte is not safe. We cannot ensure the value generated by this and it is recommended to check using “FLT_MAX”.
Now I am wondering: What are "illegal" float values? Shouldn't all bit combinations represent at least some value? If relevant: sizeof(float) is 4 bytes.
Summary
I suggest you print the bits of floating-point values as if they were a hexadecimal integer, as shown in code below, so that you can analyze those bits to see if they contain the values you are attempting to compute or have been modified improperly due to some bug.
Details
The AVR32 CPU Technical Reference Manual says “The floating point hardware conforms to the requirements of the C standard, which is based on the IEEE 754 floating point standard.” The latter clause is false; the C standard is not based on IEEE 754. The C standard does specify bindings to IEEE 754 (via the name IEC 60559) as an optional feature of C implementations. I will presume that the model of AVR32 CPU you are using conforms to IEEE 754 to some degree.
There are no “illegal” values in IEEE 754. There are values that do not represent numbers, and some of those values are intended to cause exceptions. Such a value is called a NaN (for “Not a Number”). There are quiet NaNs and signaling NaNs. Quiet NaNs are intended to pass through operations silently, producing a NaN result. E.g., 3 + NaN should produce NaN. Signaling NaNs are intended to cause exceptions, which may cause changes to program control (such as signals or program aborts).
The technical reference manual cited above also says “Signalling NaN are not provided, all NaN are non-signalling (quiet).”
A good vsnprintf routine should accept quiet NaN values for printing and should format them by producing a string such as “NaN”. When a signaling NaN is passed for formatting, I suppose it might be reasonable either to format it or to produce an exception.
I expect the message you received from support is suggesting that your software created some kind of NaN, and that vsnprintf cannot handle these. From the phrasing, I think their response is speculative.
If you are creating floating-point values by assembling bytes, then you may have created a NaN when you did not intend to, if there was some error in your software. I suggest that you debug this by using vsnprintf to print the bytes of the floating-point value instead of printing it with a floating-point format specifier.
If the GCC version you are using has the usual features of GCC, and the unsigned int in your implementation is 32 bits, you can format the bits of a 32-bit float value x as a hexadecimal value using:
vsnprintf(Buffer, BufferLength, "0x%x",
(union { float f; unsigned int u; }) {x} .u);
The second line uses a compound literal to put the value x into a union and reinterpret its bytes as an unsigned int. (This is a supported way in C to reinterpret the bytes of an object. Many people use pointer aliasing, which works in GCC if the appropriate flag is used, but it is not generally supported by the C standard. Another supported method is to copy the bytes, as with unsigned int u; memcpy(&u, &x, sizeof u);.)
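For illustration, here is a minimal, self-contained sketch of the memcpy approach (using the hosted printf rather than your custom routine, and assuming float and uint32_t are both 32 bits wide):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret the bytes of a float as a 32-bit unsigned integer via
   memcpy (well defined in standard C) and print them in hexadecimal. */
static void print_float_bits(float x)
{
    uint32_t u;                      /* assumes float is 32 bits */
    memcpy(&u, &x, sizeof u);
    printf("%g has bit pattern 0x%08" PRIX32 "\n", (double)x, u);
}

int main(void)
{
    print_float_bits(1.0f);          /* expected 0x3F800000 */
    print_float_bits(-0.0f);         /* expected 0x80000000 */
    return 0;
}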
Once you see what the bits in the float are, you can interpret them manually from information in the IEEE 754 standard or using an online analyzer. (Select the “hexadecimal” button to input hexadecimal values to be interpreted.)
In an IEEE-754 32-bit binary floating-point object, the value is a NaN if:
Bit 31 has any value. (It is the sign bit, irrelevant for recognizing a NaN.)
Bits 30 to 23 are all ones.
Bits 22 to 0 are not all zeros.
(If Bits 30 to 23 are all ones but bits 22 to 0 are all zeroes, the value is an infinity. This is not illegal but might also cause a low-quality vsnprintf to generate an exception.)
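If it helps to automate that check, here is a small sketch that applies the rules above to a raw 32-bit pattern (the function names are mine, not from any library; IEEE-754 binary32 layout is assumed):
#include <stdbool.h>
#include <stdint.h>

/* NaN: exponent field (bits 30-23) all ones and significand (bits 22-0) nonzero. */
static bool is_nan_bits(uint32_t bits)
{
    return (bits & 0x7F800000u) == 0x7F800000u
        && (bits & 0x007FFFFFu) != 0;
}

/* Infinity: exponent field all ones and significand all zeros (sign bit ignored). */
static bool is_inf_bits(uint32_t bits)
{
    return (bits & 0x7FFFFFFFu) == 0x7F800000u;
}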
I haven't worked on AVR32, but "illegal" float values aside, single-precision float behavior is generally an important topic in numerical methods. The maximal finite number for float is:
FLT_MAX = 3.40282e+38
but float also has limits on its resolution. The closer to zero you are, the finer the spacing between representable values.
For example:
the minimal spacing between values in [1,2] is 1.19209e-07 (that is 2^-23), also known as macheps, the machine epsilon (FLT_EPSILON from float.h);
the minimal spacing between values in [2,4] is 2 * 1.19209e-07 = 2 * 2^-23 = 2^-22.
It also works the other way:
the minimal spacing between values in [1/2,1] is 2^-24.
Why does this happen?
Think of a number as an integer part and a fractional part.
The larger the number is, the more bits its integer part needs, leaving fewer bits for the fraction; the symmetric argument applies to smaller numbers.
In conclusion:
Min for float (smallest positive normalized value, FLT_MIN): 1.17549e-38,
Max for float (FLT_MAX): 3.40282e+38.
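A small sketch that demonstrates this spacing on a hosted implementation (values shown assume IEEE-754 binary32; nextafterf is from <math.h>):
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_EPSILON          = %e\n", FLT_EPSILON);                     /* 2^-23 */
    printf("spacing just above 1 = %e\n", nextafterf(1.0f, 2.0f) - 1.0f);   /* 2^-23 */
    printf("spacing just above 2 = %e\n", nextafterf(2.0f, 4.0f) - 2.0f);   /* 2^-22 */
    printf("spacing just below 1 = %e\n", 1.0f - nextafterf(1.0f, 0.0f));   /* 2^-24 */
    printf("FLT_MIN              = %e\n", FLT_MIN);
    printf("FLT_MAX              = %e\n", FLT_MAX);
    return 0;
}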
Related
Although there are some answers on this website, I still can't figure out the meaning of sizeof(long double). Why is the output of printing var3 3.141592653589793115998?
When I execute the same code as another person, it produces different output. Could somebody help me solve this problem?
My test code:
float var1 =3.1415926535897932;
double var2=3.1415926535897932;
long double var3 =3.141592653589793213456;
printf("%d\n",sizeof(float));
printf("%d\n",sizeof(double));
printf("%d\n",sizeof(long double));
printf("%.16f\n",var1);
printf("%.16f\n",var2);
printf("%.21Lf\n",var3);
Output of my test code:
4
8
16
3.1415927410125732
3.1415926535897931
3.141592653589793115998
The code is identical to the other person's, but their output is:
4
8
12
3.1415927410125732
3.1415926535897931
3.141592741012573213359
Could somebody tell me why our outputs are different?
Floating-point numbers (inside our computers) are not mathematical real numbers.
They have lots of counter-intuitive properties (e.g. (1.0-x) + x in your C code can differ from 1...). For more, read the floating-point-gui.de
Be also aware that a number is not its representation in digits. For example, most of your examples are approximations of the number π (which, intuitively speaking, has an infinite number of digits or bits, since it is a transcendental number, as proven by Ferdinand von Lindemann).
I still can't figure out the meaning of sizeof(long double).
It is the implementation-specific ratio of the number of bytes (or octets) in a long double automatic variable vs the number of bytes in a char automatic variable.
The C11 standard (read n1570 and see this reference) does allow an implementation to have sizeof(long double) being, like sizeof(char), equal to 1. I cannot name such an implementation, but it might be (theoretically) the case on some weird computer architectures (e.g. some DSP).
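As an aside, sizeof yields a size_t, so the portable format specifier is %zu; the %d used in the test code above happens to work on many systems but is not guaranteed. A minimal sketch:
#include <stdio.h>

int main(void)
{
    printf("sizeof(float)       = %zu\n", sizeof(float));
    printf("sizeof(double)      = %zu\n", sizeof(double));
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    return 0;
}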
Could somebody tell me why our outputs are different?
What makes you think they should be equal?
Practically speaking, floating point numbers are often IEEE754. But on IBM mainframes (e.g. z/Series) or on VAXes they are not.
float var1 =3.1415926535897932;
double var2 =3.1415926535897932;
Be aware that it could be possible to have a C implementation where (double)var1 != var2 or where var1 != (float)var2 after executing these above instructions.
If you need more precision than what long double achieves on your particular C implementation (e.g. your recent GCC compiler, which could be a cross-compiler), consider using some arbitrary-precision arithmetic library such as GMPlib.
I recommend carefully reading the documentation of printf(3), and of every other function that you are using from your C standard library. I also suggest reading the documentation of your C compiler.
You might be interested by static program analysis tools such as Frama-C or the Clang static analyzer. Read also this draft report.
If your C compiler is a recent GCC, compile with all warnings and debug info, so gcc -Wall -Wextra -g and learn how to use the GDB debugger.
Could somebody tell me why our outputs are different?
C allows different compilers/implementations to use different floating-point encodings and to handle evaluations in slightly different ways.
Precision
The difference in sizeof hints that the two implementations may employ different precision. Yet the difference could also be due to padding; in that case, extra bytes are added to preserve alignment for performance reasons.
A better precision assessment is to print epsilon: the difference between 1.0 and the next larger value of the type.
#include <float.h>
printf("%e %e %Le\n", FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON);
Sample result
1.192093e-07 2.220446e-16 1.084202e-19
FLT_EVAL_METHOD
When this is 0, floating-point expressions evaluate in the range and precision of their own type. With other values, like 2, floating-point expressions evaluate using wider types and only at the end save the result to the target type.
printf("FLT_EVAL_METHOD %d\n", FLT_EVAL_METHOD);
Two of several possible values are indicated below:
FLT_EVAL_METHOD
0: evaluate all operations and constants just to the range and precision of the type;
2: evaluate all operations and constants to the range and precision of the long double type.
Notice the constants 3.1415926535897932 and 3.141592653589793213456 are both normally double constants. Neither has an L suffix that would make them long double. Both have the same double value of 3.1415926535897931..., and var2, var3 should get the same value. Yet with FLT_EVAL_METHOD==2, constants can be evaluated as long double, and that is certainly what happened in "the output from another person" code.
Print FLT_EVAL_METHOD to see that difference.
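For example, this sketch shows the effect on an implementation where long double is wider than double: with FLT_EVAL_METHOD == 0 the two printed lines differ, with 2 they match.
#include <float.h>
#include <stdio.h>

int main(void)
{
    long double without_suffix = 3.141592653589793213456;   /* a double constant */
    long double with_suffix    = 3.141592653589793213456L;  /* a long double constant */

    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
    printf("without L suffix: %.21Lf\n", without_suffix);
    printf("with    L suffix: %.21Lf\n", with_suffix);
    return 0;
}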
I'm changing an uint32_t to a float but without changing the actual bits.
Just to be sure: I don't want to cast it. So float f = (float) i is the exact opposite of what I want to do because it changes bits.
I'm going to use this to convert my (pseudo) random numbers to float without doing unneeded math.
What I'm currently doing and what is already working is this:
float random_float( uint64_t seed ) {
    // Generate random and change bit format to ieee
    uint32_t asInt = (random_int( seed ) & 0x7FFFFF) | (0x7E000000>>1);
    // Make it a float
    return *(float*)(void*)&asInt; // <-- pretty ugly and needs a variable
}
The Question: Now I'd like to get rid of the asInt variable, and I'd like to know if there is a better / not so ugly way than getting the address of this variable, casting it twice and dereferencing it again.
You could try a union, as long as you make sure the types are identical in memory size:
union convertor {
    int asInt;
    float asFloat;
};
Then you can assign your int to asInt and read asFloat (or the other way around if you want to). I use it a lot when I need to do bitwise operations on one hand and still get a uint32_t representation of the number on the other hand.
[EDIT]
Like many of the commentators rightfully state, you must take into consideration bit patterns that do not represent ordinary numbers, like NaN, +INF, -INF, +0, -0.
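A small usage sketch of that idea, using uint32_t instead of int so the member sizes are guaranteed to match a 32-bit float (this is my example, not the answer's code):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

union convertor32 {
    uint32_t asInt;
    float    asFloat;
};

int main(void)
{
    union convertor32 c;
    /* exponent field 0x7E gives a value in [0.5, 1.0); OR in some mantissa bits */
    c.asInt = (0x7Eu << 23) | 0x123456u;
    printf("bits 0x%08" PRIX32 " -> float %f\n", c.asInt, c.asFloat);
    return 0;
}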
So you seem to want to generate floating point numbers between 0.5 and 1.0 judging from your code.
Assuming that your microcontroller has a standard C library with floating point support, you can do this all standards compliant without actually involving any floating point operations, all you need is the ldexp function that itself doesn't actually do any floating point math.
This would look something like this:
return ldexpf((1 << 23) + random_thing_smaller_than_23_bits(), -24);
The trick here is that we happen to know that IEEE754 binary32 floating point numbers have integer precision between 2^23 and 2^24 (I could be off-by-one here, double check please, I'm translating this from some work I've done on doubles). So the compiler should know how to convert that number to a float trivially. Then ldexp multiplies that number by 2^-24 by just changing the bits in the exponent. No actual floating point operations involved and no undefined behavior, the code is fully portable to any standard C implementation with IEEE754 numbers. Double check the generated code, but a good compiler and c library should not use any floating point instructions here.
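Put together, the approach might look like the sketch below; random_uint32() is a hypothetical stand-in for the poster's random_int():
#include <math.h>
#include <stdint.h>

extern uint32_t random_uint32(void);   /* hypothetical 32-bit PRNG */

/* Build an integer in [2^23, 2^24), which a float represents exactly,
   then scale by 2^-24 with ldexpf to land in [0.5, 1.0). */
float random_float_ldexp(void)
{
    uint32_t mantissa = random_uint32() & 0x7FFFFFu;   /* 23 random bits */
    return ldexpf((float)((1u << 23) + mantissa), -24);
}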
If you want to peek at some experiments I've done around generating random floating point numbers you can peek at this github repo. It's all about doubles, but should be trivially translatable to floats.
Reinterpreting the binary representation of an int to a float would result in major problems:
Many bit patterns in the binary representation of a float do not encode ordinary finite numbers.
Other encodings represent special values, like NaN, +INF, -INF, +0, -0 (sic!), etc.
Also, if that is a random value, even if you catch all the non-numeric representations, reinterpreting the bits directly would yield a very poor random distribution.
If you are working on an MCU without an FPU, you would do better to avoid float altogether. An alternative might be fractional or scaled integers. There are many implementations of algorithms which use float but can easily be converted to fixed-point types with acceptable loss of precision (or even none at all). Some might even yield more precision than float: note that single-precision float has only 23 bits of mantissa, while an int32 has 31 bits (+1 sign bit for either), and the same applies to a fractional or fixed-scaled int.
Note that the Embedded C technical report (ISO/IEC TR 18037) specifies optional fixed-point types such as _Fract and _Accum. You might want to research those.
Edit:
According to your comments, you seem to want to convert the int to a float in the range 0..<1. For that, you can assemble the float using bit operations on a uint32_t (e.g. the original value). You just need to follow the IEEE format (presuming your toolchain does comply with the C standard); see Wikipedia.
The result (still a uint32_t) can then be reinterpreted by a union or pointer as described by others already, as sketched below. Pack that into a system-dependent, well-commented library and bury it deep. Do not forget to check endianness and alignment (likely both the same for float and uint32_t, but important for the bit-ops).
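One common way to do this, sketched here under the assumption of IEEE-754 binary32 and a hypothetical random_uint32() source: set the exponent field so the bits encode a value in [1,2), then subtract 1.0f to land in [0,1).
#include <stdint.h>
#include <string.h>

extern uint32_t random_uint32(void);   /* hypothetical 32-bit PRNG */

float random_unit_float(void)
{
    /* exponent 127 (0x3F800000) plus 23 random mantissa bits encodes [1,2) */
    uint32_t bits = 0x3F800000u | (random_uint32() & 0x007FFFFFu);
    float f;
    memcpy(&f, &bits, sizeof f);       /* well-defined bit reinterpretation */
    return f - 1.0f;                   /* map [1,2) to [0,1) */
}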
I am trying to understand the maximum value that I can store in C. I tried doing printf("%f", pow(2, x)). The answer holds good until x = 1023. It says Inf when x = 1024.
I am sorry that it is a basic question but I am trying to understand how C assigns datatypes' sizes based on my machine.
I have a Mac (64-bit processor). A clear understanding that I have is that my processor, being a 64-bit one, will be able to do calculations up to the value 2^64. Clearly pow(2, 1023) is greater than that. But my program is working fine till x = 1023. How is this possible? Does the GNU compiler have something to do with this?
If this is a duplicate of other question kindly give the link.
In C the pow() function returns a double, and the double type is typically a 64-bit IEEE format representation of a floating point number.
The basic idea of floating point is to express a number in the same general way as e.g. 1.234×10^56. Here you have a mantissa 1.234 and an exponent 56. C++, and probably also C, allows decimal representation for floating point numbers (but not for integer types), but in practice the internal representation will be binary, with a power of 2 rather than a power of 10.
The limit you ran up against was the supported range for the exponent in your compiler's representation of double numbers; probably 64-bit IEEE 754.
The limits of the various built-in integral numerical types are available as symbolic constants from <limits.h>. The limits of the built-in floating point types are available as symbolic constants from <float.h>. See the table over at cppreference.com for more details.
In C++ these limits are also available via the numeric_limits class template from <limits>.
"64-bit processor" typically means that it can deal with integers that contain at most 64 bits at a time (i.e. in a single instruction), not that it can only process numbers with 64 binary digits or less. Using arbitrary precision arithmetic you can do calculations on numbers that are arbitrarily large, provided that you have enough memory (and time), just like how us humans can do operations on big values with only 10 fingers. Read more here: What is the biggest number you can generate using a 64-bit processor?
However pow(2, 1023) is a little bit different. It's not an integer but a floating-point number (of type double in C) represented by a sign, a mantissa and an exponent, like this: (-1)^sign × 1.mantissa × 2^1023. Not all the digits are stored, so it's only accurate to the first few digits. However most systems use binary floating-point types, so they can store the precise value of a power of 2 up to a large exponent depending on the exponent range. Most modern systems' floating-point types conform to the IEEE-754 standard, with double mapping to binary64 (double precision), therefore the maximum value will be
2^1023 × (1 + (1 − 2^−52)) ≈ 1.7976931348623157 × 10^308
The maximum value for a double is DBL_MAX. This is defined by <float.h> in C, or <cfloat> in C++. The numeric value may vary across systems, but you can always refer to it by the macro DBL_MAX.
You can print this:
printf("%f\n", DBL_MAX);
The integer data types all have similar macros defined in <limits.h>: e.g. ULLONG_MAX is the biggest value for unsigned long long. If printing with printf make sure to use the correct format specifier.
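A short sketch printing a few of these limits with matching format specifiers:
#include <float.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("DBL_MAX    = %e\n",   DBL_MAX);
    printf("FLT_MAX    = %e\n",   FLT_MAX);     /* float argument promoted to double */
    printf("INT_MAX    = %d\n",   INT_MAX);
    printf("ULLONG_MAX = %llu\n", ULLONG_MAX);
    return 0;
}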
I'm trying to write unit tests for some simple vector math functions that operate on arrays of single precision floating point numbers. The functions use SSE intrinsics and I'm getting false positives (at least I think) when running the tests on a 32-bit system (the tests pass on 64-bit). As the operation runs through the array, I accumulate more and more round off error. Here is a snippet of unit test code and output (my actual question(s) follow):
Test Setup:
static const int N = 1024;
static const float MSCALAR = 42.42f;
static void setup(void) {
    input = _mm_malloc(sizeof(*input) * N, 16);
    ainput = _mm_malloc(sizeof(*ainput) * N, 16);
    output = _mm_malloc(sizeof(*output) * N, 16);
    expected = _mm_malloc(sizeof(*expected) * N, 16);
    memset(output, 0, sizeof(*output) * N);
    for (int i = 0; i < N; i++) {
        input[i] = i * 0.4f;
        ainput[i] = i * 2.1f;
        expected[i] = (input[i] * MSCALAR) + ainput[i];
    }
}
My main test code then calls the function to be tested (which does the same calculation used to generate the expected array) and checks its output against the expected array generated above. The check is for closeness (within 0.0001) not equality.
Sample output:
0.000000 0.000000 delta: 0.000000
44.419998 44.419998 delta: 0.000000
...snip 100 or so lines...
2043.319946 2043.319946 delta: 0.000000
2087.739746 2087.739990 delta: 0.000244
...snip 100 or so lines...
4086.639893 4086.639893 delta: 0.000000
4131.059570 4131.060059 delta: 0.000488
4175.479492 4175.479980 delta: 0.000488
...etc, etc...
I know I have two problems:
On 32-bit machines, differences between 387 and SSE floating point arithmetic units. I believe 387 uses more bits for intermediate values.
Non-exact representation of my 42.42 value that I'm using to generate expected values.
So my question is, what is the proper way to write meaningful and portable unit tests for math operations on floating point data?
*By portable I mean should pass on both 32 and 64 bit architectures.
Per a comment, we see that the function being tested is essentially:
for (int i = 0; i < N; ++i)
    D[i] = A[i] * b + C[i];
where A[i], b, C[i], and D[i] all have type float. When referring to the data of a single iteration, I will use a, c, and d for A[i], C[i], and D[i].
Below is an analysis of what we could use for an error tolerance when testing this function. First, though, I want to point out that we can design the test so that there is no error. We can choose the values of A[i], b, C[i], and D[i] so that all the results, both final and intermediate results, are exactly representable and there is no rounding error. Obviously, this will not test the floating-point arithmetic, but that is not the goal. The goal is to test the code of the function: Does it execute instructions that compute the desired function? Simply choosing values that would reveal any failures to use the right data, to add, to multiply, or to store to the right location will suffice to reveal bugs in the function. We trust that the hardware performs floating-point correctly and are not testing that; we just want to test that the function was written correctly. To accomplish this, we could, for example, set b to a power of two, A[i] to various small integers, and C[i] to various small integers multiplied by b. I could detail limits on these values more precisely if desired. Then all results would be exact, and any need to allow for a tolerance in comparison would vanish.
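One way to realize that exact-value idea is sketched below; it is an illustration of the suggestion, not code from the question or answer. With b a power of two, the inputs small integers, and C[i] a small integer times b, every product, sum, and stored result is exactly representable, so the comparison can demand strict equality.
/* b is a power of two; inputs are small integers; C[i] is a small integer
   times b, so every product, sum, and stored result is exact (all below 2^24).
   Call e.g. as setup_exact(input, ainput, expected, N, 64.0f) with the
   question's arrays. */
static void setup_exact(float *in, float *ain, float *exp_out, int n, float b)
{
    for (int i = 0; i < n; i++) {
        in[i]      = (float)(i % 256);          /* small integers */
        ain[i]     = (float)(i % 16) * b;       /* small integer times b */
        exp_out[i] = in[i] * b + ain[i];        /* exact result */
    }
}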
That aside, let us proceed to error analysis.
The goal is to find bugs in the implementation of the function. To do this, we can ignore small errors in the floating-point arithmetic, because the kinds of bugs we are seeking almost always cause large errors: The wrong operation is used, the wrong data is used, or the result is not stored in the desired location, so the actual result is almost always very different from the expected result.
Now the question is how much error should we tolerate? Because bugs will generally cause large errors, we can set the tolerance quite high. However, in floating-point, “high” is still relative; an error of one million is small compared to values in the trillions, but it is too high to discover errors when the input values are in the ones. So we ought to do at least some analysis to decide the level.
The function being tested will use SSE intrinsics. This means it will, for each i in the loop above, either perform a floating-point multiply and a floating-point add or will perform a fused floating-point multiply-add. The potential errors in the latter are a subset of the former, so I will use the former. The floating-point operations for a*b+c do some rounding so that they calculate a result that is approximately a•b+c (interpreted as an exact mathematical expression, not floating-point). We can write the exact value calculated as (a•b•(1+e0)+c)•(1+e1) for some errors e0 and e1 with magnitudes at most 2^-24, provided all the values are in the normal range of the floating-point format. (2^-24 is the maximum relative error that can occur in any correctly rounded elementary floating-point operation in round-to-nearest mode in the IEEE-754 32-bit binary floating-point format. Rounding in round-to-nearest mode changes the mathematical value by at most half the value of the least significant bit in the significand, which is 23 bits below the most significant bit.)
Next, we consider what value the test program produces for its expected value. It uses the C code d = a*b + c;. (I have converted the long names in the question to shorter names.) Ideally, this would also calculate a multiply and an add in IEEE-754 32-bit binary floating-point. If it did, then the result would be identical to the function being tested, and there would be no need to allow for any tolerance in comparison. However, the C standard allows implementations some flexibility in performing floating-point arithmetic, and there are non-conforming implementations that take more liberties than the standard allows.
A common behavior is for an expression to be computed with more precision than its nominal type. Some compilers may calculate a*b + c using double or long double arithmetic. The C standard requires that results be converted to the nominal type in casts or assignments; extra precision must be discarded. If the C implementation is using extra precision, then the calculation proceeds: a*b is calculated with extra precision, yielding exactly a•b, because double and long double have enough precision to exactly represent the product of any two float values. A C implementation might then round this result to float. This is unlikely, but I allow for it anyway. However, I also dismiss it because it moves the expected result to be closer to the result of the function being tested, and we just need to know the maximum error that can occur. So I will continue, with the worse (more distant) case, that the result so far is a•b. Then c is added, yielding (a•b+c)•(1+e2) for some e2 with magnitude at most 2^-53 (the maximum relative error of normal numbers in the 64-bit binary format). Finally, this value is converted to float for assignment to d, yielding (a•b+c)•(1+e2)•(1+e3) for some e3 with magnitude at most 2^-24.
Now we have expressions for the exact result computed by a correctly operating function, (a•b•(1+e0)+c)•(1+e1), and for the exact result computed by the test code, (a•b+c)•(1+e2)•(1+e3), and we can calculate a bound on how much they can differ. Simple algebra tells us the exact difference is a•b•(e0+e1+e0•e1-e2-e3-e2•e3)+c•(e1-e2-e3-e2•e3). This is a simple function of e0, e1, e2, and e3, and we can see its extremes occur at endpoints of the potential values for e0, e1, e2, and e3. There are some complications due to interactions between possibilities for the signs of the values, but we can simply allow some extra error for the worst case. A bound on the maximum magnitude of the difference is |a•b|•(3•2^-24+2^-53+2^-48)+|c|•(2•2^-24+2^-53+2^-77).
Because we have plenty of room, we can simplify that, as long as we do it in the direction of making the values larger. E.g., it might be convenient to use |a•b|•3.001•2^-24+|c|•2.001•2^-24. This expression should suffice to allow for rounding in floating-point calculations while detecting nearly all implementation errors.
Note that the expression is not proportional to the final value, a*b+c, as calculated either by the function being tested or by the test program. This means that, in general, tests using a tolerance relative to the final values calculated by the function being tested or by the test program are wrong. The proper form of a test should be something like this:
double tolerance = fabs(input[i] * MSCALAR) * 0x3.001p-24 + fabs(ainput[i]) * 0x2.001p-24;
double difference = fabs(output[i] - expected[i]);
if (! (difference < tolerance))
// Report error here.
In summary, this gives us a tolerance that is larger than any possible differences due to floating-point rounding, so it should never give us a false positive (report the test function is broken when it is not). However, it is very small compared to the errors caused by the bugs we want to detect, so it should rarely give us a false negative (fail to report an actual bug).
(Note that there are also rounding errors computing the tolerance, but they are smaller than the slop I have allowed for in using .001 in the coefficients, so we can ignore them.)
(Also note that ! (difference < tolerance) is not equivalent to difference >= tolerance. If the function produces a NaN, due to a bug, any comparison yields false: both difference < tolerance and difference >= tolerance yield false, but ! (difference < tolerance) yields true.)
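A minimal loop tying the tolerance check together, again a sketch rather than the original test harness; it takes the question's arrays and scalar as parameters, e.g. check_results(input, ainput, output, expected, N, MSCALAR):
#include <math.h>
#include <stdio.h>

/* Returns the number of elements whose difference exceeds the tolerance
   derived above. */
static int check_results(const float *in, const float *ain,
                         const float *out, const float *exp_vals,
                         int n, float scalar)
{
    int failures = 0;
    for (int i = 0; i < n; i++) {
        double tolerance  = fabs(in[i] * scalar) * 0x3.001p-24
                          + fabs(ain[i])         * 0x2.001p-24;
        double difference = fabs(out[i] - exp_vals[i]);
        if (!(difference < tolerance)) {     /* also catches NaN results */
            printf("FAIL at %d: got %f, expected %f, delta %f\n",
                   i, out[i], exp_vals[i], difference);
            failures++;
        }
    }
    return failures;
}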
On 32-bit machines, differences between 387 and SSE floating point arithmetic units. I believe 387 uses more bits for intermediate values.
If you are using GCC as 32-bit compiler, you can tell it to generate SSE2 code still with options -msse2 -mfpmath=sse. Clang can be told to do the same thing with one of the two options and ignores the other one (I forget which). In both cases the binary program should implement strict IEEE 754 semantics, and compute the same result as a 64-bit program that also uses SSE2 instructions to implement strict IEEE 754 semantics.
Non-exact representation of my 42.42 value that I'm using to generate expected values.
The C standard says that a literal such as 42.42f must be converted to either the floating-point number immediately above or immediately below the number represented in decimal. Moreover, if the literal is representable exactly as a floating-point number of the intended format, then this value must be used. However, a quality compiler (such as GCC) will give you(*) the nearest representable floating-point number, of which there is only one, so again, this is not a real portability issue as long as you are using a quality compiler (or at the very least, the same compiler).
Should this turn out to be a problem, a solution is to write an exact representation of the constants you intend. Such an exact representation can be very long in decimal format (up to 750 decimal digits for the exact representation of a double) but is always quite compact in C99's hexadecimal format: 0x1.535c28p+5 for the exact representation of the float nearest to 42.42. A recent version of the static analysis platform for C programs Frama-C can provide the hexadecimal representation of all inexact decimal floating-point constants with option -warn-decimal-float:all.
(*) barring a few conversion bugs in older GCC versions. See Rick Regan's blog for details.
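A quick sketch to confirm that, on your compiler, the decimal literal and its exact hexadecimal counterpart produce the same float (assuming correctly rounded decimal conversion):
#include <stdio.h>

int main(void)
{
    float decimal_form = 42.42f;
    float hex_form     = 0x1.535c28p+5f;   /* exact value of the float nearest 42.42 */

    printf("equal: %d\n", decimal_form == hex_form);
    printf("%.10f\n%.10f\n", decimal_form, hex_form);
    return 0;
}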
Ignoring why I would want to do this, the 754 IEEE fp standard doesn't define the behavior for the following:
float h = NAN;
printf("%x %d\n", (int)h, (int)h);
Gives: 80000000 -2147483648
Basically, regardless of what value of NAN I give, it outputs 80000000 (hex) or -2147483648 (dec). Is there a reason for this and/or is this correct behavior? If so, how come?
The way I'm giving it different values of NaN is described here:
How can I manually set the bit value of a float that equates to NaN?
So basically, are there cases where the payload of the NaN affects the output of the cast?
Thanks!
The result of a cast of a floating point number to an integer is undefined/unspecified for values not in the range of the integer variable (±1 for truncation).
Clause 6.3.1.4:
When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.
If the implementation defines __STDC_IEC_559__, then for conversions from a floating-point type to an integer type other than _Bool:
if the floating value is infinite or NaN or if the integral part of the floating value exceeds the range of the integer type, then the "invalid" floating-point exception is raised and the resulting value is unspecified.
(Annex F [normative], point 4.)
If the implementation doesn't define __STDC_IEC_559__, then all bets are off.
There is a reason for this behavior, but it is not something you should usually rely on.
As you note, IEEE-754 does not specify what happens when you convert a floating-point NaN to an integer, except that it should raise an invalid operation exception, which your compiler probably ignores. The C standard says the behavior is undefined, which means not only do you not know what integer result you will get, you do not know what your program will do at all; the standard allows the program to abort or get crazy results or do anything. You probably executed this program on an Intel processor, and your compiler probably did the conversion using one of the built-in instructions. Intel specifies instruction behavior very carefully, and the behavior for converting a floating-point NaN to a 32-bit integer is to return 0x80000000, regardless of the payload of the NaN, which is what you observed.
Because Intel specifies the instruction behavior, you can rely on it if you know the instruction used. However, since the compiler does not provide such guarantees to you, you cannot rely on this instruction being used.
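If you want a portable way to observe this rather than relying on the Intel result, you can test for the "invalid" exception with <fenv.h>. A sketch follows; the conversion itself remains undefined by plain C, but Annex F implementations raise FE_INVALID and return an unspecified value.
#include <fenv.h>
#include <math.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    float h = NAN;

    feclearexcept(FE_INVALID);
    volatile int i = (int)h;             /* NaN-to-int conversion */
    if (fetestexcept(FE_INVALID))
        printf("invalid exception raised; result %d is unspecified\n", i);
    return 0;
}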
First, NaN covers everything not considered an ordinary floating-point number according to the IEEE standard.
So it can be several things. In the compiler I work with there are NAN and -NAN, so it's not about only one value.
Second, every compiler has its isnan set of functions to test for this case, so the programmer doesn't have to deal with the bits himself. To summarize, I don't think peeking at the value makes any difference. You might peek at the value to see its IEEE construction (sign, mantissa and exponent), but, again, each compiler provides its own functions (or rather, its own library) to deal with it.
I do have more to say about your testing, however.
float h = NAN;
printf("%x %d\n", (int)h, (int)h);
The cast you did truncates the float when converting it to an int. If you want to get the integer whose bits represent the float, do the following:
printf("%x %d\n", *(int *)&h, *(int *)&h);
That is, you take the address of the float, then refer to it as a pointer to int, and eventually take the int value. This way the bit representation is preserved.
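Note that the pointer cast works on many compilers but violates strict aliasing; a memcpy (or a union, as mentioned in other answers here) is the well-defined way to view the NaN's bit pattern, payload included. A sketch:
#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float h = NAN;
    uint32_t bits;

    memcpy(&bits, &h, sizeof bits);      /* copy the float's bytes into an integer */
    printf("0x%08" PRIX32 "\n", bits);
    return 0;
}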