-NaN in printf in C

I am currently experiencing issues with some calculations in a raytracer "engine".
info->eyex = -1000.0;
info->eyey = 0.0;
printf("%f et %f et %f et %f et %f\n", info->eyex, info->vx, info->eyey, info->vy, info->vz);
For example, in that piece of code the values seem fine, but printing info->eyex gives me -nan.
It's weird, because I set the value just beforehand.

My psychic sense tells me that eyex is declared as an int, not as a double as it should be. When you assign -1000.0 to it, it gets truncated to the integer -1000 (your compiler should give you a warning here), which is represented in binary as 0xFFFFFC18 using two's complement notation. Likewise, assuming that eyey is also an integer, its value of 0 is represented in binary as 0x00000000.
When you pass eyex, eyey, and the other parameters to printf, they get pushed on the stack so that they lie in memory with increasing addresses. So immediately before the call instruction to call the subroutine, the stack frame looks something like this:
<top of stack>
0xFFFFFC18 ; eyex
(4-8 bytes) ; vx
0x00000000 ; eyey
(4-8 bytes) ; vy
(4-8 bytes) ; vz
When printf sees the %f format specifier, that says "take 8 bytes off of the stack, interpret them as a double value, and print out that double value". So it sees the value 0xFFFFFC18xxxxxxxx, where the xxxxxxxx is the value of info->vx. Regardless of that value, this is the IEEE 754 representation of NaN, or "not a number": the exponent field is all ones and the mantissa is nonzero. It also has the sign bit set, so some implementations choose to print it as "negative NaN", though this has the same semantics as a regular NaN.
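As a quick check of that bit-level claim, here is a small sketch (assuming 64-bit IEEE 754 doubles and a little-endian host; the low word is arbitrary, so it's zeroed here):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    /* -1000 as a 32-bit int, placed in the high word of a double's bit
       pattern; the low word (the xxxxxxxx above) is arbitrary */
    uint64_t bits = 0xFFFFFC1800000000u;
    double d;
    memcpy(&d, &bits, sizeof d); /* reinterpret the bits without UB */
    printf("%f (isnan: %d)\n", d, isnan(d)); /* prints -nan (isnan: 1) on glibc */
    return 0;
}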
Your compiler should also be warning you here that you're passing the wrong types of arguments to printf—it's expecting a double but you're not passing it that. GCC enables these warnings with -Wall, which I highly recommend enabling.
So, the solution is to declare eyex to be of type double (and presumably the other variables as well, if they're not already). Alternatively, if you don't control the definition of eyex et al (say, because they're part of a structure from a third-party library), then you should print them with the %d conversion instead of %f, and assign them integer values such as -1000 and 0 rather than floating-point values such as -1000.0 and 0.0.
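A minimal sketch of that fix, assuming a hypothetical struct along the lines the question implies (the real field list isn't shown):

#include <stdio.h>

typedef struct s_info {
    double eyex; /* double, not int, so -1000.0 is stored exactly */
    double eyey;
    double vx;
    double vy;
    double vz;
} t_info;

int main(void)
{
    t_info info = {0};

    info.eyex = -1000.0;
    info.eyey = 0.0;
    /* every %f now matches a double argument, so no more -nan */
    printf("%f et %f et %f et %f et %f\n",
           info.eyex, info.vx, info.eyey, info.vy, info.vz);
    return 0;
}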

Just to confirm this, though I don't know exactly what triggers the behavior. printf calls can be analyzed at compile time, format string included, so presumably something is being (wrongly) assumed about your variable. Even though %f should work for doubles and floats alike, it seems it doesn't always (at least with gcc 4.4.5, which is the one I'm using).
Try assigning the value to another variable and then passing that to printf. Although ugly, that solved the problem for me.

Related

I'm curious about the output of C's printf format specifiers

I have a question that came up while studying C.
printf("%d, %f \n", 120, 125.12);
Its output is:
120, 125.120000
And this code:
printf("%d, %f \n", 120.1, 125.12);
outputs:
1717986918, 0.000000
Of course, I understand that %d printed a strange value because I gave the integer conversion the wrong type of argument. But I don't understand why the %f after it prints 0.000000. Does one specifier affect the value printed by the specifier that follows it?
I'm asking because it keeps coming out like this even when I pass in different values.
printf is a variadic function, meaning its argument types and counts are variable. As such, they must be extracted according to their type. Extraction of a given argument is dependent on the proper extraction of the arguments that precede it. Different argument types have different sizes, alignments, etc.
If you use an incorrect format specifier for a printf argument, then it throws printf out of sync. The arguments after the one that was incorrectly extracted will not, in general, be extracted correctly.
In your example, it probably extracted only a portion of the first double argument that was passed, since it was expecting an int argument. After that it was out of sync and improperly extracted the second double argument. This is speculative though, and varies from one architecture to another. The behavior is undefined.
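To make the extraction mechanics concrete, here is a toy variadic printer (the function my_print and its one-letter format codes are made up for illustration, not a real API); like printf, it has to read each argument with its promoted type via va_arg:

#include <stdarg.h>
#include <stdio.h>

static void my_print(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt == 'd')
            printf("%d ", va_arg(ap, int));    /* consumes an int */
        else if (*fmt == 'f')
            printf("%f ", va_arg(ap, double)); /* float args arrive promoted to double */
    }
    va_end(ap);
    printf("\n");
}

int main(void)
{
    my_print("df", 120, 125.12); /* types match: prints 120 125.120000 */
    /* my_print("df", 120.1, 125.12) would be out of sync from the first
       argument on, exactly like the printf call in the question */
    return 0;
}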
why %f is 0.000000 after that.
According to the C standard, because %d expects an int and you passed a double, the behavior is undefined and anything is allowed to happen.
Anyway:
I used http://www.binaryconvert.com/convert_double.html to convert doubles to bits:
120.1 === 0x405E066666666666
125.12 === 0x405F47AE147AE148
I guess that your platform is x86-ish: little endian, with an ILP32 data model.
The %d printf format specifier is for int, and sizeof(int) is 4 bytes, or 32 bits, in ILP32. Because the architecture and its stack are little endian, the first 4 bytes on the stack are 0x66666666, and their decimal value, 1717986918, is what gets printed.
That leaves three 32-bit little-endian words on the stack; split up, they are 0x405E0666 0x147AE148 0x405F47AE.
The %f printf format specifier is for double, and sizeof(double) on your platform is 8 bytes, or 64 bits. We already consumed 4 bytes from the stack, and now we read 8 more. Remembering the endianness, we get 0x147AE148405E0666, which is interpreted as a double. Again using the converter site, that hex pattern corresponds to the double 5.11013578783948654276686949883E-210, a very, very small number. Because %f prints only 6 digits after the decimal point by default, and the first nonzero digit of this number lies some 210 places out, only zeros are printed. With the %g printf format specifier, the number would be printed as 5.11014e-210.
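You can reproduce those bit patterns yourself instead of using the converter site; a sketch, assuming 64-bit doubles:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static uint64_t double_bits(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits); /* well-defined way to read the bytes */
    return bits;
}

int main(void)
{
    printf("120.1  == 0x%016llX\n", (unsigned long long)double_bits(120.1));
    printf("125.12 == 0x%016llX\n", (unsigned long long)double_bits(125.12));
    /* prints 0x405E066666666666 and 0x405F47AE147AE148 */
    return 0;
}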
This can be attributed to the fact that the printf function is not type-safe, i.e. there is no checked connection between the specifiers in the format string and the arguments actually passed.
Default argument promotion converts small integer arguments to int (4 bytes) and float arguments to double (8 bytes). As the types are inferred solely from the format string, a mismatch shifts how the data is read. The 0.000000 most likely appears because the bits that happen to be loaded correspond to a floating-point value extremely close to zero (a tiny biased exponent).
Try with 1.0, 1.e34.

Why does an integer interpreted as a double render zero?

printf("%f", 20); results in the output 0.000000, not 20.000000. I'm guessing this has to do with how an int is represented in memory and how a double is represented in memory. Surprisingly for me, no matter how I alter 20, for example by making the number larger, the output is still 0.000000. Could someone please explain the underlying mechanics of this?
Most probably you are compiling your code on a platform/ABI where, even for varargs functions, data is passed in registers, and in particular in different registers for integer and floating point values. x86_64 on Linux/OS X behaves like that.
The caller has an integer to pass, so it puts it into rsi; on the other side, printf expects a floating point value, so it tries to read it from xmm0. No matter how you change your integer argument, printf will be unaffected: it will just print whatever happens to be in xmm0 at the moment of the call.
You can actually check if this is the case by changing your call to:
printf("%f", 20, 123.45);
if it's working as I described, you should see 123.45 printed (the caller puts 123.45 into xmm0, as it is the first floating point parameter passed to the function; printf behaves as before, but this time finds a different value in xmm0).
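A self-contained version of that experiment (still undefined behavior; the outcome noted below is just what the x86_64 System V ABI happens to do, not anything the C language promises):

#include <stdio.h>

int main(void)
{
    /* 20 goes into an integer register (esi), 123.45 into xmm0; since
       printf reads %f from xmm0, this typically prints 123.450000 on
       x86_64 Linux/OS X */
    printf("%f\n", 20, 123.45);
    return 0;
}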
First of all, this is undefined behavior. %f expects an argument of type double. Passing an int is an incompatible type, and hence it invokes UB.
Quoting C11, chapter §7.21.6.1, fprintf()
f,F
A double argument representing a floating-point number [...]
A float is also allowed because, by the default argument promotion rules, it gets promoted to a double, which is the expected type there; so either a double or a float is acceptable, but not an int.
...and that's all. UB is, well, UB. You cannot justify anything about what code producing UB does.
That said, with proper warning levels enabled (and warnings treated as errors, e.g. -Wall -Werror in GCC), the code would not compile at all. If you do choose to make it compile and produce a binary, you can see different code generated for different platforms. One such case is explained in the other answer by Matteo Italia, covering the x86_64 architecture on Linux/OS X.
The problem is that the literal 20 has type int. Your options are either to declare a floating-point variable and pass that in, or to add a cast.
e.g.
printf("%f", (double)20);

Different rounding between assignment and printf

I have a program with two variables of type int.
int num;
int other_num;
/* change the values of num and other_num with conditional increments */
printf ("result = %f\n", (float) num / other_num);
float r = (float) num / other_num;
printf ("result = %f\n", r);
The value written in the first printf is different from the value written by the second printf (by 0.000001, when printed with 6 decimal places).
Before the division, the values are:
num = 10201
other_num = 2282
I've printed the resulting numbers to 15 decimal places. Those numbers diverge in the 7th decimal place, which explains the difference in the 6th.
Here are the numbers with 15 decimal places:
4.470201577563540
4.470201492309570
I'm aware of floating point rounding issues, but I was expecting the calculated result to be the same when performed in the assignment and in the printf argument.
Why is this expectation incorrect?
Thanks.
Probably because FLT_EVAL_METHOD is something other than 0 on your system.
In the first case, the expression (float) num / other_num has nominal type float, but is possibly evaluated at higher precision (if you're on x86, probably long double). It's then converted to double for passing to printf.
In the second case, you assign the result to a variable of type float, which forces dropping of excess precision. The float is then promoted to double when passed to printf.
Of course without actual numbers, this is all just guesswork. If you want a more definitive answer, provide complete details on your problem.
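If you want to test that guess, here is a sketch that prints FLT_EVAL_METHOD and both results, using the numbers from the question:

#include <stdio.h>
#include <float.h>

int main(void)
{
    int num = 10201;
    int other_num = 2282;

    /* 0: evaluate in the declared type; 1: evaluate float operations in
       double; 2: evaluate everything in long double (classic x87) */
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);

    printf("result = %.15f\n", (float) num / other_num); /* may carry excess precision */
    float r = (float) num / other_num; /* assignment forces rounding to float */
    printf("result = %.15f\n", r);
    return 0;
}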
The point is where the result of the expression actually lives while the program executes. C values can live in memory or only in registers, when the compiler decides that this kind of optimization is possible in the specific case.
In the first printf, the expression result stays in a register, since the value is only used within the same statement; the compiler concludes (correctly) that it would be useless to store it anywhere less volatile. As a result, the value is carried at double or long double precision, depending on the architecture.
In the second case, the compiler performs no such optimization: the value is stored in a variable on the stack, which is memory, not a register, and is therefore rounded to float's 24-bit significand.
More examples are provided by streflop and its documentation.

A variable is declared as float f = 5.2, but %d is used to print it; the output isn't 5, it's a garbage value

/* Compiled using GCC on CentOS 5 */
#include <stdio.h>
int main(void)
{
    float f = 5.2;
    printf("f = %d\n", f);
    return 0;
}
/* Output is not 5; it prints some garbage value */
Why is the output not 5? What is the in-memory representation of float values?
"%d" prints a decimal integer. This means printf is interpreting what gets passed as an integer, not a float. It's not doing any smart conversion and I'm pretty sure this is undefined behaviour.
The in memory representation of a float is implementation specific. Most implementations use IEEE 754, but this is not guaranteed at all.
For the record using "-Wall -Wextra" with gcc would have picked this mistake up as a warning. If you want to print it as an integer you must cast it too:
printf("f = %d\n",(int)f);
Your code is not printing 5 because you're not giving it 5; you're giving it 5.2. 5 is an integer value and 5.2 is a floating-point value. The first is typically encoded using two's complement, while the second is typically encoded using IEEE 754. (Other encodings are possible and even occasionally in use, but these two are the ones you're most likely to encounter.)
If you tell the computer you're giving it an integer (%d) and then proceed to hand it a floating-point value (5.2), getting garbage is exactly what to expect: it takes the bits of the IEEE floating-point representation and reads them as if they were an integer. (It's the old formula: garbage in, garbage out.) If you stop lying to the computer, you'll get better results.
The conversion you want in your printf call is %f instead of %d; with it, you're no longer lying to the computer about the type of the data being passed in. That said, to head off your inevitable next question, be sure to read an explanation of floating point so you understand why your floating-point numbers aren't what you think they are.
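As a taste of that floating-point reading, a quick demonstration that 5.2 isn't stored as exactly 5.2 (the values shown assume IEEE 754):

#include <stdio.h>

int main(void)
{
    float f = 5.2f;
    printf("%.20f\n", f);   /* 5.19999980926513671875 */
    printf("%.20f\n", 5.2); /* 5.20000000000000017764 (double is closer) */
    return 0;
}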

Does printf() depend on order of format specifiers?

#include<stdio.h>
main()
{
float x=2;
float y=4;
printf("\n%d\n%f",x/y,x/y);
printf("\n%f\n%d",x/y,x/y);
}
Output:
0
0.000000
0.500000
0
compiled with gcc 4.4.3
The program exited with error code 12
As noted in other answers, this is because of the mismatch between the format string and the type of the argument.
I'll guess that you're using x86 here (based on the observed results).
The arguments are passed on the stack, and x/y, although of type float, will be passed as a double to a varargs function (due to type "promotion" rules).
An int is a 32-bit value, and a double is a 64-bit value.
In both cases you are passing x/y (= 0.5) twice. The representation of this value, as a 64-bit double, is 0x3fe0000000000000. As a pair of 32-bit words, it's stored as 0x00000000 (least significant 32 bits) followed by 0x3fe00000 (most significant 32-bits). So the arguments on the stack, as seen by printf(), look like this:
0x3fe00000
0x00000000
0x3fe00000
0x00000000 <-- stack pointer
In the first of your two cases, the %d causes the first 32-bit value, 0x00000000, to be popped and printed. The %f pops the next two 32-bit values, 0x3fe00000 (least significant 32 bits of 64 bit double), followed by 0x00000000 (most significant). The resulting 64-bit value of 0x000000003fe00000, interpreted as a double, is a very small number. (If you change the %f in the format string to %g you'll see that it's almost 0, but not quite).
In the second case, the %f correctly pops the first double, and the %d pops the 0x00000000 half of the second double, so it appears to work.
When you say %d in the printf format string, you must pass an int value as the corresponding argument. Otherwise the behavior is undefined, meaning that your computer may crash or aliens might knock at your door. The same goes for %f and double.
Yes. Arguments are read from the vararg list to printf in the same order that format specifiers are read.
Both printf statements are invalid, because you're using a format specifier that expects an int but giving it a double (the float expression x/y is promoted to double).
What you are doing is undefined behaviour. What you are seeing is coincidental; printf could write anything.
You must match the exact type when giving printf arguments. You can e.g. cast:
printf("\n%d\n%f", (int)(x/y), x/y);
printf("\n%f\n%d", x/y, (int)(x/y));
This result is not surprising; with the first %d you passed a double where an int was expected.
http://en.wikipedia.org/wiki/Format_string_attack
Something related to my question; it supports Matthew's answer.
