Why does printf print wrong values? - c

Why do I get the wrong values when I print an int using printf("%f\n", myNumber)?
I don't understand why it prints fine with %d, but not with %f. Shouldn't it just add extra zeros?
int a = 1;
int b = 10;
int c = 100;
int d = 1000;
int e = 10000;
printf("%d %d %d %d %d\n", a, b, c, d, e); //prints fine
printf("%f %f %f %f %f\n", a, b, c, d, e); //prints weird stuff

Well, of course it prints "weird" stuff. You are passing in ints, but telling printf you passed in floats. Since these two data types have different and incompatible internal representations, you get "gibberish".
There is no "automatic cast" when you pass variables to a variadic function like printf; the values are passed into the function as the data type they actually are (or promoted to a larger compatible type in some cases).
What you have done is somewhat similar to this:
union {
    int n;
    float f;
} x;
x.n = 10;
printf("%f\n", x.f); /* pass in the binary representation of 10,
                        but treat that same bit pattern as a float,
                        even though the two are incompatible */

If you want to print them as floating-point values, cast them to float before passing them to printf.
printf("%f %f %f %f %f\n", (float)a, (float)b, (float)c, (float)d, (float)e);

a, b, c, d and e aren't floats. printf() is interpreting them as floats, which is why weird stuff gets printed to your screen.

Using an incorrect format specifier in printf() invokes undefined behaviour.
For example:
int n=1;
printf("%f", n); //UB
float x=1.2f;
printf("%d", x); //UB
double y=12.34;
printf("%Lf", y); //UB: %Lf expects a long double
Note: the format specifier for double in printf() is %f; since C99 the l length modifier has no effect with f, so %lf is also valid.
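For reference, a minimal sketch with matching specifiers (the variable names are just illustrative):
#include <stdio.h>

int main(void)
{
    int n = 1;
    float x = 1.2f;       /* promoted to double when passed to printf */
    double y = 12.34;
    long big = 10000L;

    printf("%d\n", n);    /* int                      -> %d  */
    printf("%f\n", x);    /* float (passed as double) -> %f  */
    printf("%f\n", y);    /* double                   -> %f (or %lf) */
    printf("%ld\n", big); /* long int                 -> %ld */
    return 0;
}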

The problem is what happens inside printf. Conceptually, something like the following occurs:
if (specifier is "%f") {
    float *p = (float *)&a;  /* pseudocode: reinterpret the argument's bits as floating point */
    output *p;               /* wrong, because float and int have different binary representations */
}

The way printf and variable arguments work is that the format specifiers in the string, e.g. "%f %f", tell printf the type and thus the size of each argument. By specifying the wrong type for an argument, you confuse it.
Look at stdarg.h for the macros used to handle variable arguments, as in the sketch below.
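Here is a minimal sketch of a printf-like function built on stdarg.h (tiny_printf and its two supported specifiers are made up purely for illustration); it shows that the format character alone decides which type va_arg pulls off the argument list:
#include <stdarg.h>
#include <stdio.h>

/* toy printf: handles only %d and %f, for illustration */
static void tiny_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt == '%' && fmt[1] == 'd') {
            printf("%d", va_arg(ap, int));     /* reads an int-sized argument */
            fmt++;
        } else if (*fmt == '%' && fmt[1] == 'f') {
            printf("%f", va_arg(ap, double));  /* reads a double-sized argument */
            fmt++;
        } else {
            putchar(*fmt);
        }
    }
    va_end(ap);
}

int main(void)
{
    tiny_printf("%d %f\n", 42, 3.5);  /* fine: the types match the specifiers */
    /* tiny_printf("%f\n", 42); would make va_arg read a double where an int was passed */
    return 0;
}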

For "normal" (non variadac functions with all the types specified) the compiler converts integer valued types to floating point types where needed.
That does not happen with variadac arguments, which are always passed "as is".
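A short illustration, using a hypothetical prototyped helper takes_double: with a prototype the compiler converts the int for you; with printf's ellipsis it cannot.
#include <stdio.h>

/* hypothetical helper with a full prototype */
static void takes_double(double d)
{
    printf("%f\n", d);
}

int main(void)
{
    int n = 7;
    takes_double(n);            /* OK: the prototype makes the compiler convert int to double */
    printf("%f\n", (double)n);  /* OK: explicit conversion before the variadic call */
    printf("%f\n", n);          /* WRONG: nothing converts n for the ... arguments */
    return 0;
}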


difference between %fl and %lf in C

I am currently learning about C data types. When I tried to print a double variable, my IDE suggested fl after I typed '%', and the output ended with one extra character after the decimal digits. Compare with %lf, which prints six decimal places in total:
double num=12322;
printf("%lf",num);// result is 12322.000000
printf("%fl",num);// result is 12322.0000001
I've searched in plenty of places, but mostly it's the difference between %f and %lf that gets asked about. Could my situation possibly be the same?
In this call of printf
printf("%lf",num);// result is 12322.000000
the length modifier l in the conversion specifier %lf has no effect.
From the C Standard (7.21.6.1 The fprintf function)
7 The length modifiers and their meanings are:
l (ell) Specifies that a following d, i, o, u, x, or X conversion
specifier applies to a long int or unsigned long int argument; that a
following n conversion specifier applies to a pointer to a long int
argument; that a following c conversion specifier applies to a wint_t
argument; that a following s conversion specifier applies to a pointer
to a wchar_t argument; or has no effect on a following a, A, e, E,
f, F, g, or G conversion specifier.
In this call of printf
printf("%fl",num);// result is 12322.0000001
the character at the end of the output in the comment is actually the letter 'l', not the digit 1 as you think:
// result is 12322.000000l
^^^
the format string "%fl" means that after the double is output according to the conversion specification %f, the letter 'l' is output as an ordinary character.
Note also that with the conversion specifier f you can use one more letter: the upper case 'L'. The conversion specification %Lf serves to output objects of type long double.
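For example, a minimal sketch:
#include <stdio.h>

int main(void)
{
    long double ld = 12322.0L;
    printf("%Lf\n", ld);  /* %Lf matches a long double argument */
    return 0;
}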
I think you actually have a typo in your output....
double num=12322;
printf("%lf",num);// result is 12322.000000
printf("%fl",num);// result is 12322.0000001
is actually
double num=12322;
printf("%lf",num);// result is 12322.000000
printf("%fl",num);// result is 12322.000000l
The C standard says that a float is converted to a double when passed to a variadic function, so %lf and %f are equivalent; %fl is the same as %f... with an l after it.
There are two correct ways of printing a value of type double:
printf("%f", num);
or
printf("%lf", num);
These two have exactly the same effect. In this case, the "l" modifier is effectively ignored.
The reason they have the same effect is that printf is special. printf accepts a variable number of arguments, and for such functions, C always applies the default argument promotions. This means that all integer types smaller than int are promoted to int, and float is promoted to double. So if you write
float f = 1.5;
printf("%f\n", f);
then f is promoted to double before being passed, so inside printf, %f never sees a float, always a double. That is why %f is defined to expect a double, and why it ends up working for both float and double, with no l modifier needed to say which. That's a little confusing, so the Standard says you may put the l there if you want to, but you don't have to, and it doesn't do anything.
(This is all very different, by the way, from scanf, where %f and %lf are totally different, and must be explicitly matched to arguments of type float * versus double *.)
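A quick sketch of that difference (it reads two numbers from standard input):
#include <stdio.h>

int main(void)
{
    float f;
    double d;
    /* scanf performs no default argument promotions: the lengths must match exactly */
    scanf("%f", &f);          /* %f  requires a float *  */
    scanf("%lf", &d);         /* %lf requires a double * */
    printf("%f %f\n", f, d);  /* in printf, both values are printed with plain %f */
    return 0;
}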
I have no idea why your IDE complained about (put a red line under) %lf, and I have no idea what it meant by suggesting, as you said,
fl, lg, l, f,
elf, Alf, ls, sf,
if, la, lo, of
Some of those look like they might be nonstandard, system-specific extensions, but some (especially fl) are nonsense. So, bottom line, it sounds like your IDE's suggestion was confusing, unnecessary, and quite possibly wrong.

The problem about printf function to "output float with %d" in C

I am a newbie to the C language. When I was learning floating point numbers today, I found the following problems.
float TEST= 3.0f;
printf("%x\n",TEST);
printf("%d\n",TEST);
first output:
9c9e82a0
-1667333472
second output:
61ea32a0
1642738336
As shown above, each execution outputs different results. I have read a lot about the IEEE 754 format and still don't understand the reason. I would like to ask if anyone can explain or provide keywords for me to study, thank you.
-----------------------------------Edit-----------------------------------
Thank you for your replies. I know how to print the IEEE 754 bit pattern. However, as Nate Eldredge and chux-Reinstate Monica said, using %x and %d on a float in printf is undefined behavior. If there is no floating-point register in our device, how does it work? Is this described in the C99 specification?
Most of the time, when you call a function with the "wrong kind" (wrong type) of argument, an automatic conversion happens. For example, if you write
#include <stdio.h>
#include <math.h>
printf("%f\n", sqrt(144));
this works just fine. The compiler knows (from the function prototype in <math.h>) that the sqrt function expects an argument of type double. You passed it the int value 144, so the compiler automatically converted that int to double before passing it to sqrt.
But this is not true for the printf function. printf accepts arguments of many different types, and as long as each argument is right for the particular % format specifier it goes with in the format string, it's fine. So if you write
double f = 3.14;
printf("%f\n", f);
it works. But if you write
printf("%d\n", f); /* WRONG */
it doesn't work. %d expects an int, but you passed a double. In this case (because printf is special), there's no good way for the compiler to insert an automatic conversion. So, instead, it just fails to work.
And when it "fails", it really fails. You don't even necessarily get anything "reasonable", like an integer representing the bit pattern of the IEEE-754 floating-point number you thought you passed. If you want to inspect the bit pattern of a float or double, you'll have to do that another way.
If what you really wanted to do was to see the bits and bytes making up a float, here's a completely different way:
float test = 3.14;
unsigned char *p = (unsigned char *)&test;
int i;
printf("bytes in %f:", test);
for(i = 0; i < sizeof(test); i++) printf(" %02x", p[i]);
printf("\n");
There are some issues here with byte ordering ("endianness"), but this should get you started.
To print the hex representation of the float (i.e. how it is represented in memory):
float TEST= 3.0f;
int y=0;
memcpy(&y, &TEST, sizeof(y));
printf("%x\n",y);
printf("%d\n",y);
or
union
{
    float TEST;
    int y;
} uf = {.y = 0};
uf.TEST = 3.0f;
printf("\n%x\n", (unsigned)uf.y);
printf("%d\n", uf.y);
Both examples assume sizeof(float) <= sizeof(int) (if they are not equal, the integer needs to be zeroed first, which is why it starts at 0).
And the result (same for both):
40400000
1077936128
As you can see it is completely different from your one.
https://godbolt.org/z/Kr61x6Kv3

Printing the result of operations on integers and floats

Consider the following C-program:
int main() {
    int a = 2;
    float b = 2;
    float c = 3;
    int d = 3;
    printf("%d %f %d %f %d %f %d %f\n", a/c, a/c, a/d, a/d, b/c, b/c, b/d, b/d);
    printf("%d\n", a/c);
}
The output of this is:
0 0.666667 0 0.666667 2 0.666667 0 0.666667
539648
I can't make sense of this at all. Why does printing a/c as an integer give 0, while b/c gives 2? Aren't all integers promoted to floats in computations involving both floats and integers? So the answer should be 0 in both cases.
In the second line of the output I'm simply printing a/c as an integer, which gives a garbage value for some reason (even though it gives 0 when I print it in the first compound printf statement). Why is this happening?
You have undefined behaviour:
printf("%d %f %d %f %d %f %d %f\n", a/c, a/c, a/d, a/d, b/c, b/c, b/d, b/d);
The format specifier for printf must match the type of the provided argument. As printf doesn't declare a typed parameter list, but only ..., there is no implicit type conversion apart from the default argument promotions.
If you have UB, basically anything can happen.
What is likely to happen is the following:
Depending on the format specifier, printf consumes a certain number of bytes from the calling parameters. This number of bytes matches the specified format type. If the number of bytes does not match the number of bytes passed as an argument, you are out of sync for all successive parameters.
And of course you do incorrect interpretation of the data.
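Assuming the intent was to print each quotient both as an integer and as a floating-point value, a corrected version of the call casts each operand so that every argument matches its specifier:
#include <stdio.h>

int main(void)
{
    int a = 2, d = 3;
    float b = 2, c = 3;

    /* every %d now receives an int, every %f a float/double expression */
    printf("%d %f %d %f %d %f %d %f\n",
           (int)(a / c), a / c,
           a / d,        (float)a / d,
           (int)(b / c), b / c,
           (int)(b / d), b / d);
    printf("%f\n", a / c);  /* a / c is float, promoted to double: use %f */
    return 0;
}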
For starters according to the C Standard the function main without parameters shall be declared like
int main( void )
If you have for example the following declarations
int a =2;
float c = 3;
and then call the function printf the following way
printf( "%d", a / c );
then under the hood the following events occur.
The expression a / c has the type float due to the usual arithmetic conversions.
As the function printf is declared with the ellipsis notation, the default argument promotions are applied to the expression a / c of type float, converting it to the type double.
As a result, this call attempts to output an expression of type double using the conversion specifier %d, which is designed for the type int. Hence the call has undefined behavior.
From the C Standard (7.21.6.1 The fprintf function)
9 If a conversion specification is invalid, the behavior is
undefined.275) If any argument is not the correct type for the
corresponding conversion specification, the behavior is undefined.

Meaning of "%lf" place holder

Here is my small program, where I intentionally put the placeholder %lf in the second printf. Why does the second printf produce the same result as the first printf (both print 1.3)?
#include <stdio.h>
int main()
{
    double f = 1.3;
    long l = 1024L;
    printf("f = %lf", f);
    printf("l = %lf", l);
    return 0;
}
It's undefined behaviour if printf() has a format specifier mismatch: %lf expects a double, but you are passing a long int.
C11, 7.21.6.1 The fprintf function
9 If a conversion specification is invalid, the behavior is
undefined.282) If any argument is not the correct type for the
corresponding conversion specification, the behavior is undefined.
That said, what probably happens is that when you call printf() the first time, the value of f is passed in a floating-point register or at a stack location used for doubles. The next time you call printf(), it reads from that same location because of the format specifier %lf, as opposed to reading from where the value of l is stored. If you swapped the order of the printf() calls, you would probably observe different output. But this is all platform specific. Once your program invokes undefined behaviour, anything can happen; you can't expect it to do anything sensible and there is absolutely no guarantee about its behaviour.
Here if you change your code to this:
#include <stdio.h>
int main()
{
    double f = 1.3;
    long l = 1024L;
    printf("f = %lf", f);
    printf("l = %lf", (float)l);
    return 0;
}
you will see that the output is different. When you pass a long to be printed as a double, you should expect undefined behavior.
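If the goal is simply to get a sensible value for l, no cast is needed at all; %ld already matches a long (a small sketch):
#include <stdio.h>

int main(void)
{
    long l = 1024L;
    printf("l = %ld\n", l);         /* %ld matches a long int */
    printf("l = %f\n", (double)l);  /* or convert explicitly if a double is wanted */
    return 0;
}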
You have a specifier mismatch. The value
long l = 1024L;
is interpreted as a double, and this happens to come out as approximately 1.3 (at least on your PC and mine; this might differ on other architectures, depending on how big a "long" and a "double" are and how they are represented internally).
As for the meaning of the %lf placeholder, you can see in the printf documentation that %f means: decimal floating point. The l length modifier has no influence on the f specifier.
Conclusion: %lf = %f = decimal floating point.

Convert int to double

I ran this simple program, but when I convert from int to double, the result is zero. The sqrt of the zeros then displays negative values. This is an example from an online tutorial, so I'm not sure why this is happening. I tried it on both Windows and Unix.
/* Hello World program */
#include<stdio.h>
#include<math.h>
main()
{
    int i;
    printf("\t Number \t\t Square Root of Number\n\n");
    for (i=0; i<=360; ++i)
        printf("\t %d \t\t\t %d \n", i, sqrt((double) i));
}
Maybe this?
int number;
double dblNumber = (double)number;
The problem is incorrect use of printf format - use %g/%f instead of %d
BTW, if you are wondering what your code did, here is an abridged explanation that may help your understanding:
The printf routine treated the floating-point result of sqrt as an integer. Signed and unsigned integers have their own underlying bit representations (put simply, the way they are 'encoded' in memory, registers etc.). By specifying a format to printf you tell it how to decipher the bit pattern in a specific memory area/register (depending on calling conventions etc.). For example:
unsigned int myInt = 0xFFFFFFFF;
printf( "as signed=[%i] as unsigned=[%u]\n", myInt, myInt );
gives: "as signed=[-1] as unsigned=[4294967295]"
One bit pattern, but treated as signed first and unsigned later. The same applies to your code: you've told printf to treat the bit pattern that 'encodes' the floating-point result of sqrt as an integer. See this:
float myFloat = 8.0;
printf( "%08X\n", *((unsigned int*)&myFloat) );
prints: "41000000"
According to the single-precision floating-point encoding format,
8.0 is simply (-1)^0 * (1 + fraction=0) * 2^(exp=130-127) = 1 * 2^3 = 8.0, but its raw bit pattern printed in hex is just 41000000.
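Here is a small sketch that extracts the sign, exponent, and fraction fields from a single-precision float; it assumes 32-bit IEEE 754 floats and sizeof(float) == sizeof(unsigned int):
#include <stdio.h>
#include <string.h>

int main(void)
{
    float myFloat = 8.0f;
    unsigned int bits;
    memcpy(&bits, &myFloat, sizeof bits);        /* copy the raw bit pattern */

    unsigned int sign     = bits >> 31;          /* 1 bit  */
    unsigned int exponent = (bits >> 23) & 0xFF; /* 8 bits, biased by 127 */
    unsigned int fraction = bits & 0x7FFFFF;     /* 23 bits */

    printf("bits=%08X sign=%u exp=%u (unbiased %d) fraction=%06X\n",
           bits, sign, exponent, (int)exponent - 127, fraction);
    /* for 8.0f this prints: bits=41000000 sign=0 exp=130 (unbiased 3) fraction=000000 */
    return 0;
}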
sqrt() returns a value of type double. You cannot print such a value with the conversion specifier "%d".
Try one of these two alternatives
printf("\t %d \t\t\t %f \n",i, sqrt(i)); /* use "%f" */
printf("\t %d \t\t\t %d \n",i, (int)sqrt(i)); /* cast to int */
The i argument to sqrt() is converted to double implicitly, as long as there is a prototype in scope. Since you included the proper header, there is no need for an explicit conversion.
