#include <stdio.h>

int main()
{
    float a = 12.5;
    printf("%d\n", a);
    printf("%d\n", *(int *)&a);
    return 0;
}
Additionally, how do you interpret the expression *(int *)&a?
It takes the address of a float, casts it to an integer pointer and then dereferences that as an integer. Totally wrong.
There are at least two things wrong here:
Nothing guarantees that an int and a float have the same size, and accessing a float object through an int * violates the strict aliasing rule
The bit representation of a float looks nothing like the representation of a signed int with the same value
So the output of the second printf (if the program doesn't happen to crash, since this is undefined behavior, as per the first point) would likely be some strange, huge number.
The author of this code is trying to take the bits of a float and reinterpret them as an int, but *(int *)&a invokes undefined behavior, and modern compilers will not necessarily do what the author intended. Passing an argument of the wrong type to printf is even worse undefined behavior; it definitely will not work on modern architectures like x86-64, where integer and floating point arguments are passed in different registers. Instead you could use:
#include <stdio.h>
#include <string.h>   /* for memcpy */

int main()
{
    float a = 12.5;
    int b;
    memcpy(&b, &a, sizeof b);   /* copy the bits without violating aliasing rules */
    printf("%d\n", b);
    return 0;
}
to get the desired effect.
My compiler flags this and issues a stern warning.
$ make foolish
cc foolish.c -o foolish
foolish.c: In function ‘main’:
foolish.c:5: warning: format ‘%d’ expects type ‘int’, but argument 2 has type ‘double’
$ ./foolish
1606416928
1095237632
$
What is the reason for wanting to do this?
If you make that line printf("%08X\n", *(unsigned int *)&a); then it prints the bit pattern of the floating point number in hex (still relying on the same undefined pointer cast, though).
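A well-defined way to get the same hex dump is to copy the bits out with memcpy instead of the pointer cast (a sketch assuming unsigned int and float are both 32 bits wide):

#include <stdio.h>
#include <string.h>

int main()
{
    float a = 12.5f;
    unsigned int bits;
    memcpy(&bits, &a, sizeof bits);   /* copy raw bits; no aliasing violation */
    printf("%08X\n", bits);           /* prints 41480000 for IEEE-754 12.5 */
    return 0;
}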
The sizes sizeof(float) and sizeof(double) tend to be some multiple of at least sizeof(short), and often of sizeof(int) (to meet different needs, even the IEEE floating point standards allow multiple floating point formats).
Assuming the previously mentioned undefined pointer behavior does not kill the program, in most environments this will just take the bits of one of those ints and print them out in isolation. The internal layout of floats ensures that this will be vastly different from the original floating point value.
printf( "%d sizeof(short)\n", int(sizeof(short)) );
printf( "%d sizeof(int)\n", int(sizeof(int)) );
printf( "%d sizeof(float)\n", int(sizeof(float)) );
printf( "%d sizeof(double)\n", int(sizeof(double)) );
printf( "%d sizeof(long double)\n", int(sizeof(long double)) );
Related
I am a newbie to the C language. While learning about floating point numbers today, I ran into the following problem.
float TEST= 3.0f;
printf("%x\n",TEST);
printf("%d\n",TEST);
first output:
9c9e82a0
-1667333472
second output:
61ea32a0
1642738336
As shown above, each execution outputs different results. I have read a lot about the IEEE 754 format and still don't understand the reason. Can anyone explain, or provide keywords for me to study? Thank you.
-----------------------------------Edit-----------------------------------
Thank you for your replies. I know how to print the IEEE 754 bit pattern. However, as Nate Eldredge and chux - Reinstate Monica said, using %x and %d in printf with a float argument is undefined behavior. If there is no floating point register on our device, how does it work? Is this described in the C99 specification?
Most of the time, when you call a function with the "wrong kind" (wrong type) of argument, an automatic conversion happens. For example, if you write
#include <stdio.h>
#include <math.h>
printf("%f\n", sqrt(144));
this works just fine. The compiler knows (from the function prototype in <math.h>) that the sqrt function expects an argument of type double. You passed it the int value 144, so the compiler automatically converted that int to double before passing it to sqrt.
But this is not true for the printf function. printf accepts arguments of many different types, and as long as each argument is right for the particular % format specifier it goes with in the format string, it's fine. So if you write
double f = 3.14;
printf("%f\n", f);
it works. But if you write
printf("%d\n", f); /* WRONG */
it doesn't work. %d expects an int, but you passed a double. In this case (because printf is special), there's no good way for the compiler to insert an automatic conversion. So, instead, it just fails to work.
And when it "fails", it really fails. You don't even necessarily get anything "reasonable", like an integer representing the bit pattern of the IEEE-754 floating-point number you thought you passed. If you want to inspect the bit pattern of a float or double, you'll have to do that another way.
If what you really wanted to do was to see the bits and bytes making up a float, here's a completely different way:
float test = 3.14;
unsigned char *p = (unsigned char *)&test;   /* inspecting bytes via unsigned char * is well-defined */
size_t i;

printf("bytes in %f:", test);
for (i = 0; i < sizeof(test); i++)
    printf(" %02x", p[i]);
printf("\n");
There are some issues here with byte ordering ("endianness"), but this should get you started.
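If you are curious which order your machine uses, here is a minimal endianness check (a sketch; inspecting an object's bytes through an unsigned char * is always well-defined):

#include <stdio.h>

int main(void)
{
    unsigned int one = 1;
    unsigned char *p = (unsigned char *)&one;
    /* On a little-endian machine the least significant byte is stored first. */
    printf("%s-endian\n", *p ? "little" : "big");
    return 0;
}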
To print the hex representation of the float (i.e. how it is represented in memory):
float TEST = 3.0f;
int y = 0;
memcpy(&y, &TEST, sizeof(y));
printf("%x\n", y);
printf("%d\n", y);
or
union
{
    float TEST;
    int y;
} uf = {.y = 0};

uf.TEST = 3.0f;
printf("%x\n", (unsigned)uf.y);
printf("%d\n", uf.y);
Both examples assume sizeof(float) <= sizeof(int) (if the sizes are not equal, the integer needs to be zeroed first, as done above).
And the result (same for both):
40400000
1077936128
As you can see, it is completely different from yours.
https://godbolt.org/z/Kr61x6Kv3
I want to know why sizeof doesn't work with different kinds of format specifiers.
I know that sizeof is usually used with the %zu format specifier, but I want to know, for my own knowledge, what happens behind the scenes and why it prints nan when I use it with %f, or a long number when I use it with %lf.
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%d\n", sizeof(a + d)); // prints normal size of expression
printf("%lf\n", sizeof(s)); // prints big number
printf("%f", sizeof(d)); // prints nan
sizeof evaluates to a value of type size_t. The proper specifier for size_t in C99 is %zu. You can use %u on systems where size_t and unsigned int are the same type, or at least have the same size and representation. On 64-bit systems, size_t values have 64 bits and therefore are larger than 32-bit ints. On 64-bit Linux and OS X, this type is defined as unsigned long, and on 64-bit Windows as unsigned long long, hence using %lu or %llu on those systems is fine too.
Passing a size_t for an incompatible conversion specification has undefined behavior:
the program could crash (and it probably will if you use %s)
the program could display the expected value (as it might for %d)
the program could produce weird output such as nan for %f or something else...
The reason for this is that integers and floating point values are passed to printf in different ways and have different representations. Passing an integer where printf expects a double will make printf retrieve a floating point value from registers or memory locations with indeterminate contents. In your case, the floating point register just happens to contain a nan value, but it might contain a different value elsewhere in the program or at a later time; nothing can be expected, the behavior is undefined.
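To make this concrete, here is a toy sketch that reads a double argument back with the wrong va_arg type, which is essentially what printf does when the format string lies about the type (it deliberately invokes the same undefined behavior, so the printed value is garbage and varies by platform):

#include <stdarg.h>
#include <stdio.h>

/* Reads its single variadic argument as an int, even though the caller
   passed a double -- the same mismatch as printf("%d\n", 3.14). */
static void show_as_int(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    int v = va_arg(ap, int);   /* wrong type on purpose: undefined behavior */
    va_end(ap);
    printf("%s: %d\n", label, v);
}

int main(void)
{
    show_as_int("3.14 read as an int", 3.14);
    return 0;
}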
Some legacy systems do not support %zu, notably C runtimes by Microsoft. On these systems, you can use %u or %lu and use a cast to convert the size_t to an unsigned or an unsigned long:
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%u\n", (unsigned)sizeof(a + d)); // should print 8
printf("%lu\n", (unsigned long)sizeof(s)); // should print 4
printf("%llu\n", (unsigned long long)sizeof(d)); // prints 4 or 8 depending on the system
I want to know for my own knowledge what happens behind and why it prints nan when I use it with %f or a long number when used with %lf
Several reasons.
First of all, printf doesn't know the types of the additional arguments you actually pass to it. It's relying on the format string to tell it the number and types of additional arguments to expect. If you pass a size_t as an additional argument, but tell printf to expect a float, then printf will interpret the bit pattern of the additional argument as a float, not a size_t. Integer and floating point types have radically different representations, so you'll get values you don't expect (including NaN).
Secondly, different types have different sizes. If you pass a 16-bit short as an argument, but tell printf to expect a 64-bit double with %f, then printf is going to look at the extra bytes immediately following that argument. It's not guaranteed that size_t and double have the same sizes, so printf may either be ignoring part of the actual value, or using bytes from memory that isn't part of the value.
Finally, it depends on how arguments are being passed. Some architectures use registers to pass arguments (at least for the first few arguments) rather than the stack, and different registers are used for floats vs. integers, so if you pass an integer and tell it to expect a double with %f, printf may look in the wrong place altogether and print something completely random.
printf is not smart. It relies on you to use the correct conversion specifier for the type of the argument you want to pass.
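For completeness, here is the same idea with the matching C99 specifier, so no casts are needed (a sketch reusing the question's variables):

#include <stdio.h>

int main(void)
{
    int a = 0;
    long long d = 1000000000000;
    int s = (int)(a + d);            /* explicit truncation, as in the question */

    printf("%zu\n", sizeof(a + d));  /* typically prints 8 */
    printf("%zu\n", sizeof(s));      /* typically prints 4 */
    printf("%zu\n", sizeof(d));      /* typically prints 8 */
    return 0;
}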
I wrote this very simple and short code, but it doesn't work: when I compile and execute it, the value returned from calculateCharges() prints as 0 when I'm expecting 2.
Can anybody explain why, please?
#include <stdio.h>
#include <stdlib.h>

float calculateCharges(float timeIn);

int main()
{
    printf("%d", calculateCharges(3.0));
    return 0;
}

float calculateCharges(float timeIn)
{
    float Total;
    if (timeIn <= 3.0)
        Total = 2.0;
    return Total;
}
There are at least three problems here, two of which should be easily noticeable if you enable compiler warnings (-Wall command-line option), and which lead to undefined behavior.
One is the wrong format specifier in your printf statement. You're printing a floating point value with %d, the format specifier for a signed integer. The correct specifier is %f.
The other is the use of an uninitialized value. The variable Total is left uninitialized if the if branch in your function isn't taken, and using its value is undefined behavior.
It's most likely the wrong format specifier that caused the wrong output, but you should fix the second problem as well.
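Here is a minimal corrected sketch addressing both points (a matching %f specifier, and a default value for Total):

#include <stdio.h>

float calculateCharges(float timeIn);

int main()
{
    printf("%f\n", calculateCharges(3.0f));   /* %f matches the (promoted) float */
    return 0;
}

float calculateCharges(float timeIn)
{
    float Total = 0.0f;   /* initialized: every path now returns a defined value */
    if (timeIn <= 3.0f)
        Total = 2.0f;
    return Total;
}

This prints 2.000000.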
The third problem has to do with floating point precision. Casting values between float and double may not be a safe round-trip operation.
Your 3.0 double constant is converted to float when passed to calculateCharges(). That value is then converted back up to double in the timeIn <= 3.0 comparison (to match the type of 3.0).
It's probably okay with a value like 3.0 but it's not safe in the general case. See, for example, this piece of code which exhibits the problem.
#include <stdio.h>

#define EPI 2.71828182846314159265359

void checkDouble(double x) {
    printf("double %s\n", (x == EPI) ? "okay" : "bad");
}

void checkFloat(float x) {
    printf("float %s\n", (x == EPI) ? "okay" : "bad");
}

int main(void) {
    checkFloat(EPI);
    checkDouble(EPI);
    return 0;
}
You can see from the output that keeping the value as a double is always okay, but not so when you convert to float and lose precision:
float bad
double okay
Of course, the problem goes away if you ensure you always use and check against the correct constant types, such as by using 3.0F.
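For instance, a check against the constant after the same float rounding succeeds again (a hypothetical checkFloatFixed, reusing the EPI definition above):

/* Comparing against (float)EPI means both sides went through the same
   float rounding, so the comparison is exact again. */
void checkFloatFixed(float x) {
    printf("float %s\n", (x == (float)EPI) ? "okay" : "bad");  /* prints "okay" */
}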
%d will print integers.
Total is a float, so it will not work.
You must use the proper specifier for a float.
(You should research that yourself, rather than have us give you the answer)
I am trying to learn C and am very confused already.
#include <stdio.h>

int main(void)
{
    int a = 50000;
    float b = 'a';
    printf("b = %f\n", 'a');
    printf("a = %f\n", a);
    return 0;
}
The above code produces a different output each time with gcc. Why?
You pass an int value ('a') for a %f conversion, which expects a float or a double. This is undefined behavior, and it can result in different output on every execution of the same program. The second printf has the same problem: %f expects a float or double, but you pass an int value.
Here is a corrected version:
#include <stdio.h>

int main(void) {
    int a = 50000;
    float b = 'a';
    printf("a = %d\n", a);
    printf("b = %f\n", b);
    printf("'a' = %d\n", 'a');
    return 0;
}
Output:
a = 50000
b = 97.000000
'a' = 97
Compiling with more warnings enabled (command-line arguments -Wall -W -Wextra) lets the compiler perform more consistency checks and complain about potential programming errors. It would have detected the errors in the posted code.
Indeed clang still complains about the above correction:
clang -O2 -std=c11 -Weverything fmt.c -o fmt
fmt.c:8:24: warning: implicit conversion increases floating-point precision: 'float' to 'double' [-Wdouble-promotion]
printf("b = %f\n", b);
~~~~~~ ^
1 warning generated.
b is promoted to double when passed to printf(). The double type has more precision than the float type, which might produce misleading output if more decimals are requested than the original type can represent.
It is advisable to always use double for floating point calculations and to reserve the float type for very specific cases where it is better suited, such as computer graphics APIs or some binary interchange formats.
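As an illustration of that pitfall (the exact digits vary by platform, so treat these as examples), asking for more digits than a float holds exposes the conversion noise:

#include <stdio.h>

int main(void)
{
    float f = 3.14f;
    printf("%.17f\n", f);     /* promoted float, e.g. 3.14000010490417480 */
    printf("%.17f\n", 3.14);  /* true double,    e.g. 3.14000000000000012 */
    return 0;
}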
From an implementation standpoint, passing floating point numbers (which is what %f expects) through variable argument lists (which is what ... means in the printf prototype) and passing integers (which is what 'a' is, specifically of type int) may use different registers and memory layouts.
This is usually defined by the ABI calling conventions. Specifically, on x86-64, printf reads the %f value from %xmm0 (which holds an uninitialized value here), while gcc passed the int argument in an integer register (%esi here, since %rdi holds the format string pointer).
See more in the System V Application Binary Interface, AMD64 Architecture Processor Supplement, p. 56.
You should note that C is a low-level language which puts a lot of confusing cases (including integer overflow, uninitialized variables, and buffer overruns) on the shoulders of the implementation. That allows maximum performance (by avoiding many runtime checks), but leads to errors such as this one.
The %f specifier needs to be matched by a floating-point argument (float or double); you're giving it ints instead. Not matching the type is undefined behaviour, meaning it could do anything, including printing different results every time.
Hope this helped, good luck :)
Edit: Thanks chqrlie for the clarifications!
Unpredictable behavior.
When you try to print a value using a mismatched format specifier, you get unpredictable output. The behavior of using %f as the format specifier for a char or an int is undefined.
Use the correct format specifier in your program, according to the data type:
printf("%d\n", 'a'); // print ASCII value
printf("%d\n", a);
First of all, gcc should give a warning for the above code, because of the mismatch in the format specifier.
Mismatched formatting (with printf() as well as scanf()) gives unpredictable behavior in C. This is despite the fact that, in ordinary prototyped calls, gcc is expected to take care of implicit type conversions like int to float; with printf's variable arguments it cannot.
Here are two nice references.
https://stackoverflow.com/a/1057173/4954434
https://stackoverflow.com/a/12830110/4954434
The following will work as expected.
#include <stdio.h>

int main(void)
{
    int a = 50000;
    float b = 'a';
    printf("b = %f\n", (float)'a');
    printf("a = %f\n", (float)a);
    return 0;
}
The main difficulty you are having is with the data types you are using.
When you declare a variable, you tell the compiler how much memory to reserve for it: char typically has a size of 1 byte, int 4 bytes, and float 4 bytes (32 bits). It's important to match the format specifier to the data type, otherwise you get unpredictable results.
char a = 'a';
printf("%c\n", a);

int b = 50000;
printf("%d\n", b);

float c = 5.7;
printf("%f\n", c);
For more information: C-Variables
Here is part of my code.
float a = 12.5;
printf("%d\n", a);
printf("%d\n", (int)a);
printf("%d\n", *(int *)&a);
When I compile on Windows, I get:
0
12
1094713344
Then, when I compile on Linux, I get:
-1437851864
12
1094713344
The value -1437851864 changes every time I execute it.
My question is: how does the printf function work on Linux?
It works very well, but why are you passing the wrong sort of data to it? The %d specifier expects an int, but you're passing something else. Bad idea.
If float and int have different sizes across the varargs barrier, this is undefined behavior. And since float is typically promoted to double in varargs calls, if your int is smaller than double, this will break.
In short, this is really bad and broken code. Don't do this.
To print a floating point number in C, you should do:
float a = 12.5;
printf("%f\n", a);
As has been mentioned, passing arguments with types not matching the format string invokes undefined behaviour, so the language standard doesn't place any restrictions on what
float a = 12.5;
printf("%d\n", a);
actually does.
To find out what it does, you'd need to analyse your implementation, or at least the assembly the compiler produced for that code.
A common way of translating that code is to pass the promoted (to double) float argument in a floating point register and to tell printf how many arguments were passed in floating point registers. But since the format tells printf to look for an int, it doesn't look in a floating point register: it looks in an integer register, so the printed value is whatever happened to be in that register when printf was called.