printf function in c [duplicate]

This question already has answers here:
float to int unexpected behaviour
(6 answers)
Closed 6 years ago.
Here is part of my code:
float a = 12.5;
printf("%d\n", a);
printf("%d\n", (int)a);
printf("%d\n", *(int *)&a);
When I compile on Windows, I get:
0
12
1094713344
Then, when I compile on Linux, I get:
-1437851864
12
1094713344
The value -1437851864 changes every time I execute it.
My question is: how does the printf function work on Linux?

It works very well, but why are you passing the wrong sort of data to it? The %d specifier expects an int, but you're passing something else. Bad idea.
Passing a float where %d expects an int across the varargs barrier is undefined behavior. And since float is typically promoted to double in varargs calls, if your int is smaller than your double this will break.
In short, this is really bad and broken code. Don't do this.

To print a floating point number in C, you should do:
float a = 12.5;
printf("%f\n", a);

As has been mentioned, passing arguments with types not matching the format string invokes undefined behaviour, so the language standard doesn't place any restrictions on what
float a = 12.5;
printf("%d\n", a);
actually does.
To find out what it does, you'd need to analyse your implementation, or at least the assembly the compiler produced for that code.
A common way to translate that code is to pass the promoted (to double) float argument in a floating-point register and to tell printf how many arguments are passed in floating-point registers. But since the format tells printf to look for an int, it doesn't look in a floating-point register for it, but in another register. So the printed value would be whatever happened to be in that register when printf was called.
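As an aside, if you actually wanted the bit pattern of the float (what the *(int *)&a line attempts), copying the bytes with memcpy is the well-defined way to do it; the pointer cast also violates strict aliasing. A minimal sketch, assuming float and unsigned int are both 32 bits on an IEEE-754 machine:
#include <stdio.h>
#include <string.h>
int main(void)
{
float a = 12.5f;
unsigned int bits;
/* copy the object representation instead of type-punning through a cast */
memcpy(&bits, &a, sizeof bits);
printf("%#x\n", bits); /* 0x41480000, the single-precision encoding of 12.5 */
return 0;
}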


C: What happens technically when int type is stored in long long type? [duplicate]

This question already has answers here:
Undefined, unspecified and implementation-defined behavior
(9 answers)
What happens when I use the wrong format specifier?
(2 answers)
Wrong format specifiers in scanf (or) printf
(2 answers)
Closed 1 year ago.
#include <stdio.h>
int main() {
long long a, b;
scanf("%d %d", &a, &b);
printf("%lld", a + b);
return 0;
}
The code above reads two numbers and prints their sum.
I know one should use the format specifier %lld rather than %d, which I believe is the cause of the compile error.
However, some compilers, such as https://www.programiz.com/c-programming/online-compiler/, execute the code without any syntax error but print a strange value like the one below, which I don't understand at all.
Input: 123 -123
Output: 235046380240896 (This value consistently changes)
What is happening on the foundational level when int type is stored in long long type?
Formally it is undefined behavior, since the format specifiers don't match the type. So anything can happen in theory. Compilers aren't required to give diagnostics for mismatching format strings vs variables provided.
In practice, many (32/64-bit) compilers likely read 4 bytes and place them in the 4 least significant positions (little endian) of the long long, whereas the 4 most significant bytes keep their indeterminate values - that is, garbage, since the variable was never initialized.
So in practice if you initialize the long long to zero you might actually get the expected output, but there are no guarantees for it.
This is undefined behavior, so anything could happen.
What's most likely happening in practice is that scanf() is storing the numbers you enter into the 32-bit halves of each 64-bit long long variable. Since you never initialized the variables, the other halves contain unpredictable values. This is why you're getting a different result each time.
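For comparison, here is a corrected version with matching specifiers (a minimal sketch of the fix described above):
#include <stdio.h>
int main() {
long long a, b;
/* %lld matches long long, so scanf writes all bytes of each variable */
if (scanf("%lld %lld", &a, &b) == 2)
printf("%lld", a + b);
return 0;
}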

Why does printf print an integer as a double? [duplicate]

This question already has answers here:
Why does printf("%f",0); give undefined behavior?
(10 answers)
Closed 2 years ago.
printf("%f", 1.0); //prints 1.0
but
printf("%f", 1); // prints 0.0
How did the conversion happen?
printf("%f", 1); causes undefined behavior, because double is expected, but you passed an int. There is no explanation of why it prints 0.0, because the behavior is undefined.
As Eric Postpischil explains in a comment, the details differ by platform. On x86-64 System V, for example:
The first double argument (or float argument, which will be promoted to double if used with the ... part of a function) is put in %xmm0. The first “normal integer class” argument would go into %rdi. For printf, though, the pointer to the format string is the first argument of that class, so it goes into %rdi. That means the first int argument passed goes into the second register for that class, which is %rsi. So, with printf("%f", 1);, printf looks for the floating-point value in %xmm0, but the caller puts the int 1 in %rsi.
Not every compiler behaves like this; some actually print 1.0. But when you instruct printf to print a double value, you must pass it a double value, not an integer. You can always use a cast:
printf("%f", (double)1);
The question is not really about the printf function itself; the question is whether the compiler is smart enough. If your compiler is not smart enough, it treats printf as just a normal function call and knows nothing about the meaning of the arguments to this function. So it just puts a string and an integer number on the stack (or in registers) and calls the function. printf takes the first argument and starts to parse it as a format string. When it sees the format specifier %f, it attempts to interpret the corresponding part of memory as a floating-point number. It has no way to know that the compiler put an int value there. So printf does its best to interpret that memory as a floating-point number. The result is platform dependent, i.e. it depends on endianness and float/int sizes, and it also involves randomness, because you'll most probably hit some garbage.
The transformation done by printf in this case can also be seen like this:
int i = 1; // Integer variable
int* pi = &i; // Pointer to i
float* pf = (float*)pi; // Reinterpret the pointer as floating point number address
float f = *pf; // Get the floating point from this address
printf("%f\n", f);
The thing here is that printf() expects to receive a double (the promoted type for %f), based on the format you passed in; to print an int with %f you have to cast it:
printf("%f", (float)1);
or
printf("%f",(double)1);
because C passes the variables to printf() according to their types and memory representation, and if you pass the wrong type the result is undefined behavior.

C: behaviour of printf("%d", <double>) [duplicate]

This question already has answers here:
Given the state of the stack and registers, can we predict the outcome of printf's undefined behavior
(2 answers)
Closed 6 years ago.
Why does this code print nonsense values? If it makes sense, then what is it?
printf("%d\n", 5.0 / 4);
By the way, I know about format specifiers and that I should be using %f instead of %d, but I want to know what C actually does.
Strangely, every time I run the compiled program, it prints a different thing. Doesn't it have to be deterministic?
As far as I could observe, this code prints a similar thing:
float c;
printf("%d\n", &c);
are they any related?
And when I tried:
float c;
printf("%d\n%d\n", c, &c);
There is a constant difference of 252 between those two values. 256 - sizeof(float), maybe?
And declaring c as a double makes the difference 0.
Thanks in advance!
UPDATE: running the same code on different machines yielded different results (the 252 becomes 56; the former is a 64-bit Ubuntu machine and the latter is a 64-bit OS X machine).
printf is unusual in that it has no single function signature. In other words, there's no mechanism to automatically take the actual arguments you pass and convert them into the type that's expected. Whatever you try to pass, gets passed. If the type you pass does not match the type expected by the corresponding format specifier, you get nonsense.
Furthermore, if the mismatched types you pass have different sizes from what the format specifiers expect, printf can end up popping random garbage off the stack that has nothing to do with what you thought you passed.
You asked, "doesn't it have to be deterministic?", and the answer there is a very emphatic "NO"! By passing the wrong type of value to printf, you have invoked undefined behavior, which means that anything can happen, and it very definitely does not have to be the same thing twice.
(Even though they're not required to, some compilers peek at the format string and try to check the values you passed against the values that are expected. For example, under gcc, your first code fragment gets the warning "format ‘%d’ expects type ‘int’, but argument 2 has type ‘double’". But I don't know of a compiler that will try to do conversions to "fix" the arguments.)
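For reference, matching the specifiers to the arguments gives the deterministic output the asker expected:
#include <stdio.h>
int main(void)
{
printf("%f\n", 5.0 / 4); /* 1.250000: %f matches the double argument */
printf("%d\n", (int)(5.0 / 4)); /* 1: convert explicitly if you want an int */
return 0;
}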

Printing an integer as a Floating point number [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
What is printf’s behaviour when printing an int as float?
int main()
{
int x=4;
int y=987634;
printf("%f %f",x,y);
}
On compiling this code I get the output 0.000000 0.000000. Shouldn't there be a type promotion of x and y to floating-point numbers? Shouldn't the output be 4.000000 and 987634.000000?
Can anyone help me with this? Thanks in advance.
Conversions happen to arguments of functions whose prototype declares the specific parameters. The prototype for printf() does not declare any specific parameters after the first one
int printf(const char *format, ...);
so no arguments after the first one get automatically converted, except as defined by the "default argument promotions" (basically, any integer type with a rank lower than int is converted to int, and any floating-point type with a rank lower than double to double - thank you, Pascal Cuoq). You need to convert them explicitly yourself with a cast operation
printf("%f %f\n", (double)x, (double)y);
Ohhh ... and you really, really, really should include the header that has the prototype in question (under penalty of Undefined Behaviour)
#include <stdio.h>
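To see the default argument promotions in action, here is a small sketch: the char is promoted to int and the float to double before printf receives them, so %d and %f match the promoted types:
#include <stdio.h>
int main(void)
{
char c = 'A';
float f = 1.5f;
/* c arrives as an int, f as a double, due to the default argument promotions */
printf("%d %f\n", c, f); /* prints "65 1.500000" on ASCII systems */
return 0;
}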
The compiler has no idea that your printf format string is going to interpret the arguments as floats. It passes them straight through as ints.
Because printf is a varargs function, it's really up to you to pass parameters that make sense.
Try printf("%i %i",x,y); to print integers as 4 987634. For printf formatting details see http://www.cplusplus.com/reference/cstdio/printf/.
ints and floats are stored differently in memory, but your compiler does not know that you want floats. You need to convert them explicitly.
printf("%f %f",(float)x,(float)y);
Variadic functions (printf() is one) aren't type checked because variadic signatures don't contain any type information. Thus, there's no implicit type casting. You have to do it manually:
printf("%f %f", (double)x, (double)y);

Inconsistent results while printing float as integer [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
print the float value in integer in C language
I am trying out some rather simple code like this:
float a = 1.5;
printf("%d",a);
It prints out 0. However, for other values, like 1.4, 1.21, etc., it prints out a garbage value. And it's not only 1.5: for 1.25, 1.75, 1.3125 (in other words, decimal numbers which can be converted exactly into binary form), it prints 0. What is the reason behind this? I found a similar post here, and the first answer looks like an awesome answer, but I couldn't understand it. Can anybody explain why this is happening? What has endianness got to do with it?
You're not casting the float; printf is just interpreting it as an integer, which is why you're getting seemingly garbage values.
Edit:
Check this example C code, which shows how a double is stored in memory:
#include <stdio.h>
int main()
{
double a = 1.5;
unsigned char *p = (unsigned char *)&a; /* examining bytes through unsigned char * is well-defined */
int i;
for (i = 0; i < sizeof(double); i++) {
printf("%.2x", *(p + i));
}
printf("\n");
return 0;
}
If you run that with 1.5 it prints
000000000000f83f
If you try it with 1.41 it prints
b81e85eb51b8f63f
So when printf interprets 1.5 as an int, it prints zero because the 4 least-significant bytes are all zero, and some other value when trying 1.41.
That being said, it is undefined behaviour and you should avoid it; besides, you won't always get the same result, since it depends on the machine and on how the arguments are passed.
Note: the bytes are reversed because this was compiled on a little-endian machine, which means the least significant byte comes first.
You're not taking the argument promotions into account. Because printf is a variadic function, the arguments are promoted:
C11 (n1570), § 6.5.2.2 Function calls
arguments that have type float are promoted to double.
So printf tries to interpret your double variable as an integer type. It leads to an undefined behavior. Just add a cast:
double a = 1.5;
printf("%d", (int)a);
A mismatch of arguments in printf is undefined behaviour.
Either cast a or use %f.
Use it this way:
printf("%d",(int)a);
or
printf("%f",a);
d stands for decimal, so whether a is a float/double/integer/char, when you use "%d" C will print that number as a decimal integer. If a is an integer type (int, long), there is no problem. If a is a char, then since char is an integer type, C will print the value of the char as its ASCII code.
But the problem appears when a has a floating-point type (float/double), because a floating-point value is stored in a special way, not as a plain decimal integer. So you get a strange result.
Why this strange result?
A short explanation: in the computer, a real number is represented by two parts: an exponent and a mantissa. If you say "this is a real number", C knows which part is the exponent and which is the mantissa. But if you say "hey, this is an integer", no distinction is made between the exponent part and the mantissa part -> strange result.
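To make the exponent/mantissa split concrete, frexp from <math.h> decomposes a double into exactly those two parts:
#include <stdio.h>
#include <math.h>
int main(void)
{
int e;
double mant = frexp(1.5, &e); /* 1.5 = 0.75 * 2^1 */
printf("%f * 2^%d\n", mant, e); /* prints "0.750000 * 2^1" */
return 0;
}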
If you want to understand exactly which integer it will print (and of course you can work that out), you can visit this link: represent FLOAT number in memory in C
If you don't want this strange result, you can cast the float to int, so it will print the integer part of the float number.
float a = 1.5;
printf("%d",(int)a);
Hope this helps :)
