C program: help about variable definition sequence

#include <stdio.h>

void main()
{
    float x = 8.2;
    int r = 6;
    printf("%f", r/4);
}
It is clearly odd that I am not explicitly typecasting r (of int type) to float in the printf call. However, if I change the sequence of declaring x and r and declare r first and then x, I get a different result (in this case a garbage value). Again, I am not using x
in the program anywhere. These are the things I meant to be wrong... I want to keep them the way they are. But when I execute the first piece of code I get 157286.375011 as the result (a garbage value).
#include <stdio.h>

void main()
{
    int r = 6;
    float x = 8.2;
    printf("%f", r/4);
}
and if I execute the code above I get 0.000000 as the result. I know the results can go wrong because I am using %f in the printf when it should have been %d... the results may be wrong... but my question is why the results change when I change the sequence of the variable definitions. Shouldn't it be the same, whether right or wrong?
Why is this happening?

printf does not have any type checking. It relies on you to do that checking yourself, verifying that all of the types match the formatting specifiers.
If you don't do that, you enter into the realm of undefined behavior, where anything can happen. The printf function is trying to interpret the specified value in terms of the format specifier you used. And if they don't match, boom.
It's nonsense to specify %f for an int, but you already knew that...

The f conversion specifier takes a double argument, but you are passing an int argument. Passing an int argument to the f conversion specifier is undefined behavior.
In this expression:
r / 4
both operands are of type int and the result is also of type int.
Here is what you want:
printf ("%f", r / 4.0);

When printf grabs the optional arguments (i.e. the arguments after the char * that tells it what to print), it has to get them off the stack. A double is usually 64 bits (8 bytes) whereas an int is 32 bits (4 bytes).
Moreover, floating point numbers have an odd internal structure as compared to integers.
Since you're passing an int in place of a double, printf is trying to get 8 bytes off the stack instead of four, and it's trying to interpret the bytes of an int as the bytes of a double.
So not only are you getting 4 bytes of memory containing no one knows what, but you're also interpreting that memory -- 4 bytes of int and 4 bytes of random stuff from nowhere -- as if it were a double.
So yeah, weird things are going to happen. When you re-compile (or sometimes even just re-run) a program that wantonly picks things out of memory it never allocated or stored to, you're going to get unpredictable and wildly-changing values.
Don't do it.
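If you actually want a floating-point result, the division itself has to be done in floating point and the specifier has to match the argument. A minimal sketch of two well-defined alternatives (dropping the unused x):

#include <stdio.h>

int main(void)
{
    int r = 6;

    printf("%d\n", r / 4);          /* integer division, prints 1        */
    printf("%f\n", (double)r / 4);  /* floating-point division, 1.500000 */
    return 0;
}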

Related

Why is this code printing 0?

#include <stdio.h>
#include <conio.h> /* for clrscr() and getch() (Turbo C) */

void main()
{
    clrscr();
    float f = 3.3;
    /* In printf() I intentionally put the %d format specifier to see
       what type of output I may get */
    printf("value of variable a is: %d", f);
    getch();
}
In effect, %d tells printf to look in a certain place for an integer argument. But you passed a float argument, which is put in a different place. The C standard does not specify what happens when you do this. In this case, it may be there was a zero in the place printf looked for an integer argument, so it printed “0”. In other circumstances, something different may happen.
Using an invalid format specifier to printf invokes undefined behavior. This is specified in section 7.21.6.1p9 of the C standard:
If a conversion specification is invalid, the behavior is
undefined.282) If any argument is not the correct type for the
corresponding conversion specification, the behavior is undefined.
What this means is that you can't reliably predict what the output of the program will be. For example, the same code on my system prints -1554224520 as the value.
As to what's most likely happening, the %d format specifier is looking for an int as a parameter. Assuming that an int is passed on the stack and that an int is 4 bytes long, the printf function looks at the next 4 bytes on the stack for the value given. Many implementations don't pass floating point values on the stack but in registers instead, so it instead sees whatever garbage values happen to be there. Even if a float is passed on the stack, a float and an int have very different representations, so printing the bytes of a float as an int will most likely not give you the same value.
Let's look at a different example for a moment. Suppose I write
#include <string.h>
char buf[10];
float f = 3.3;
memset(buf, 'x', f);
The third argument to memset is supposed to be an integer (actually a value of type size_t) telling memset how many characters of buf to set to 'x'. But I passed a float value instead. What happens? Well, the compiler knows that the third argument is supposed to be an integer, so it automatically performs the appropriate conversion, and the code ends up setting the first three (three point zero) characters of buf to 'x'.
(Significantly, the way the compiler knew that the third argument of memset was supposed to be an integer was based on the prototype function declaration for memset which is part of the header <string.h>.)
Now, when you called
printf("value of variable f is: %d", f);
you might think the same thing happens. You passed a float, but %d expects an int, so an automatic conversion will happen, right?
Wrong. Let me say that again: Wrong.
The perhaps surprising fact is, printf is different. printf is special. The compiler can't necessarily know what the right types of the arguments passed to printf are supposed to be, because it depends on the details of the %-specifiers buried in the format string. So there are no automatic conversions to just the right type. It's your job to make sure that the types of the arguments you actually pass are exactly right for the format specifiers. If they don't match, the compiler does not automatically perform the corresponding conversions; what happens instead is that you get crazy, wrong results.
(What does the prototype function declaration for printf look like? It literally looks like this: extern int printf(const char *, ...);. Those three dots ... indicate a variable-length argument list, or "varargs". They tell the compiler it can't know how many more arguments there are, or what their types are supposed to be. So the compiler performs only a few default conversions -- such as promoting char and short int to int, and float to double -- and leaves it at that.)
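Those default promotions are, incidentally, why %f works for both float and double arguments: a float passed to printf has already been promoted to double by the time printf sees it. A small sketch:

#include <stdio.h>

int main(void)
{
    float x = 8.2f;
    double d = 8.2;

    /* Both calls are fine: the float argument is promoted to double,
       and %f expects a double. */
    printf("%f\n", x);
    printf("%f\n", d);
    return 0;
}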
I said "The compiler can't necessarily know what the right types of the arguments passed to printf are supposed to be", but these days, good compilers go the extra mile and try to figure it out anyway, if they can. They still won't perform automatic conversions (they're not really allowed to, by the rules of the language), but they can at least warn you. For example, I tried your code under two different compilers. Both said something along the lines of warning: format specifies type 'int' but the argument has type 'float'. If your compiler isn't giving you warnings like these, I encourage you to find out if those warnings can be enabled, or consider switching to a better compiler.
Try
printf("... %f",f);
That's how you print float numbers.
Maybe you only want to print a certain number of digits of f, e.g.:
printf("... %.3f", f);
That will print your float number with 3 digits after the decimal point.
Please read through this list:
%c - Character
%d or %i - Signed decimal integer
%e - Scientific notation (mantissa/exponent) using e character
%E - Scientific notation (mantissa/exponent) using E character
%f - Decimal floating point
%g - Uses the shorter of %e or %f
%G - Uses the shorter of %E or %f
%o - Unsigned octal
%s - String of characters
%u - Unsigned decimal integer
%x - Unsigned hexadecimal integer
%X - Unsigned hexadecimal integer (capital letters)
%p - Pointer address
%n - Nothing printed; the number of characters written so far is stored into the pointed-to int
The code is printing 0 because you are using the format specifier %d, which expects a signed decimal integer (http://devdocs.io).
Could you please try
#include <stdio.h>
#include <conio.h> /* for clrscr() and getch() */

void main() {
    clrscr();
    float f = 3.3;
    /* In printf() I intentionally put the %d format specifier to see what type of output I may get */
    printf("value of variable a is: %f", f);
    getch();
}

Copying data from float to an int, without the data being casted

In the following code I try to copy data from a float f to an int i, bit for bit, without any conversion of the data. I cast the address of f to (int*) and dereference this address when I assign it to i. The thinking is that if the program sees &f as an int pointer, it won't do a type conversion when (int*) &f is dereferenced and assigned to i. But this isn't working, and I do not understand why.
#include <stdio.h>

void main(){
    float f = 3.0;
    int i;

    /* Check to make sure int and float have the same size on my machine */
    printf("sizeof(float)=%d\n", sizeof(float)); /* prints "sizeof(float)=4" */
    printf("sizeof(int)=%d\n", sizeof(int));     /* prints "sizeof(int)=4" */

    /* Verify that &f and (int*) &f have the same value */
    printf("&f = %p\n", &f);               /* prints &f = 0x7ffc0670dff8 */
    printf("(int*) &f = %p\n", (int*) &f); /* prints (int*) &f = 0x7ffc0670dff8 */

    i = *((int*) &f);
    printf("%f\n", i); /* prints 0.000000 (I would have expected 3.000000) */
    return;
}
By assigning via typecasting you are copying the raw 4 bytes of data from one variable to another. The problem is that a 3 in a 32-bit floating point variable isn't stored like a 3 in an integer variable.
For example, a 3 in 32-bit float format on a Mac is stored as 0x40400000. Assigning that bit pattern into an integer will produce 1077936128.
See https://en.wikipedia.org/wiki/IEEE_floating_point or http://www.madirish.net/240 or https://users.cs.duke.edu/~raw/cps104/TWFNotes/floating.html
Accessing the same object through pointers of unrelated (incompatible) types violates the strict aliasing rule. You want to avoid this: in complicated code it can cause the compiler to produce binaries that don't do what you want. It's also undefined behaviour.
The correct way to change the type of a bit pattern in C between int and float is to avoid pointers completely, and use a union:
union int_float { int i; float f; };
int ival = (union int_float){ .f = 4.5 }.i;
float fval = (union int_float){ .i = 45 }.f;
Results may vary slightly. Be sure to check that the sizes of your floating type and your integer type are identical, or data will be lost/garbage data generated.
Note that it is possible to produce values in this way that are not valid elements of the destination type (this is more or less what you're seeing above when non-zero integer values get interpreted as floating-point zeroes despite not having all-zero bit patterns), which could lead to undefined or unexpected behaviour further down the line. Be sure to verify the output.
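Putting that together, here is a minimal sketch of the union approach (it assumes a 32-bit int and a 32-bit IEEE-754 float; the names int_float and ival are just illustrative):

#include <stdio.h>

union int_float { int i; float f; };

int main(void)
{
    /* Reinterpret the bits of the float 3.0f as an int. */
    int ival = (union int_float){ .f = 3.0f }.i;

    printf("%d\n", ival);             /* prints 1077936128 on an IEEE-754 system */
    printf("0x%x\n", (unsigned)ival); /* same bits in hex: 0x40400000 */
    return 0;
}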
After the conversion you obtain an integer stored in the variable i.
So if you want to print such a value you have to use:
printf("%d\n", i); /* prints 1077936128 */
Now printf will interpret the memory bits correctly and print the correct value.
This is not a cast but a bit copy, like you said.
Remember that *((int*) &f) dereferences that pointer to read an int value. It won't do what you expect on machines where int and float have different sizes.
A different way to copy, if the sizes of int and float are identical, is:
memcpy(&i, &f, sizeof f);
After copying the four bytes contained in the memory area, if we print the contents as a sequence of four decimal byte values we can see the redistribution that has just happened:
for the float:
3 160 0 3
for the integer:
0 204 204 0
This means that the four bytes are handled differently by the computer according to the type of representation: int or float.
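Here is a rough sketch of that memcpy idea, including a byte-by-byte dump (it assumes 4-byte int and float; the exact byte values depend on the value stored and on the machine's endianness):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 3.0f;
    int i;
    unsigned char bytes[sizeof f];
    size_t k;

    memcpy(&i, &f, sizeof f);     /* bit-for-bit copy, no conversion */
    printf("%d\n", i);            /* prints 1077936128 on an IEEE-754 machine */

    memcpy(bytes, &f, sizeof f);  /* dump the individual bytes of the float */
    for (k = 0; k < sizeof f; k++)
        printf("%d ", bytes[k]);  /* e.g. 0 0 64 64 on a little-endian machine */
    printf("\n");
    return 0;
}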

Inconsistent results while printing float as integer [duplicate]

Possible Duplicate:
print the float value in integer in C language
I am trying out a rather simple code like this:
float a = 1.5;
printf("%d",a);
It prints out 0. However, for other values, like 1.4, 1.21, etc., it is printing out a garbage value. Not only for 1.5: for 1.25, 1.5, 1.75, 1.3125 (in other words, decimal numbers which can be perfectly converted into binary form), it is printing 0. What is the reason behind this? I found a similar post here, and the first answer looks like an awesome answer, but I couldn't follow it. Can anybody explain why this is happening? What has endian-ness got to do with it?
You're not casting the float; printf is just interpreting it as an integer, which is why you're getting seemingly garbage values.
Edit:
Check this example C code, which shows how a double is stored in memory:
#include <stdio.h>

int main()
{
    double a = 1.5;
    unsigned char *p = (unsigned char *) &a;
    int i;

    for (i = 0; i < sizeof(double); i++) {
        printf("%.2x", *(p + i));
    }
    printf("\n");
    return 0;
}
If you run that with 1.5 it prints
000000000000f83f
If you try it with 1.42 it prints
b81e85eb51b8f63f
So when printf interprets 1.5 as an int, it prints zero because the 4 least-significant bytes are zeros, and some other value when trying it with 1.42.
That being said, it is undefined behaviour and you should avoid it; besides, you won't always get the same result, since it depends on the machine and on how the arguments are passed.
Note: the bytes are reversed because this was compiled on a little-endian machine, which means the least significant byte comes first.
You are not taking the default argument promotions into account. Because printf is a variadic function, the arguments are promoted:
C11 (n1570), § 6.5.2.2 Function calls
arguments that have type float are promoted to double.
So printf tries to interpret your (promoted) double value as an integer type, which leads to undefined behavior. Just add a cast:
double a = 1.5;
printf("%d", (int)a);
A mismatch between arguments and format specifiers in printf is undefined behaviour.
Either typecast a or use %f, like this:
printf("%d",(int)a);
or
printf("%f",a);
d stands for "decimal". So whatever the type of a (float, double, integer, char, ...), when you use "%d", C will print that value as a decimal integer. So if a has an integer type (int, long), there is no problem. If a is a char, then, because char is an integer type, C will print the ASCII value of the char.
But the problem appears when a has a floating type (float or double), because C reads a floating-point value in a special way, not as a plain decimal integer. So you get a strange result.
Why this strange result?
A short explanation: in the computer, a real number is represented by two parts: an exponent and a mantissa. If you say "this is a real number", C knows which bits are the exponent and which are the mantissa. But if you say "hey, this is an integer", there is no distinction between the exponent part and the mantissa part -> strange result.
If you want to understand exactly which integer it will print (and of course you can work that out), you can visit this link: represent FLOAT number in memory in C
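If you want to see the exponent and mantissa parts for yourself, here is a rough sketch that pulls them out of a float's bit pattern (it assumes a 32-bit IEEE-754 float and a 32-bit unsigned int):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 1.5f;
    unsigned int bits;

    memcpy(&bits, &a, sizeof bits);               /* reinterpret the float's bits */

    unsigned int sign     = bits >> 31;           /* 1 bit                    */
    unsigned int exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127    */
    unsigned int mantissa = bits & 0x7FFFFF;      /* 23 bits                  */

    printf("bits     = 0x%08x\n", bits);          /* 0x3fc00000 for 1.5f      */
    printf("sign     = %u\n", sign);
    printf("exponent = %u (unbiased %d)\n", exponent, (int)exponent - 127);
    printf("mantissa = 0x%06x\n", mantissa);
    return 0;
}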
If you don't want this strange result, you can cast the float to int, so it will print the integer part of the float number.
float a = 1.5;
printf("%d",(int)a);
Hope this helps :)

why i am not getting the expected output?

#include <stdio.h>

int main()
{
    int x;
    float y;
    char c;

    x = -4443;
    y = 24.25;
    c = 'M';

    printf("\nThe value of integer variable x is %f", (float)x);
    printf("\nThe value of float variable y is %d", y);
    printf("\nThe value of character variable c is %f\n", c);
    return 0;
}
Output:
The value of integer variable x is -4443.000000
The value of float variable y is 0
The value of character variable c is 24.250000
Why am I not getting the expected output?
But when I use explicit casting I get the expected output, which is:
The value of integer variable x is -4443.000000
The value of float variable y is 24
The value of character variable c is 77.000000
why i am not getting the expected output ?
Short answer: Because your expectations are wrong.
You're instructing printf to read an integer from where y is, which is wrong. Format specifiers don't tell the compiler to perform casts; they just say what type to expect, trusting you to provide the right type.
The behaviour can be due to the fact that, for example, the float argument is promoted to double and passed in 8 bytes, and for 24.25 the low-order 4 of those bytes are all 0 (on a little-endian machine they come first in memory). But an int is read from only 4 bytes. So when you tell printf to read an int from where y was passed, it may read those first 4 bytes, which are 0, and print 0...
EDIT: As John pointed out in the comments, this is UB, which means that anything can happen:
7.21.6.1/9
If a conversion specification is invalid, the behavior is undefined.282) If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
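To see why 0 is at least a plausible result, here is a small sketch that dumps the bit pattern of 24.25 stored as a double (it assumes a 64-bit IEEE-754 double); the low-order 32 bits happen to be all zero:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double y = 24.25;
    uint64_t bits;

    memcpy(&bits, &y, sizeof bits);  /* well-defined way to inspect the bits */

    printf("bits    = 0x%016llx\n", (unsigned long long)bits);      /* 0x4038400000000000 */
    printf("low 32  = 0x%08x\n", (unsigned)(bits & 0xFFFFFFFFu));   /* 0x00000000         */
    printf("high 32 = 0x%08x\n", (unsigned)(bits >> 32));           /* 0x40384000         */
    return 0;
}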
Many computing platforms pass different types of arguments in different ways. On some platforms, floating-point arguments are passed in special floating-point registers. On most platforms, integer arguments are passed in general processor registers. Large arguments, such as structures, are stored somewhere in memory, and a pointer is passed instead (invisibly to the C source code). Once the few registers available for arguments are used, the remaining arguments are typically passed on the stack.
When you call printf, the compiler does not match the arguments you pass to the conversion specifiers in the format string. (Except that a good compiler will check and issue a warning if the types do not match.) In order to operate, the printf routine reads the format string and, when it finds a conversion specifier, it reads data from where the corresponding argument should be. If you specify “%d” but pass a float, the printf routine may read data from a general processor register, but the float value is in a floating-point register. Therefore, the value that is printed will be whatever data happened to be in the general processor register.
Similarly, when you specify “%f” but pass an integer, the printf routine may read from a floating-point register, but the integer value is in a general processor register.
The compiler will not convert printf arguments to the target type and might not warn you about the mismatches. You must match the conversion specifiers in the format string to the argument types.
Bonus: Here are documents describing how arguments are passed to subroutines on one platform (Mac OS X).
You cannot format a char with "%f"; use "%c" or "%d" instead. I find that http://www.cplusplus.com/reference/clibrary/cstdio/printf/ is a good reference.
The format specifiers and the types of the arguments don't match, which I believe causes undefined behavior. printf doesn't do casting for you, so you have to explicitly cast the arguments.
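In other words, a version of the program that produces the "expected" output simply adds the casts so that every argument matches its specifier; a minimal sketch:

#include <stdio.h>

int main()
{
    int x = -4443;
    float y = 24.25;
    char c = 'M';

    printf("The value of integer variable x is %f\n", (float)x);   /* -4443.000000 */
    printf("The value of float variable y is %d\n", (int)y);       /* 24           */
    printf("The value of character variable c is %f\n", (float)c); /* 77.000000    */
    return 0;
}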

problem with function printf()

Here is my program:
#include <stdio.h>
int main()
{
    int a = 0x09;
    int b = 0x10;
    unsigned long long c = 0x123456;

    printf("%x %llx\n", a, b, c); // in "%llx", l is a lowercase 'L', not the digit 1
    return 0;
}
the output was:
9 12345600000010
I want to know:
how is the printf() function executed?
what will happen if the number of arguments isn't equal to the number of format specifiers?
Please use this program as an example in your explanation.
The problem is that your types don't match. This is undefined behavior.
Your second argument b does not match the type of the format. So what's happening is that printf() is reading past the 4 bytes holding b (printf is expecting an 8-byte operand, but b is only 4 bytes). Therefore you're getting junk. The 3rd argument isn't printed at all since your printf() only has 2 format codes.
Since the arguments are usually passed consecutively (and adjacent) in memory, the 4 extra bytes that printf() is reading are actually the lower 4 bytes of c.
So in the end, the second number that's being printed is equal to b + ((c & 0xffffffff) << 32).
But I want to reiterate: this behavior is undefined. It's just that most systems today behave like this.
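A sketch of the corrected call, giving each argument its own matching conversion specifier:

#include <stdio.h>

int main()
{
    int a = 0x09;
    int b = 0x10;
    unsigned long long c = 0x123456;

    /* one specifier per argument, each matching its argument's type */
    printf("%x %x %llx\n", a, b, c);  /* prints: 9 10 123456 */
    return 0;
}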
If the arguments that you pass to printf don't match the format specification then you get undefined behavior. This means that anything can happen and you cannot reason about the results that you happen to see on your specific system.
In your case, %llx requires an argument of type unsigned long long but you supplied an int. This alone causes undefined behaviour.
It is not an error to pass more arguments to printf than there are format specifiers; the excess arguments are evaluated but ignored.
printf() reads one argument at a time, advancing through its argument list according to the format string. If the number of format specifiers is larger than the number of arguments, printf() will output data from unknown memory locations. But if the number of arguments is larger than the number of format specifiers, no harm is done. E.g. gcc will warn you if the number of format specifiers and arguments don't match.

Resources