Pointer not giving expected output in C

Why doesn't the double variable show a garbage value?
I know I am playing with pointers, but I meant to. And is there anything wrong with my code? It threw a few warnings because of incompatible pointer assignments.
#include "stdio.h"
double y= 0;
double *dP = &y;
int *iP = dP;
void main()
{
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10lf %#10lf \n",y,*dP,*iP,*(iP+1));
scanf("%lf %d %d",&y,iP,iP+1);
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10d %#10d \n",y,*dP,*iP,*(iP+1));
}

Welcome to Stack Overflow. It's not very clear what you're trying to do with this code, but the first thing I'll say is that it does exactly what it says it does. It tries to format data with the wrong format string. The result is garbage, but that doesn't necessarily mean it will look like garbage.
If part of the idea is to print out the internal bit pattern of a double in hexadecimal, you can do that, but the code will be implementation-dependent. The following should work on just about any modern 32- or 64-bit desktop implementation that uses 64 bits for both the double and long long int types:
double d = 3.141592653589793238;
printf("d = %g = 0x%016llX\n", d, *(long long*)&d);
The %g specification is a quick way to print out a double in (usually) easily readable form. The %llX format prints an unsigned long long int in hexadecimal. The byte order is implementation-dependent, even if you know that both double and long long int have the same number of bits. On a Mac, PC, or other Intel/AMD architecture machine, you'll get the display in most-significant-digit-first order.
The *(long long *)&d expression (reading from right to left) will take the address of d, convert that double* pointer to a long long * pointer, then dereference that pointer to get a long long value to format.
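If the cast through long long * bothers you (it technically breaks the strict-aliasing rules), a memcpy-based sketch gives the same output under the same 64-bit assumption:
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 3.141592653589793238;
    unsigned long long bits;        /* assumed to be 64 bits, like double */
    memcpy(&bits, &d, sizeof bits); /* copies the raw bit pattern */
    printf("d = %g = 0x%016llX\n", d, bits);
    return 0;
}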
Almost every implementation this century uses IEEE 754 for hardware floating point; the 64-bit IEEE 754 format (binary64) is what double is on such systems.
You can find out more about printf formatting at:
http://www.cplusplus.com/reference/cstdio/printf/

Related

Problem with using printf to output a float with %d in C

I am a newbie to the C language. While learning about floating-point numbers today, I ran into the following problem.
float TEST= 3.0f;
printf("%x\n",TEST);
printf("%d\n",TEST);
first output:
9c9e82a0
-1667333472
second output:
61ea32a0
1642738336
As shown above, each execution outputs different results. I have read a lot about the IEEE 754 format and still don't understand the reason. Can anyone explain, or provide keywords for me to study? Thank you.
Edit:
Thank you for your replies. I know how to print the IEEE 754 bit pattern. However, as Nate Eldredge and chux-Reinstate Monica said, using %x and %d on a float in printf is undefined behavior. If there is no floating-point register on our device, how does it work? Is this described in the C99 specification?
Most of the time, when you call a function with the "wrong kind" (wrong type) of argument, an automatic conversion happens. For example, if you write
#include <stdio.h>
#include <math.h>
printf("%f\n", sqrt(144));
this works just fine. The compiler knows (from the function prototype in <math.h>) that the sqrt function expects an argument of type double. You passed it the int value 144, so the compiler automatically converted that int to double before passing it to sqrt.
But this is not true for the printf function. printf accepts arguments of many different types, and as long as each argument is right for the particular % format specifier it goes with in the format string, it's fine. So if you write
double f = 3.14;
printf("%f\n", f);
it works. But if you write
printf("%d\n", f); /* WRONG */
it doesn't work. %d expects an int, but you passed a double. In this case (because printf is special), there's no good way for the compiler to insert an automatic conversion. So, instead, it just fails to work.
And when it "fails", it really fails. You don't even necessarily get anything "reasonable", like an integer representing the bit pattern of the IEEE-754 floating-point number you thought you passed. If you want to inspect the bit pattern of a float or double, you'll have to do that another way.
If what you really wanted to do was to see the bits and bytes making up a float, here's a completely different way:
float test = 3.14;
unsigned char *p = (unsigned char *)&test;
int i;
printf("bytes in %f:", test);
for(i = 0; i < sizeof(test); i++) printf(" %02x", p[i]);
printf("\n");
There are some issues here with byte ordering ("endianness"), but this should get you started.
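To make the byte-ordering point concrete, here is a sketch (assuming a 32-bit IEEE 754 float) that prints the same bits both byte by byte and as a single 32-bit word; on a little-endian machine the byte dump comes out in the reverse order of the word's hex digits:
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
    float test = 3.14f;
    unsigned char bytes[sizeof test];
    uint32_t word;

    memcpy(bytes, &test, sizeof test);  /* the bytes in memory order */
    memcpy(&word, &test, sizeof word);  /* the same bits viewed as one 32-bit integer */

    printf("byte by byte:");
    for (size_t i = 0; i < sizeof test; i++)
        printf(" %02x", bytes[i]);
    printf("\nas one word:  %08" PRIx32 "\n", word);  /* %x always prints the most significant digit first */
    return 0;
}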
To print the hex representation of the float (i.e., how it is represented in memory):
float TEST = 3.0f;
int y = 0;
memcpy(&y, &TEST, sizeof(TEST));
printf("%x\n", y);
printf("%d\n", y);
or
union
{
    float TEST;
    int y;
} uf = {.y = 0};
uf.TEST = 3.0f;
printf("\n%x\n", (unsigned)uf.y);
printf("%d\n", uf.y);
Both examples assume sizeof(float) <= sizeof(int) (if the sizes are not equal, the integer needs to be zeroed first).
And the result (same for both):
40400000
1077936128
As you can see, it is completely different from yours.
https://godbolt.org/z/Kr61x6Kv3

Sizeof with different format specifiers [duplicate]

This question already has answers here:
Using %f to print an integer variable
I want to know why sizeof doesn't work with different format specifiers.
I know that sizeof is usually used with the %zu format specifier, but I want to know, for my own knowledge, what happens behind the scenes and why it prints nan when I use it with %f, or a large number when used with %lf.
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%d\n", sizeof(a + d)); // prints normal size of expression
printf("%lf\n", sizeof(s)); // prints big number
printf("%f", sizeof(d)); // prints nan
sizeof evaluates to a value of type size_t. The proper specifier for size_t in C99 is %zu. You can use %u on systems where size_t and unsigned int are the same type, or at least have the same size and representation. On 64-bit systems, size_t values have 64 bits and therefore are larger than 32-bit ints. On 64-bit Linux and OS X, this type is defined as unsigned long, and on 64-bit Windows as unsigned long long, hence using %lu or %llu on those systems is fine too.
Passing a size_t for an incompatible conversion specification has undefined behavior:
the program could crash (and it probably will if you use %s)
the program could display the expected value (as it might for %d)
the program could produce weird output such as nan for %f or something else...
The reason for this is integers and floating point values are passed in different ways to printf and they have a different representation. Passing an integer where printf expects a double will let printf retrieve the floating point value from registers or memory locations that have random contents. In your case, the floating point register just happens to contain a nan value, but it might contain a different value elsewhere in the program or at a later time, nothing can be expected, the behavior is undefined.
Some legacy systems do not support %zu, notably C runtimes by Microsoft. On these systems, you can use %u or %lu and use a cast to convert the size_t to an unsigned or an unsigned long:
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%u\n", (unsigned)sizeof(a + d)); // should print 8
printf("%lu\n", (unsigned long)sizeof(s)); // should print 4
printf("%llu\n", (unsigned long long)sizeof(d)); // prints 4 or 8 depending on the system
I want to know for my own knowledge what happens behind and why it prints nan when I use it with %f or a long number when used with %lf
Several reasons.
First of all, printf doesn't know the types of the additional arguments you actually pass to it. It's relying on the format string to tell it the number and types of additional arguments to expect. If you pass a size_t as an additional argument, but tell printf to expect a float, then printf will interpret the bit pattern of the additional argument as a float, not a size_t. Integer and floating point types have radically different representations, so you'll get values you don't expect (including NaN).
Secondly, different types have different sizes. If you pass a 16-bit short as an argument, but tell printf to expect a 64-bit double with %f, then printf is going to look at the extra bytes immediately following that argument. It's not guaranteed that size_t and double have the same sizes, so printf may either be ignoring part of the actual value, or using bytes from memory that isn't part of the value.
Finally, it depends on how arguments are being passed. Some architectures use registers to pass arguments (at least for the first few arguments) rather than the stack, and different registers are used for floats vs. integers, so if you pass an integer and tell it to expect a double with %f, printf may look in the wrong place altogether and print something completely random.
printf is not smart. It relies on you to use the correct conversion specifier for the type of the argument you want to pass.
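For completeness, a minimal sketch of the C99-style calls (assuming a printf that supports %zu); if floating-point output is really what you want, convert the value explicitly rather than lying to printf:
#include <stdio.h>

int main(void)
{
    long long d = 1000000000000;
    printf("%zu\n", sizeof d);          /* %zu matches size_t (C99) */
    printf("%f\n", (double)sizeof d);   /* explicit conversion if you want %f */
    return 0;
}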

About conversion specifier in C

I have a question about conversion specifiers in C.
On the line with the scanf call, if I use %lf or %Lf instead of %f, no error occurs. But why does an error happen if I use %f?
#include <stdio.h>
int main(void)
{
    long double num;
    printf("value: ");
    scanf("%f", &num); // If I use %lf or %Lf instead of %f, no error occurs.
    printf("value: %f \n", num);
}
%f is meant to be used for reading floats, not doubles or long doubles.
%lf is meant to be used for reading doubles.
%Lf is meant to be used for reading long doubles.
If your program works with %lf when the variable type is long double, it's only a coincidence. It probably works because sizeof(double) is the same as sizeof(long double) on your platform. Strictly speaking, it is undefined behavior.
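For reference, a corrected sketch of the program from the question, using %Lf in both scanf and printf to match the long double:
#include <stdio.h>

int main(void)
{
    long double num;
    printf("value: ");
    if (scanf("%Lf", &num) == 1)       /* %Lf matches a long double argument */
        printf("value: %Lf \n", num);  /* %Lf is also the right specifier for printf */
    return 0;
}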
In looking at the man page for printf(3) on a FreeBSD system (which is POSIX, by the way), I get the following:
The following length modifiers are valid for the a, A, e, E, f, F, g, or G conversions:

    Modifier    a, A, e, E, f, F, g, G conversion
    l (ell)     double (ignored, same behavior as without it)
    L           long double
I have used these conversions with a 32-bit float data type. The issue is that different floating-point formats have different sizes, and the function needs to know which one it is dealing with so it can access the right amount of data. Using %Lf on a float may cause a segmentation fault because the conversion accesses data outside the variable, so you get undefined behavior.
float: 32-bit
double: 64-bit
long double: 80-bit
Now for the long double, the actual size is defined by the platform and the implementation. 80 bits is 10 bytes, but that doesn't fit neatly within a 32-bit alignment without padding, so most implementations use either 96 or 128 bits (12 or 16 bytes, respectively) to satisfy the alignment.
Be careful here, though: just because it might take 128 bits doesn't mean that it is a __float128 (if using gcc or clang). There is at least one platform where long double does mean a __float128 (SunOS, I think), but it is implemented in software and is slow. Furthermore, some compilers (Microsoft and Intel come to mind) treat long double as double unless you specify a switch on the command line.
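If you want to see what your own platform does, a quick sketch (the exact values are implementation-defined; a typical x86-64 Linux build prints 4, 8, and 16):
#include <stdio.h>

int main(void)
{
    printf("float:       %zu bytes\n", sizeof(float));
    printf("double:      %zu bytes\n", sizeof(double));
    printf("long double: %zu bytes\n", sizeof(long double));
    return 0;
}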

Handling numbers in C

Couldn't understand how numbers are handled in C. Could anyone point me to a good tutorial?
#include<stdio.h>
main()
{
printf("%f",16.0/3.0);
}
This code gave: 5.333333
But
#include<stdio.h>
main()
{
printf("%d",16.0/3.0);
}
Gave some garbage value: 1431655765
Then
#include<stdio.h>
main()
{
int num;
num=16.0/3.0;
printf("%d",num);
}
Gives: 5
Then
#include<stdio.h>
main()
{
float num;
num=16/3;
printf("%f",num);
}
Gives: 5.000000
printf is declared as
int printf(const char *format, ...);
The first arg (format) is a string, and the rest can be anything. How the rest of the arguments will be used depends on the format specifiers in format. If you have:
printf("%d%c", x, y);
x will be treated as int, y will be treated as char.
So,
printf("%f",16.0/3.0);
is OK, since you ask for a double (%f) and pass a double (16.0/3.0).
printf("%d",16.0/3.0);
you ask for an int (%d) but pass a double (double and int have different internal representations), so part of the bit representation of 16.0/3.0 (a double) gets read as the bit representation of 1431655765 (an int).
int num;
num=16.0/3.0;
the compiler knows that you are assigning to an int and converts the value for you. Note that this is different from the previous case.
OK, the first one gives the correct value, as expected.
In the second one you are passing a double while printf treats it as an int (hence the "%d", which is for displaying int data types; it is a little complicated to explain why, and since it appears you're just starting I wouldn't worry about why "%d" does this when passed a floating-point value), so it reads it wrong and therefore gives you a weird value (not a random garbage value, though).
In the third one, 16.0/3.0 is converted to an int when it is assigned to the int variable, which results in 5: converting a floating-point value to an int strips the fractional part rather than rounding.
In the fourth, the right-hand side (16/3) is treated as an int because you don't have the .0 at the end. That integer division is evaluated first, and then 5 is assigned to the float num, which explains the output.
It is because the format strings you are choosing do not match the arguments you are passing. I suggest looking at the documentation on printf. If you have "%d" it expects an integer value; how that value is stored is machine-dependent. If you have "%f" it expects a floating-point number, also stored in a machine-dependent way. If you do:
printf( "%f", <<integer>> );
printf will look for a floating-point number where you have given an integer, but it doesn't know it's an integer; it just looks at the appropriate number of bytes and assumes that you have put the correct thing there.
16.0/3.0 is a double
int num = 16.0/3.0 is a double converted to an int
16/3 is an int
float num = 16/3 is an int converted to a float
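Putting the four cases together with the mismatches fixed, a minimal sketch (expected output in the comments):
#include <stdio.h>

int main(void)
{
    printf("%f\n", 16.0 / 3.0);         /* double printed with %f: 5.333333 */
    printf("%d\n", (int)(16.0 / 3.0));  /* cast to int first, then %d: 5 */

    int num_i = 16.0 / 3.0;             /* conversion happens on assignment: 5 */
    float num_f = 16 / 3;               /* integer division first, then conversion: 5.000000 */
    printf("%d\n", num_i);
    printf("%f\n", num_f);
    return 0;
}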
You can search the web for printf documentation. One page is at http://linux.die.net/man/3/printf
You can understand numbers in C through the concept of implicit type conversion.
During evaluation of any expression, C adheres to strict type-conversion rules, and the result of an expression depends on those rules.
If the operands are of different types, the 'lower' type is automatically converted into the 'higher' type before the operation proceeds, and the result is of the higher type.
1: All short and char operands are automatically converted to int; then
2: if one of the operands is int and the other is float, the int is converted into float, because float is higher than int.
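A small sketch of those rules in action (expected output in the comments):
#include <stdio.h>

int main(void)
{
    char  c = 'A';   /* promoted to int in arithmetic */
    int   i = 3;
    float f = 2.5f;

    printf("%d\n", c + i);      /* int + int -> int: 68 */
    printf("%f\n", i + f);      /* the int is converted to float: 5.500000 */
    printf("%f\n", i / 2 + f);  /* integer division happens first: 3.500000 */
    return 0;
}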
If you want more information about implicit conversion, refer to the book Programming in ANSI C by E Balagurusamy.
Thanks.
Bye:DeeP
printf formats a bit of memory into a human readable string. If you specify that the bit of memory should be considered a floating point number, you'll get the correct representation of a floating point number; however, if you specify that the bit of memory should be considered an integer and it is a floating point number, you'll get garbage.
printf("%d",16.0/3.0);
The result of 16.0/3.0 is a double with the value 5.333333..., which is represented in IEEE 754 double-precision format as follows
0 | 10000000001 | 0101 0101 0101 0101 0101 0101 0101 0101 0101 0101 0101 0101 0101
that is, 0x4015555555555555. On a typical 32-bit implementation, %d picks up the low 32 bits of that pattern, 0x55555555, which read as the integer 1431655765.
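You can confirm this on an IEEE 754 machine with a small sketch (assuming doubles and unsigned long long are both 64 bits wide):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double q = 16.0 / 3.0;
    uint64_t bits;
    memcpy(&bits, &q, sizeof bits);   /* copy the raw bit pattern (assumes 64-bit double) */
    printf("%f = 0x%016llx\n", q, (unsigned long long)bits);    /* 5.333333 = 0x4015555555555555 */
    printf("low 32 bits as an int: %d\n", (int)(uint32_t)bits); /* 1431655765 */
    return 0;
}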
num=16.0/3.0;
is equivalent to num = (int)(16.0/3.0). This converts the double value (5.33333...) to the integer 5.
printf("%f",num);
is effectively printf("%f", (double)num), because a float argument to a variadic function like printf is promoted to double.

Weird result printing pointers as float in C

I know this is wrong and gcc will give you a warning about it, but why does it work (i.e. the numbers are printed correctly, with some rounding difference)?
int main() {
    float *f = (float*) malloc(sizeof(float));
    *f = 123.456;
    printf("%f\n", *f);
    printf("%f\n", f);
    return 0;
}
Edit:
Yes, I'm using gcc on a 32-bit machine. I was curious to see what results I'd get with other compilers.
I meddled with things a little more following Christoph's suggestion:
int main() {
    float *f = (float*) malloc(sizeof(float));
    *f = 123.456;
    printf("%f\n", f); // this
    printf("%f\n", *f);
    printf("%f\n", f); // that
    return 0;
}
This results in the first printf printing a value different from the last printf, despite the two calls being identical.
Reorder the printf() statements and you'll see it won't work any longer, so GCC definitely doesn't fix anything behind your back.
As to why it works at all: because of the default argument promotion of variable arguments, you'll actually pass a double with your first call. As pointers on your system seem to be 32-bit, the second call only overwrites the lower half of the 64-bit floating-point value.
In regard to your modified example:
the first call prints a double-precision value whose upper 32 bits are garbage and whose lower 32 bits are the bit value of the pointer f
the second call prints the value of *f promoted to double precision
the third call prints a double-precision value with the upper 32 bits coming from (double)*f (as those bits still remain on the stack from the previous call); as in the first case, the lower bits again come from the pointer f
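For completeness, a corrected sketch of the program: the pointed-to value goes with %f and the pointer itself with %p (which expects a void *):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    float *f = malloc(sizeof *f);
    if (f == NULL)
        return 1;
    *f = 123.456f;
    printf("%f\n", *f);         /* the float value */
    printf("%p\n", (void *)f);  /* the pointer, printed portably */
    free(f);
    return 0;
}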
The numbers aren't printed correctly for me.
Output:
123.456001
0.000000
I'm using VC++ 2009.
printf has no knowledge of the actual argument types. It just analyzes the format string and interprets the data on the stack accordingly.
By coincidence (more or less =)) a pointer to float has the same size as a float (32 bits) on your platform, so the stack stays balanced after this argument is consumed.
On other platforms or with other data types this may not work.

Resources