I'm not sure what I'm doing wrong, but I'm not able to read a double from the console. Reading an int works fine for some reason. I'm using Xcode.
double n1;
// get input from the user
printf("Enter first number: ");
scanf("%f", &n1);
printf("%f", n1);
This will always print 0 no matter what I enter.
%f is looking for a float, not a double. If you want to use a double, use the format %lf.
As a somewhat interesting aside, clang warns about this without any extra flags, gcc 4.6 won't warn about it even with -Wall -Wextra -pedantic.
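For example, a corrected version of the snippet from the question:

double n1;
// get input from the user
printf("Enter first number: ");
scanf("%lf", &n1);  // the l length modifier tells scanf to store a double
printf("%f", n1);   // %f expects a double in printf, so this is correct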
%f is meant for a single-precision floating-point value (float). The format specifier you need is %lf, where l is the length modifier that tells scanf to expect a double.
It's all about how data is stored in memory.
Let me first explain how double and float are stored in memory.
A double (long float, 64 bits) is stored in memory like this (little-endian notation):
[diagram: IEEE-754 double layout, 1 sign bit, 11 exponent bits, 52 fraction bits]
Whereas a float (32 bits) is stored like this (little-endian notation):
[diagram: IEEE-754 float layout, 1 sign bit, 8 exponent bits, 23 fraction bits]
Also have a look at en.wikipedia.org/wiki/Floating_point#Internal_representation (covers all the floating-point data types).
So here you are asking scanf to read a float with %f (4 bytes), but n1 is a double (8 bytes). scanf converts the console input to a 4-byte float and writes it into only part of the 8-byte variable, so when that memory is later read as a double you get garbage, typically 0.
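As a rough sketch of that effect (simulated here with memcpy, since the mismatched scanf call itself is undefined behavior):

#include <stdio.h>
#include <string.h>

int main(void) {
    double n1 = 0.0;        // 8-byte object
    float as_float = 3.14f; // what scanf("%f", ...) would produce

    // scanf("%f", &n1) writes only sizeof(float) == 4 bytes into
    // the 8-byte double; the remaining 4 bytes are left untouched.
    memcpy(&n1, &as_float, sizeof as_float);

    printf("%f\n", n1); // typically prints 0.000000
    return 0;
}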
I have a question while studying the C language.
printf("%d, %f \n", 120, 125.12);
The output value in this code is:
120, 125.120000
And:
printf("%d, %f \n", 120.1, 125.12);
The output value of this code is:
1717986918, 0.000000
Of course, I understand that %d printed a strange value, because I passed a floating-point value where an integer format character was expected. But I don't know why the %f after it prints 0.000000. Does a mismatched specifier affect the arguments that come after it?
I'm asking because it keeps coming out like this even if I enter different values.
printf is a variadic function, meaning its argument types and counts are variable. As such, they must be extracted according to their type. Extraction of a given argument is dependent on the proper extraction of the arguments that precede it. Different argument types have different sizes, alignments, etc.
If you use an incorrect format specifier for a printf argument, then it throws printf out of sync. The arguments after the one that was incorrectly extracted will not, in general, be extracted correctly.
In your example, it probably extracted only a portion of the first double argument that was passed, since it was expecting an int argument. After that it was out of sync and improperly extracted the second double argument. This is speculative though, and varies from one architecture to another. The behavior is undefined.
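For comparison, with matching conversion specifiers the same values print as expected:

printf("%f, %f \n", 120.1, 125.12);  /* prints: 120.100000, 125.120000 */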
why %f is 0.000000 after that.
According to the C standard, because %d expects an int and you passed a double, the behavior is undefined and anything is allowed to happen.
Anyway:
I used http://www.binaryconvert.com/convert_double.html to convert doubles to bits:
120.1 === 0x405E066666666666
125.12 === 0x405F47AE147AE148
I guess that your platform is x86-ish, little endian, with ILP32 data model.
The %d printf format specifier is for int, and sizeof(int) is 4 bytes (32 bits) in ILP32. Because the architecture is little endian and so is the stack, the first 4 bytes taken from the stack are 0x66666666, whose decimal value, 1717986918, is what gets printed.
Left on the stack are three 32-bit little-endian words: 0x405E0666, 0x147AE148, 0x405F47AE.
The %f printf format specifier is for double, and sizeof(double) on your platform is 8 bytes (64 bits). We already consumed 4 bytes from the stack; now we consume 8 more. Remembering the endianness, those bytes form 0x147AE148405E0666, which is interpreted as a double. Again using that site to convert the hex, the result is approximately 5.11013578783948654276686949883E-210. E-210: this is a very, very small number. Because %f prints only 6 digits after the decimal point by default, every printed digit is 0. If you used the %g printf format specifier instead, the number would be printed as 5.11014e-210.
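You can check this by reassembling those 8 bytes yourself (a small sketch, assuming IEEE-754 doubles):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint64_t bits = 0x147AE148405E0666ULL; // the 8 bytes %f consumed
    double d;
    memcpy(&d, &bits, sizeof d);           // reinterpret the bits as a double

    printf("%f\n", d); // 0.000000: all six fractional digits are zero
    printf("%g\n", d); // 5.11014e-210
    return 0;
}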
This can be attributed to the fact that the printf function is not type-safe, i.e. there is no connection between the specifiers in the format string and the passed arguments.
The default argument promotions convert integer arguments narrower than int to int (4 bytes) and float arguments to double (8 bytes). As the types are guessed from the format string, a shift in the data occurs. It is likely that the 0.000000 appears because what is loaded corresponds to a floating-point value at or near zero (a very small biased exponent).
Try with 1.0, 1.e34.
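That is, something like this (still undefined behavior, shown only as an experiment):

printf("%d, %f \n", 1.0, 1.e34);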
I want to know why sizeof doesn't work with different types of format specifiers.
I know that sizeof is usually used with the %zu format specifier, but for my own knowledge I want to know what happens behind the scenes, and why it prints nan when I use it with %f, or a big number when used with %lf.
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%d\n", sizeof(a + d)); // prints normal size of expression
printf("%lf\n", sizeof(s)); // prints big number
printf("%f", sizeof(d)); // prints nan
sizeof evaluates to a value of type size_t. The proper specifier for size_t in C99 is %zu. You can use %u on systems where size_t and unsigned int are the same type, or at least have the same size and representation. On 64-bit systems, size_t values have 64 bits and are therefore larger than 32-bit ints. On 64-bit Linux and OS X this type is defined as unsigned long, and on 64-bit Windows as unsigned long long, hence using %lu or %llu on those systems is fine too.
Passing a size_t for an incompatible conversion specification has undefined behavior:
the program could crash (and it probably will if you use %s)
the program could display the expected value (as it might for %d)
the program could produce weird output such as nan for %f or something else...
The reason for this is that integers and floating-point values are passed to printf in different ways and have different representations. Passing an integer where printf expects a double makes printf retrieve the floating-point value from registers or memory locations with unrelated contents. In your case, the floating-point register just happens to contain a nan value, but it might contain a different value elsewhere in the program or at a later time. Nothing can be expected; the behavior is undefined.
Some legacy systems do not support %zu, notably C runtimes by Microsoft. On these systems, you can use %u or %lu and use a cast to convert the size_t to an unsigned or an unsigned long:
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%u\n", (unsigned)sizeof(a + d)); // should print 8
printf("%lu\n", (unsigned long)sizeof(s)); // should print 4
printf("%llu\n", (unsigned long long)sizeof(d)); // prints 4 or 8 depending on the system
I want to know for my own knowledge what happens behind the scenes, and why it prints nan when I use it with %f, or a big number when used with %lf
Several reasons.
First of all, printf doesn't know the types of the additional arguments you actually pass to it. It's relying on the format string to tell it the number and types of additional arguments to expect. If you pass a size_t as an additional argument, but tell printf to expect a float, then printf will interpret the bit pattern of the additional argument as a float, not a size_t. Integer and floating point types have radically different representations, so you'll get values you don't expect (including NaN).
Secondly, different types have different sizes. If you pass a 32-bit int as an argument but tell printf to expect a 64-bit double with %f, printf is going to read the extra bytes immediately following that argument. size_t and double are not guaranteed to have the same size, so printf may either ignore part of the actual value or pull in bytes from memory that are not part of it.
Finally, it depends on how arguments are being passed. Some architectures use registers to pass arguments (at least for the first few arguments) rather than the stack, and different registers are used for floats vs. integers, so if you pass an integer and tell it to expect a double with %f, printf may look in the wrong place altogether and print something completely random.
printf is not smart. It relies on you to use the correct conversion specifier for the type of the argument you want to pass.
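For reference, here is the snippet from the question with matching conversion specifiers (assuming a C99-conforming library that supports %zu):

#include <stdio.h>

int main(void) {
    int a = 0;
    long long d = 1000000000000;
    int s = (int)(a + d); // truncating conversion, as in the question

    printf("%zu\n", sizeof(a + d)); // size of a long long, typically 8
    printf("%zu\n", sizeof(s));     // size of an int, typically 4
    printf("%zu\n", sizeof(d));     // typically 8
    return 0;
}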
I know that in C an unsuffixed floating-point constant is treated as a double by default, and that if you want a float you have to write it like this:
float x = 0.11f;
but what if my x value comes from a scanf? How can I make sure that when I print it, it doesn't get rounded down or up?
Here's my code btw, thanks for the help.
#include <stdio.h>

int main() {
    float number = 0;
    float comparison;

    do {
        printf("\nEnter a number: ");
        scanf("%f", &comparison);
        if (comparison > number) {
            number = comparison;
        }
    } while (comparison > 0);

    printf("The largest number entered was: %f\n\n", number);
}
what if my x value comes from a scanf? How can I make sure that when I print it, it doesn't get rounded down or up?
scanf with an %f directive will read the input and convert it to a float (not a double). If the matched text does not correspond to a number exactly representable as a float then there will be rounding at this stage. There is no alternative.
When you pass an argument of type float to printf() for printing, it will be promoted to type double. This is required by the signature of that function. But type double can exactly represent all values of type float, so this promotion does not involve any rounding. printf's handling of the %f directives is aligned with this automatic promotion: the corresponding (promoted) argument is expected to be of type double.
There are multiple avenues to reproducing the input exactly, depending on what constraints you are willing to put on that input. The most general is to read, store, and print the data as a string, though even this has its complications.
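A minimal sketch of the string approach (validating that the text is actually numeric is one of the complications mentioned):

#include <stdio.h>

int main(void) {
    char buf[64];
    // Read the number as text and echo it back verbatim; since no
    // binary floating-point conversion happens, nothing is rounded.
    if (scanf("%63s", buf) == 1) {
        printf("%s\n", buf);
    }
    return 0;
}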
If you are willing to place a limit on the maximum decimal range and precision for which verbatim reproduction is supported, then you may be able to get output rounded to the same representation as the input by specifying a precision in your printf field directives:
float f;
scanf("%f", &f);
printf("%f %.2f %5.2f\n", f, f, f);
If you want to use a built-in floating-point format and also avoid trailing zeroes being appended then either an explicit precision like that or a %g directive is probably needed:
printf("%f %g\n", f, f);
Other alternatives are more involved, such as creating a fixed-point or arbitrary-precision decimal data type, along with appropriate functions for reading and writing it. I presume that goes beyond what you're presently interested in doing.
Note: "double" is short for "double precision", as opposed to notionally single-precision "float". The former is the larger type in terms of storage and representational capability. In real-world implementations, there is never any "rounding down" from float to double.
I have a question about conversion specifier in C.
On the scanf line below, if I use %lf or %Lf instead of %f, no error occurs. But why does an error happen if I use %f?
#include <stdio.h>

int main(void)
{
    long double num;
    printf("value: ");
    scanf("%f", &num); // If I use %lf or %Lf instead of %f, no error occurs.
    printf("value: %f \n", num);
}
%f is meant to be used for reading floats, not doubles or long doubles.
%lf is meant to be used for reading doubles.
%Lf is meant to be used for reading long doubles.
If your program works with %lf when the variable type is long double, it's only a coincidence. It probably works because sizeof(double) is the same as sizeof(long double) on your platform. Strictly speaking, it is undefined behavior.
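A corrected version of the program, with matching length modifiers on both calls:

#include <stdio.h>

int main(void)
{
    long double num;
    printf("value: ");
    if (scanf("%Lf", &num) == 1) {    // %Lf: store into a long double
        printf("value: %Lf \n", num); // printf also needs L for long double
    }
}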
In looking at the man page for printf(3) on a FreeBSD system (which is POSIX, by the way), I get the following:
The following length modifier is valid for the a, A, e, E, f, F, g, or G conversion:

Modifier     a, A, e, E, f, F, g, G
l (ell)      double (ignored, same behavior as without it)
L            long double
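In other words, for printf something like this works (a small sketch; %f covers both float and double because of argument promotion):

#include <stdio.h>

int main(void) {
    float f = 0.5f;        // promoted to double when passed to printf
    double d = 0.5;
    long double ld = 0.5L;

    printf("%f %f %Lf\n", f, d, ld); // l would be ignored here; L is required
    return 0;
}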
I have used these conversions with a 32-bit float data type. The point is that different floating-point formats have different sizes, and printf needs to know which one it is so that it can fetch the argument properly. Using %Lf on a float (which is promoted to double) makes printf read data outside the argument, which is undefined behavior and may even cause a segmentation fault.
float: 32-bit
double: 64-bit
long double: 80-bit
Now for the long double, the actual size is defined by the platform and the implementation. 80 bits is 10 bytes, but that doesn't exactly fit within a 32-bit alignment without padding. So most implementations use either 96-bits or 128-bits (12 bytes or 16 bytes respectively) to set the alignment.
Be careful here, though: just because it might take 128 bits doesn't mean that it is a __float128 (if using gcc or clang). There is at least one platform where long double does mean a __float128 (SunOS, I think), but it is implemented in software and is slow. Furthermore, with some compilers (Microsoft and Intel come to mind) long double is the same as double unless you specify a switch on the command line.
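An easy way to see what your own platform and compiler do is to print the sizes directly:

#include <stdio.h>

int main(void) {
    printf("float: %zu, double: %zu, long double: %zu\n",
           sizeof(float), sizeof(double), sizeof(long double));
    return 0;
}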
#include <stdio.h>

main()
{
    float x = 2;
    float y = 4;
    printf("\n%d\n%f", x/y, x/y);
    printf("\n%f\n%d", x/y, x/y);
}
Output:
0
0.000000
0.500000
0
compiled with gcc 4.4.3
The program exited with error code 12
As noted in other answers, this is because of the mismatch between the format string and the type of the argument.
I'll guess that you're using x86 here (based on the observed results).
The arguments are passed on the stack, and x/y, although of type float, will be passed as a double to a varargs function (due to type "promotion" rules).
An int is a 32-bit value, and a double is a 64-bit value.
In both cases you are passing x/y (= 0.5) twice. The representation of this value, as a 64-bit double, is 0x3fe0000000000000. As a pair of 32-bit words, it's stored as 0x00000000 (least significant 32 bits) followed by 0x3fe00000 (most significant 32 bits). So the arguments on the stack, as seen by printf(), look like this:
0x3fe00000
0x00000000
0x3fe00000
0x00000000 <-- stack pointer
In the first of your two cases, the %d causes the first 32-bit value, 0x00000000, to be popped and printed. The %f pops the next two 32-bit values, 0x3fe00000 (least significant 32 bits of 64 bit double), followed by 0x00000000 (most significant). The resulting 64-bit value of 0x000000003fe00000, interpreted as a double, is a very small number. (If you change the %f in the format string to %g you'll see that it's almost 0, but not quite).
In the second case, the %f correctly pops the first double, and the %d pops the 0x00000000 half of the second double, so it appears to work.
When you say %d in the printf format string, you must pass an int value as the corresponding argument. Otherwise the behavior is undefined, meaning that your computer may crash or aliens might knock at your door. The same goes for %f and double.
Yes. Arguments are read from the vararg list to printf in the same order that format specifiers are read.
Both printf statements are invalid because you're using a format specifier that expects an int, but you're giving it a float (which is promoted to double).
What you are doing is undefined behaviour. What you are seeing is coincidental; printf could write anything.
You must match the exact type when giving printf arguments. You can e.g. cast:
printf("\n%d\n%f", (int)(x/y), x/y);
printf("\n%f\n%d", x/y, (int)(x/y));
This result is not surprising: in the first %d you passed a double where an integer was expected.
http://en.wikipedia.org/wiki/Format_string_attack
Something related to my question; it supports Matthew's answer.