About conversion specifiers in C

I have a question about conversion specifiers in C.
In the scanf call below, if I use %lf or %Lf instead of %f, no error occurs. But why does an error occur if I use %f?
#include <stdio.h>
int main(void)
{
long double num;
printf("value: ");
scanf("%f",&num); // If I use %lf or %Lf instead of %f, no error occurs.
printf("value: %f \n",num);
}

%f is meant to be used for reading floats, not doubles or long doubles.
%lf is meant to be used for reading doubles.
%Lf is meant to be used for reading long doubles.
If your program works with %lf when the variable type is long double, it's only a coincidence: it probably works because sizeof(double) happens to equal sizeof(long double) on your platform. Formally, it is undefined behavior.
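For reference, here is a corrected version of the snippet (a minimal sketch) that uses the matching %Lf specifier for long double in both calls:
#include <stdio.h>
int main(void)
{
    long double num;
    printf("value: ");
    if (scanf("%Lf", &num) == 1)        /* %Lf matches a long double * argument */
        printf("value: %Lf\n", num);    /* %Lf also matches a long double in printf */
    return 0;
}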

In looking at the man page for printf(3) on a FreeBSD system (which is POSIX, by the way), I get the following:
The following length modifier is valid for the a, A, e, E, f, F, g, or G conversion:

Modifier     a, A, e, E, f, F, g, G
l (ell)      double (ignored, same behavior as without it)
L            long double
I have used these conversions with a 32-bit float data type. The point is that the different floating-point formats have different sizes, and printf needs to know which one it was given so it can perform the conversion properly. Using %Lf when the argument is actually a float (promoted to double) may access data outside the argument, so you get undefined behavior, possibly even a segmentation fault.
float: 32-bit
double: 64-bit
long double: 80-bit
Now for the long double, the actual size is defined by the platform and the implementation. 80 bits is 10 bytes, but that doesn't exactly fit within a 32-bit alignment without padding. So most implementations use either 96-bits or 128-bits (12 bytes or 16 bytes respectively) to set the alignment.
Be careful here though: just because it might take 128 bits doesn't mean that it is a __float128 (if using gcc or clang). There is at least one platform where long double does mean a __float128 (SunOS, I think), but it is implemented in software and is slow. Furthermore, with some compilers (Microsoft and Intel come to mind), long double is the same as double unless you specify a switch on the command line.
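If you want to see which sizes your own platform actually uses, here is a quick sketch (the numbers vary by compiler and target):
#include <stdio.h>
int main(void)
{
    printf("float:       %zu bytes\n", sizeof(float));
    printf("double:      %zu bytes\n", sizeof(double));
    printf("long double: %zu bytes\n", sizeof(long double));
    return 0;
}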

Related

difference between %fl and %lf in C

I am currently learning about C data types. When I tried to print a double variable, the compiler suggested fl after I typed '%', and I got what looked like an extra digit 1 at the end of the decimals, whereas %lf prints six decimal places in total.
double num=12322;
printf("%lf",num);// result is 12322.000000
printf("%fl",num);// result is 12322.0000001
I've searched in plenty of places, but mostly what is asked is the difference between %f and %lf. Could my situation possibly be the same?
In this call of printf
printf("%lf",num);// result is 12322.000000
the length modifier l in the conversion specifier %lf has no effect.
From the C Standard (7.21.6.1 The fprintf function)
7 The length modifiers and their meanings are:
l (ell) Specifies that a following d, i, o, u, x, or X conversion
specifier applies to a long int or unsigned long int argument; that a
following n conversion specifier applies to a pointer to a long int
argument; that a following c conversion specifier applies to a wint_t
argument; that a following s conversion specifier applies to a pointer
to a wchar_t argument; or has no effect on a following a, A, e, E,
f, F, g, or G conversion specifier.
In this call of printf
printf("%fl",num);// result is 12322.0000001
where the last character in the comment should be the letter 'l', not the digit 1 as you think:
// result is 12322.000000l
^^^
The format string "%fl" means that after outputting an object of type double due to the conversion specification %f, the letter 'l' is output as an ordinary character.
Note that the conversion specifier f can also be combined with the uppercase letter 'L': the conversion specification %Lf serves to output objects of type long double.
I think you actually have a typo in your output....
double num=12322;
printf("%lf",num);// result is 12322.000000
printf("%fl",num);// result is 12322.0000001
is actually
double num=12322;
printf("%lf",num);// result is 12322.000000
printf("%fl",num);// result is 12322.000000l
The C standard says that a float is converted to a double when passed to a variadic function, so %lf and %f are equivalent; %fl is the same as %f... with an l after it.
There are two correct ways of printing a value of type double:
printf("%f", num);
or
printf("%lf", num);
These two have exactly the same effect. In this case, the "l" modifier is effectively ignored.
The reason they have the same effect is that printf is special. printf accepts a variable number of arguments, and for such functions, C always applies the default argument promotions. This means that all integer types smaller than int are promoted to int, and float is promoted to double. So if you write
float f = 1.5;
printf("%f\n", f);
then f is promoted to double before being passed, so inside printf it always gets a double. %f will never see a float, only a double, which is why %f is defined to expect a double and ends up working for both float and double arguments. You therefore don't need an l modifier to say which one you passed. That's kind of confusing, so the Standard says you can put the l there if you want to, but you don't have to, and it doesn't do anything.
(This is all very different, by the way, from scanf, where %f and %lf are totally different, and must be explicitly matched to arguments of type float * versus double *.)
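To make that contrast concrete, here is a minimal sketch: scanf must match %f to float * and %lf to double *, while printf is happy with %f for both:
#include <stdio.h>
int main(void)
{
    float f;
    double d;
    /* scanf needs exact matches: %f for float *, %lf for double * */
    if (scanf("%f %lf", &f, &d) == 2)
        /* printf promotes float to double, so %f prints both correctly */
        printf("%f %f\n", f, d);
    return 0;
}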
I have no idea why your IDE complained about (put a red line under) %lf, and I have no idea what it meant by suggesting, as you said,
fl, lg, l, f,
elf, Alf, ls, sf,
if, la, lo, of
Some of those look like they might be nonstandard, system-specific extensions, but some (especially fl) are nonsense. So, bottom line, it sounds like your IDE's suggestion was confusing, unnecessary, and quite possibly wrong.

Sizeof with different specifiers [duplicate]

This question already has answers here:
Using %f to print an integer variable
I want to know why sizeof doesn't work with different kinds of format specifiers.
I know that sizeof is usually used with the %zu format specifier, but for my own knowledge I want to know what happens behind the scenes, and why it prints nan when I use it with %f or a large number when used with %lf.
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%d\n", sizeof(a + d)); // prints normal size of expression
printf("%lf\n", sizeof(s)); // prints big number
printf("%f", sizeof(d)); // prints nan
The sizeof operator yields a value of type size_t. The proper specifier for size_t in C99 is %zu. You can use %u on systems where size_t and unsigned int are the same type, or at least have the same size and representation. On 64-bit systems, size_t values have 64 bits and are therefore larger than 32-bit ints. On 64-bit Linux and OS X, this type is defined as unsigned long, and on 64-bit Windows as unsigned long long, hence using %lu or %llu on those systems is fine too.
Passing a size_t for an incompatible conversion specification has undefined behavior:
the program could crash (and it probably will if you use %s)
the program could display the expected value (as it might for %d)
the program could produce weird output such as nan for %f or something else...
The reason for this is integers and floating point values are passed in different ways to printf and they have a different representation. Passing an integer where printf expects a double will let printf retrieve the floating point value from registers or memory locations that have random contents. In your case, the floating point register just happens to contain a nan value, but it might contain a different value elsewhere in the program or at a later time, nothing can be expected, the behavior is undefined.
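For completeness, the portable C99 way to print the sizes above is %zu (a minimal sketch):
#include <stdio.h>
int main(void)
{
    int a = 0;
    long long d = 1000000000000;
    printf("%zu\n", sizeof(a + d));   /* size of the long long result, typically 8 */
    printf("%zu\n", sizeof a);        /* typically 4 */
    printf("%zu\n", sizeof d);        /* typically 8 */
    return 0;
}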
Some legacy systems do not support %zu, notably C runtimes by Microsoft. On these systems, you can use %u or %lu and use a cast to convert the size_t to an unsigned or an unsigned long:
int a = 0;
long long d = 1000000000000;
int s = a + d;
printf("%u\n", (unsigned)sizeof(a + d)); // should print 8
printf("%lu\n", (unsigned long)sizeof(s)); // should print 4
printf("%llu\n", (unsigned long long)sizeof(d)); // prints 4 or 8 depending on the system
I want to know for my own knowledge what happens behind and why it prints nan when I use it with %f or a long number when used with %lf
Several reasons.
First of all, printf doesn't know the types of the additional arguments you actually pass to it. It's relying on the format string to tell it the number and types of additional arguments to expect. If you pass a size_t as an additional argument, but tell printf to expect a float, then printf will interpret the bit pattern of the additional argument as a float, not a size_t. Integer and floating point types have radically different representations, so you'll get values you don't expect (including NaN).
Secondly, different types have different sizes. If you pass a 16-bit short as an argument, but tell printf to expect a 64-bit double with %f, then printf is going to look at the extra bytes immediately following that argument. It's not guaranteed that size_t and double have the same sizes, so printf may either be ignoring part of the actual value, or using bytes from memory that isn't part of the value.
Finally, it depends on how arguments are being passed. Some architectures use registers to pass arguments (at least for the first few arguments) rather than the stack, and different registers are used for floats vs. integers, so if you pass an integer and tell it to expect a double with %f, printf may look in the wrong place altogether and print something completely random.
printf is not smart. It relies on you to use the correct conversion specifier for the type of the argument you want to pass.

Pointer not giving expected output in c

Why doesn't the double variable show a garbage value?
I know I am playing with pointers, but I meant to. And is there anything wrong with my code? It threw a few warnings because of incompatible pointer assignments.
#include "stdio.h"
double y= 0;
double *dP = &y;
int *iP = dP;
void main()
{
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10lf %#10lf \n",y,*dP,*iP,*(iP+1));
scanf("%lf %d %d",&y,iP,iP+1);
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10d %#10d \n",y,*dP,*iP,*(iP+1));
}
Welcome to Stack Overflow. It's not very clear what you're trying to do with this code, but the first thing I'll say is that it does exactly what it says it does. It tries to format data with the wrong format string. The result is garbage, but that doesn't necessarily mean it will look like garbage.
If part of the idea is to print out the internal bit pattern of a double in hexadecimal, you can do that--but the code will be implementation-dependent. The following should work on just about any modern 32 or 64-bit desktop implementation using 64-bits for both double and long long int types:
double d = 3.141592653589793238;
printf("d = %g = 0x%016llX\n", d, *(long long*)&d);
The %g specification is a quick way to print out a double in (usually) easily readable form. The %llX format prints an unsigned long long int in hexadecimal. The byte order of the result is implementation-dependent, even if you know that both double and long long int have the same number of bits. On a Mac, PC, or other Intel/AMD architecture machine, you'll get the display in most-significant-digit-first order.
The *(long long *)&d expression (reading from right to left) will take the address of d, convert that double* pointer to a long long * pointer, then dereference that pointer to get a long long value to format.
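As an aside, the same bit pattern can be obtained without the pointer cast by copying the bytes with memcpy; here is a sketch, assuming (as the code above does) that double and unsigned long long are both 64 bits:
#include <stdio.h>
#include <string.h>
int main(void)
{
    double d = 3.141592653589793238;
    unsigned long long bits;
    memcpy(&bits, &d, sizeof bits);   /* copy the raw bytes of d into an integer */
    printf("d = %g = 0x%016llX\n", d, bits);
    return 0;
}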
Almost every implementation this century uses the IEEE 754 format for hardware floating point; double corresponds to the 64-bit IEEE format (binary64).
You can find out more about printf formatting at:
http://www.cplusplus.com/reference/cstdio/printf/

How to use float.h macros to enhance the floating point precision

As I understood from this answer, there is a way to extend the precision using float.h via the macro LDBL_MANT_DIG. My goal is to enhance the floating point precision of double values so that I can store a more accurate number, e.g., 0.000000000566666 instead of 0.000000. Kindly, can someone give a short example of how to use this macro so that I can extend the precision stored in the buffer?
Your comment about wanting to store more accurate numbers so you don't get just 0.000000 suggests that the problem is not in the storage but in the way you're printing the numbers. Consider the following code:
#include <stdio.h>
int main(void)
{
float f = 0.000000000566666F;
double d = 0.000000000566666;
long double l = 0.000000000566666L;
printf("%f %16.16f %13.6e\n", f, f, f);
printf("%f %16.16f %13.6e\n", d, d, d);
printf("%lf %16.16lf %13.6le\n", d, d, d);
printf("%Lf %16.16Lf %13.6Le\n", l, l, l);
return 0;
}
When run, it produces:
0.000000 0.0000000005666660 5.666660e-10
0.000000 0.0000000005666660 5.666660e-10
0.000000 0.0000000005666660 5.666660e-10
0.000000 0.0000000005666660 5.666660e-10
As you can see, using the default "%f" format prints 6 decimal places, which makes the value appear as 0.000000. However, as the formats with more precision show, the value is stored correctly and can be displayed with more decimal places, or with the %e format, or indeed with the %g format, though the code doesn't show that in use (its output would be the same as the %e format in this example).
The %f conversion specification, as opposed to %lf or %Lf, says 'print a double'. Note that when float values are passed to printf(), they are automatically converted to double (just as numeric types shorter than int are promoted to int). Therefore, %f can be used for both float and double types, and indeed the %lf format (which was defined in C99 — everything else was defined in C90) can be used to format float or double values. The %Lf format expects a long double.
There isn't a way to store more precision in a float or double simply by using any of the macros from <float.h>. Those are more descriptions of the characteristics of the floating-point types and the way that they behave than anything else.
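To see what those macros describe on your own system, here is a minimal sketch (the values printed are implementation-specific):
#include <stdio.h>
#include <float.h>
int main(void)
{
    printf("FLT_DIG  = %d, FLT_MANT_DIG  = %d\n", FLT_DIG, FLT_MANT_DIG);
    printf("DBL_DIG  = %d, DBL_MANT_DIG  = %d\n", DBL_DIG, DBL_MANT_DIG);
    printf("LDBL_DIG = %d, LDBL_MANT_DIG = %d\n", LDBL_DIG, LDBL_MANT_DIG);
    return 0;
}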
The answer you cited only mentions that the macro is equal to the number of precision digits that you can store. It cannot in any way increase precision. But the macro is for "long doubles", not doubles. You can use the long double type if you need more precision than the double type:
long double x = 3.14L;
Notice the "L" after the number for specifying a long double literal.
Floating-point types are implemented in hardware. The precision is standardized across the industry and baked into the circuits of the CPU. There's no way to increase it beyond long double except by using an extended-precision software library such as GMP.
The good news is that floating-point numbers don't get bogged down by leading zeroes. 0.000000000566666 won't round to zero. With only six significant digits, you need only a single-precision float to represent it well.
There is an issue with math.h (not float.h), where the POSIX standard fails to provide π and e with long double precision. There are a couple of workarounds: GNU defines M_PIl and M_El, for example, or you can use the preprocessor to paste an l onto such literal constants from another library (giving the number long double type) and hope for spare digits.

scanf not working. need to read double from console

I'm not sure what I'm doing wrong, but I'm not able to read a double from the console. Reading an int works fine for some reason. I'm using Xcode.
double n1;
// get input from the user
printf("Enter first number: ");
scanf("%f", &n1);
printf("%f", n1);
This will always print 0 no matter what I enter.
%f is looking for a float, not a double. If you want to use a double, use the format %lf.
As a somewhat interesting aside, clang warns about this without any extra flags, gcc 4.6 won't warn about it even with -Wall -Wextra -pedantic.
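A corrected version of the snippet might look like this (a minimal sketch with the error handling kept simple):
#include <stdio.h>
int main(void)
{
    double n1;
    printf("Enter first number: ");
    if (scanf("%lf", &n1) == 1)   /* %lf matches a double * argument */
        printf("%f\n", n1);       /* %f is fine for printing a double */
    return 0;
}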
%f is meant for a single-precision floating-point value (float). The format specifier you need is %lf, which tells scanf to expect a double-precision floating-point value (double).
It's all about how data is stored in memory.
Let me first explain how double and float are stored in memory.
A double (64 bits) is laid out as 1 sign bit, 11 exponent bits and 52 mantissa bits, whereas a float (32 bits) is laid out as 1 sign bit, 8 exponent bits and 23 mantissa bits.
Also have a look at en.wikipedia.org/wiki/Floating_point#Internal_representation (all floating-point data types).
So here you are asking scanf to read the input as %f (i.e. a float, which is 4 bytes, while a double is 8 bytes), so it takes the input from the console, converts it to float and stores those 4 bytes at the memory location of the variable (here n1), which actually occupies 8 bytes. The remaining bytes of n1 are left as they were, so the stored bit pattern does not represent the number you typed (strictly speaking, the behavior is undefined).
