What will be the output of the following (and why)?
printf("%d",2.37);
Apparently, printf is a variadic function, and we can never know the types in a variable argument list, so we always have to specify the format specifiers manually.
So 2.37, which would be stored as a double according to the IEEE standard, would be fetched and printed in integer format.
But the output is 0.
What is the reason?
It is undefined behavior. You're passing a double argument to a function that expects to retrieve an int from its varargs macros, and there's no telling at all what that is going to lead to. In theory, it may even crash (with a calling convention that specifies that variadic arguments of different types are passed in different ways or on different stacks).
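For contrast, here is a minimal sketch of two well-defined ways to print that value (choosing %f or an explicit cast here is purely illustrative):

#include <stdio.h>

int main(void)
{
    printf("%f\n", 2.37);       /* %f matches the double argument: prints 2.370000 */
    printf("%d\n", (int)2.37);  /* convert explicitly if an int is wanted: prints 2 */
    return 0;
}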
Related
What is the point of a format specifier in C if we have already set the type of the variable before printf?
For example:
#include <stdio.h>
int main(void)
{
    int a = 7;
    printf("%d", a);
}
Like, it's already stated what a is; it's an integer (int). So what is the point of adding %d to specify that it's an integer?
The answer to this question really only makes sense in the context of C's history.
C is, by now, a pretty old language. Though undoubtedly a "high level language", it is famously low-level as high-level languages go. And its earliest compiler was deliberately and self-consciously small and simple.
In its first incarnation, C did not enforce type safety during function calls. For example, if you called sqrt(144), you got the wrong answer, because sqrt expects an argument of type double, but 144 is an int. It was the programmer's responsibility to call a function with arguments of the correct types: the compiler did not know (did not even attempt to keep track of) the arguments expected by each function, so it did not and could not perform automatic conversions. (A separate program, lint, could check that functions were called with the correct arguments.)
C++ corrected this deficiency by introducing the function prototype. Prototypes were inherited by C in the first ANSI C standard in 1989. However, a function prototype only works for a function that expects a single, fixed argument list, meaning that it can't help for functions that accept a variable number of arguments, the premier example being printf.
The other thing to remember is that, in C, printf is a more or less ordinary function. ("Ordinary" other than accepting a variable number of arguments, that is.) So the compiler has no direct mechanism to notice the types of the arguments and make that list of types available to printf. printf has no way of knowing, at run time, what types were passed during any given call; it can only rely (it must rely) on the clues provided in the format string. (This is by contrast to languages, many of them, where the print statement is an explicit part of the language parsed by the compiler, meaning that the compiler can do whatever it needs to do in order to treat each argument properly according to its known type.)
So, by the rules of the language (which are constrained by backwards compatibility and the history of the language), the compiler can't do anything special with the arguments in a printf call, other than performing what is called the default argument promotions. So the compiler can't fix things (can't perform the "correct" implicit conversion) if you write something like
int a = 7;
printf("%f", a);
This is, admittedly, an uncomfortable situation. These days, programmers are used to the protections and the implicit promotions provided for by function prototypes. If, these days, you can call
int x = sqrt(144);
and have the right thing happen, why can't you similarly call
printf("%f\n", 144);
Well, you can't, although a good, modern compiler will try to help you out anyway. Although the compiler doesn't have to inspect the format string (because that's printf's job to do, at run time), and the compiler isn't allowed to insert any implicit conversions (other than the default promotions, which don't help here), a compiler can duplicate printf's logic, inspect the format string, and issue strong warnings if the programmer makes a mistake. For example, given
printf("%f\n", 144);
gcc prints "warning: format ‘%f’ expects argument of type ‘double’, but argument 2 has type ‘int’", and clang prints "warning: format specifies type 'double' but the argument has type 'int'".
In my opinion, this is a fine compromise, balancing C's legacy behavior with modern expectations.
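For completeness, a minimal sketch of the conversion the programmer must write by hand, since the compiler is not allowed to insert it for a variadic argument:

#include <stdio.h>

int main(void)
{
    int x = 144;

    /* The explicit cast (or a floating-point literal) supplies the conversion
     * that the compiler may not perform implicitly in a variadic call. */
    printf("%f\n", (double)x);
    printf("%f\n", 144.0);
    return 0;
}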
what is the point of adding %d to specify that it's an integer?
printf() is a function which receives a variable number of arguments of various types after the format argument. It does not directly know the number or the types of the arguments it receives.
The caller knows the count and types of the arguments it gives to printf().
To convey that information, the caller uses the format argument to encode the argument count and types; printf() decodes the format to learn them. It is very important that the format and the arguments that follow it are consistent.
printf() accepts a variable number of arguments. To process those variable arguments, it (via va_start()) needs to know what the last fixed argument is. It (via va_arg()) also needs to know the type of each argument so it can figure out how much data to read.
The format specifier is also a compact template (or DSL) for expressing how text and variables should be formatted, including field width, alignment, precision, and encoding.
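To make that concrete, here is a sketch of a hypothetical printf-like function, tiny_print(), where single-character codes in the format string are the only thing telling va_arg() which type to fetch next:

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical example, not a real library function: each character in fmt
 * ('d' = int, 'f' = double, 's' = string) tells va_arg() what to read next. */
static void tiny_print(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);                 /* fmt is the last fixed argument */
    for (; *fmt != '\0'; ++fmt) {
        switch (*fmt) {
        case 'd': printf("%d ", va_arg(ap, int));    break;
        case 'f': printf("%f ", va_arg(ap, double)); break;
        case 's': printf("%s ", va_arg(ap, char *)); break;
        }
    }
    va_end(ap);
    putchar('\n');
}

int main(void)
{
    tiny_print("dfs", 7, 2.37, "hello");   /* the codes must match the arguments */
    return 0;
}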
I am new to programming. I was trying to find the square root of a number using the sqrt() function in C.
int main(void)
{
    int n;
    scanf("%d", &n);
    printf("%d\n", sqrt(n));
    return 0;
}
When I enter a value of n = 5, I get some large negative number. Can anyone explain, please?
You've produced undefined behavior by passing the wrong type to printf: the %d format requires a matching argument of type int, but your argument has type double. You need %f (or %e or %g or %a) to print it. Also, there may be other problems, e.g. if you omitted #include <math.h>.
As others have pointed out, the problem here is that the format specifier is wrong. You need to #include <math.h> to get the proper return type of sqrt(), then use a format specifier like %f. Also, turn up your compiler warnings until it tells you something was wrong here. -Wall -Wextra -pedantic -Wno-system-headers is a good choice.
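Putting those fixes together, a corrected version of the snippet might look like this (assuming n was meant to be an int; on some systems you also need to link with -lm):

#include <math.h>    /* declares sqrt() as double sqrt(double) */
#include <stdio.h>

int main(void)
{
    int n;
    if (scanf("%d", &n) == 1)
        printf("%f\n", sqrt(n));   /* sqrt returns double, so use %f */
    return 0;
}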
I’m adding an answer, though, to provide historical background on why float variables get promoted to double in printf() argument lists, but not scanf(), since this confused people in the comments.
In the instruction set of the DEC PDP-10 and PDP-11 computers, on which C was originally developed, the float type existed only to save space, and a program needed to convert a float to double to do any calculations on it. In early versions of C, before ANSI function prototypes, all float arguments to a function were promoted to double automatically before being passed (and also char to int). Originally, this ran better at a low level, and also had the advantage of avoiding round-off and overflow error on math using the shorter types. This convention also simplified writing functions that took a varying number of arguments of varying types, such as printf(). The caller could just pass anything in, the compiler would let it, and it was the called function’s job to figure out what the argument list was supposed to be at runtime.
When C added function prototypes, these old rules were kept for backward compatibility only with legacy function declarations (extern double sqrt() rather than extern double sqrt(double) or the C11 generic equivalent). Since basically nobody writes functions that way any more, this is a historic curiosity—with one exception. A varargs function like int printf(const char*, ...); cannot be written in C with type checking of the variable arguments. (There is a C++ way to do this using variadic templates.) The standards committee also did not want to break all existing code that printed a float. So those are still promoted according to the old rules.
In scanf(), none of this applies because the storage arguments are passed by reference, and scanf() needs to be sure it’s writing the data in the same type as the variable that holds it. Argument-promotion never comes into play, because only pointers are ever passed.
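A short sketch of that asymmetry: scanf() needs the exact pointed-to type, while printf() sees every float argument as a double:

#include <stdio.h>

int main(void)
{
    float  f;
    double d;

    /* scanf: %f expects float*, %lf expects double*; no promotion happens. */
    if (scanf("%f %lf", &f, &d) == 2) {
        /* printf: f is promoted to double, so %f works for both arguments. */
        printf("%f %f\n", f, d);
    }
    return 0;
}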
I met the same problem, and I wanted to get the answer as an int, so I used an explicit (forced) type conversion, like:
printf("%d\n", (int)sqrt(n));
This happens because the return type and parameter type of sqrt() are not declared.
You can solve this either by including the header file:
#include <math.h>
or by explicitly declaring the return and parameter types like this:
double sqrt(double);
Also, as mentioned above, use the correct format specifier (e.g. %f).
I take the age variable out of the printf() call just to see what happens, then compile it with make. It seems it only throws warnings about more % conversions than data arguments and an unused age variable, but no compile error. I then run the executable file and it does run; only, every time I run it, it returns a different random integer. I'm wondering what causes this behavior?
#include <stdio.h>

int main(int argc, char *arg[]) {
    int age = 10;
    int height = 72;
    printf("I'm %d years old\n");
    printf("I'm %d inches tall\n", height);
    return 0;
}
As per the printf() specification, if there are insufficient arguments for the format, it invokes undefined behavior.
So, your code
printf("I'm %d years old\n");
which is missing the required argument for %d, invokes UB and is not guaranteed to produce any valid result.
Cross reference, C11 standard, chapter §7.21.6.1
[..] If there are insufficient arguments for the format, the behavior is undefined. [..]
According to the C Standard (7.21.6.1 The fprintf function - the same is valid for printf)
...If there are insufficient arguments for the format, the behavior is undefined. If the format is exhausted while arguments remain, the excess arguments are evaluated (as always) but are otherwise ignored.
printf normally uses the cdecl calling convention, in which arguments are passed on the stack. If your format string tells the function to expect one more argument than you actually passed, it will read from the place on the runtime stack where that argument would have been. Since you never put your number there, that location probably contains some garbage data, so the value that gets printed is arbitrary.
With only one exception I know of, the C Standard imposes no requirements with regard to any action which, in some plausible implementations, might be usefully trapped. It is not hard to imagine a C compiler passing a variadic function like printf an indication of what arguments were passed to it, nor would it be hard to imagine an implementer thinking it could be useful to have the compiler trigger a trap if code tries to retrieve a variadic parameter of some type when the corresponding argument has some other type or doesn't exist at all. Because it could be useful to have compilers trap in such cases, and because the behavior of such a trap would be outside the jurisdiction of the Standard, the Standard imposes no requirements about what may or may not happen when a variadic function tries to receive arguments which weren't passed to it.
In practice, rather than letting variadic functions know how many arguments they've received, most compilers simply have conventions which describe a relationship between the location of the non-variadic argument and the locations of subsequent variadic arguments. The generated code won't know whether a function has received e.g. two arguments of type int, but it will know that each such argument, if it exists, will be stored in a certain place. On such a compiler, using excess format specifiers will generally result in the generated code looking at the places where additional arguments would have been stored had they existed. In many cases, this location will have been used for some other purpose and then abandoned, and may hold the last value stored there for that purpose, but there is generally no reason to expect anything in particular about the contents of abandoned memory.
I am reading http://www.cs.utexas.edu/users/lavender/courses/cs345/lectures/CS345-Lecture-07.pdf to try to understand how the stack activation frame for variable-argument functions works.
Specifically, how can the called function know how many arguments are being passed?
The slide said:
The va_start procedure computes the fp+offset value following the argument past the last known argument (e.g., const char format). The rest of the arguments are then computed by calling va_arg, where the ‘ap’ argument to va_arg is some fp+offset value.
My question is: what is fp (the frame pointer)? How does va_start compute the 'fp+offset' values?
And how does va_arg get 'some fp+offset value'? And what is va_end supposed to do with the stack?
The function doesn't know how many arguments are passed. At least not in any way that matters, i.e. in C you cannot query for the number of arguments.
That's why all varargs functions must either:
Use a non-varying argument that contains information about the number and types of all variable arguments, like printf() does; or
Use a sentinel value on the end of the variable argument list. I'm not aware of a function in the standard library that does this, but see for instance GTK+'s gtk_list_store_set() function.
Both mechanisms are risky; if your printf() format string doesn't match the arguments, you're going to get undefined behavior. If there was a way for printf() to know the number of passed arguments, it would of course protect against this, but there isn't. So it can't.
The va_start() macro takes as an argument the last non-varying argument, so it can somehow (this is compiler internals, there's no single correct or standard answer, all we can do from this side of the interface is reason from the available data) use that to know where the first varying argument is located on the stack.
The va_arg() macro gets the type as an "argument", which makes it possible to use that to compute the offset, and probably increment some state in the va_list object to point at the next varying argument.
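As a sketch of the first approach, here is a hypothetical sum_ints() whose fixed count argument carries the information that the call itself never records:

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical example: the fixed 'count' argument is the only record of how
 * many variadic ints follow; va_arg() must also be told the type each time. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);               /* count is the last non-varying argument */
    for (int i = 0; i < count; ++i)
        total += va_arg(ap, int);
    va_end(ap);

    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 10, 20, 12));   /* prints 42 */
    return 0;
}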
GCC typically yields this warning when the proper header file is not included. This link --> www.network-theory.co.uk/docs/gccintro/gccintro_19.html says that because the function declaration is implicit (rather than explicitly declared via a header) the wrong argument types could actually be passed to the function, yielding incorrect results. I don't understand this. Does this mean the compiler generates code that pushes something, of the machine's word size, onto the stack for the callee to consume, and hopes for the best?
Detail is appreciated.
If the compiler doesn't have specific information about how an argument should be passed, such as when there's no prototype, or for arguments passed where the prototype has an ellipsis ('...'), the compiler follows certain rules for passing the arguments. These rules basically follow what occurred in pre-standard (K&R) C, before prototypes were used. Paraphrased from C99 6.5.2.2/6 "Function calls":
* the integer promotions are applied
* if the argument has float type it's promoted to double
After these default argument promotions are applied, the argument is simply copied to wherever the compiler normally copies arguments (generally, the stack). So a struct argument would be copied to the stack.
If the actual function implementation doesn't match how the compiler prepares the arguments, then you get undefined behavior (with exceptions: a signed/unsigned mismatch is fine if the value is representable in both types, and pointers to char and pointers to void can be mixed and matched).
Also, in C90, if the function is implicitly declared (which C99 doesn't permit, though it does permit functions without prototypes), the return type is assumed to be int. Once again, if the actual function returns something else, undefined behavior results.
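A small sketch of those default promotions at work: a char and a float passed through the ellipsis arrive as int and double, so those are the types the callee must ask va_arg() for (show() here is a hypothetical function, not a library one):

#include <stdarg.h>
#include <stdio.h>

/* The char argument is promoted to int and the float to double before they
 * reach the ellipsis, so va_arg() must name the promoted types. */
static void show(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    int    c = va_arg(ap, int);      /* not char  */
    double x = va_arg(ap, double);   /* not float */
    va_end(ap);

    printf("count=%d c=%c x=%f\n", count, c, x);
}

int main(void)
{
    show(2, 'A', 1.5f);   /* 'A' arrives as int, 1.5f arrives as double */
    return 0;
}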
In classic K&R C, that's pretty much what happened; there were default coercions (anything smaller than (int) was promoted to (int), for example), and for backwards compatibility any function without a prototype is still called that way, but by and large the only indication you got for passing the wrong type was a weird result or maybe a core dump. Which is where you get in trouble, as when the function has a prototype the exact (not coerced/promoted) value is pushed. So if you're passing a (char), if there's a prototype in scope then a single byte is pushed by the caller, otherwise 4 bytes (on most current platforms). If the caller and callee disagree about this, Bad Things happen.