I am new to programming. I was finding the square root of a number using the sqrt() function in C:

#include <stdio.h>

int main(void)
{
    int n;
    scanf("%d", &n);
    printf("%d\n", sqrt(n));
    return 0;
}

When I enter a value of n = 5, I get some large negative number. Can anyone explain, please?
You've produced undefined behavior by passing the wrong type to printf: the %d conversion requires a matching argument of type int, but your argument has type double. You need %f (or %e, %g, or %a) to print it. There may also be other problems, e.g. if you omitted #include <math.h>.
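A minimal corrected version of that program (a sketch, not the only possible fix) might look like this:

#include <math.h>   /* declares sqrt() as double sqrt(double) */
#include <stdio.h>

int main(void)
{
    int n;
    if (scanf("%d", &n) == 1)
        printf("%f\n", sqrt(n));   /* %f matches the double returned by sqrt */
    return 0;
}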
As others have pointed out, the problem here is that the format specifier is wrong. You need to #include <math.h> to get the proper return type of sqrt(), then use a format specifier like %f. Also, turn up your compiler warnings until they tell you something is wrong here; -Wall -Wextra -pedantic -Wno-system-headers is a good choice.
I’m adding an answer, though, to provide historical background on why float variables get promoted to double in printf() argument lists, but not scanf(), since this confused people in the comments.
In the instruction set of the DEC PDP-11 computers, on which C was originally developed, the float type existed only to save space, and a program needed to convert a float to double to do any calculations on it. In early versions of C, before ANSI function prototypes, all float arguments to a function were promoted to double automatically before being passed (and char to int as well). Originally this was more efficient at the machine level, and it also had the advantage of avoiding round-off and overflow errors in math on the shorter types. The convention also simplified writing functions that take a varying number of arguments of varying types, such as printf(): the caller could just pass anything in, the compiler would let it, and it was the called function's job to figure out at runtime what the argument list was supposed to be.
When C added function prototypes, these old rules were kept for backward compatibility only with legacy function declarations (extern double sqrt() rather than extern double sqrt(double) or the C11 generic equivalent). Since basically nobody writes functions that way any more, this is a historical curiosity, with one exception. A varargs function like int printf(const char*, ...); cannot be written in C with type checking of the variable arguments. (There is a C++ way to do this using variadic templates.) The standards committee also did not want to break all existing code that printed a float. So those arguments are still promoted according to the old rules.
In scanf(), none of this applies because the storage arguments are passed by reference, and scanf() needs to be sure it’s writing the data in the same type as the variable that holds it. Argument-promotion never comes into play, because only pointers are ever passed.
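A short illustration of that asymmetry (the variable names are just for the example):

float f;
double d;

scanf("%f", &f);    /* scanf needs %f for a float* and %lf for a double* */
scanf("%lf", &d);

printf("%f %f\n", f, d);   /* in printf, f is promoted to double, so plain %f
                              handles both float and double arguments */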
I met the same problem, and I wanted the result as an int, so I used an explicit cast:

printf("%d\n", (int)sqrt(n));
This happens because the return type and parameter type of sqrt() are not specified. You can solve this either by including the header file:

#include <math.h>

or by explicitly declaring the return and parameter types yourself:

double sqrt(double);

Also, as mentioned above, use the correct format specifier (e.g. %f).
What is the point of a format specifier in C if we have already set the type of the variable before printf?
For example:
#include <stdio.h>

int main(void)
{
    int a = 7;
    printf("%d", a);
}
Like, it's already stated what a is: it's an integer (int). So what is the point of adding %d to specify that it's an integer?
The answer to this question really only makes sense in the context of C's history.
C is, by now, a pretty old language. Though undoubtedly a "high level language", it is famously low-level as high-level languages go. And its earliest compiler was deliberately and self-consciously small and simple.
In its first incarnation, C did not enforce type safety during function calls. For example, if you called sqrt(144), you got the wrong answer, because sqrt expects an argument of type double, but 144 is an int. It was the programmer's responsibility to call a function with arguments of the correct types: the compiler did not know (did not even attempt to keep track of) the arguments expected by each function, so it did not and could not perform automatic conversions. (A separate program, lint, could check that functions were called with the correct arguments.)
C++ corrected this deficiency by introducing the function prototype, and prototypes were inherited by C in the first ANSI C standard in 1989. However, a function prototype only works for a function that expects a single, fixed argument list, meaning that it can't help for functions that accept a variable number of arguments, the premier example being printf.
The other thing to remember is that, in C, printf is a more or less ordinary function. ("Ordinary" other than accepting a variable number of arguments, that is.) So the compiler has no direct mechanism to notice the types of the arguments and make that list of types available to printf. printf has no way of knowing, at run time, what types were passed during any given call; it can only rely (it must rely) on the clues provided in the format string. (This is by contrast to languages, many of them, where the print statement is an explicit part of the language parsed by the compiler, meaning that the compiler can do whatever it needs to do in order to treat each argument properly according to its known type.)
So, by the rules of the language (which are constrained by backwards compatibility and the history of the language), the compiler can't do anything special with the arguments in a printf call, other than performing what is called the default argument promotions. So the compiler can't fix things (can't perform the "correct" implicit conversion) if you write something like
int a = 7;
printf("%f", a);
This is, admittedly, an uncomfortable situation. These days, programmers are used to the protections and the implicit promotions provided for by function prototypes. If, these days, you can call
int x = sqrt(144);
and have the right thing happen, why can't you similarly call
printf("%f\n", 144);
Well, you can't, although a good, modern compiler will try to help you out anyway. Although the compiler doesn't have to inspect the format string (because that's printf's job to do, at run time), and the compiler isn't allowed to insert any implicit conversions (other than the default promotions, which don't help here), a compiler can duplicate printf's logic, inspect the format string, and issue strong warnings if the programmer makes a mistake. For example, given
printf("%f\n", 144);
gcc prints "warning: format ‘%f’ expects argument of type ‘double’, but argument 2 has type ‘int’", and clang prints "warning: format specifies type 'double' but the argument has type 'int'".
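To make such a call well defined you have to supply a matching argument yourself, for example:

printf("%f\n", (double)144);   /* convert explicitly so the argument matches %f */
printf("%f\n", 144.0);         /* or pass a double literal */
printf("%d\n", 144);           /* or use the specifier that matches the type */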
In my opinion, this is a fine compromise, balancing C's legacy behavior with modern expectations.
what is the point of adding %d to specify that it's an integer?
printf() is a function that receives a variable number of arguments of various types after the format argument. It does not directly know the number or the types of the arguments passed to it.
The caller knows the argument count and the types it gives to printf().
To pass that information along, the caller encodes the argument count and types in the format argument. printf() decodes the format to learn the argument count and types. It is very important that the format and the arguments that follow it are consistent.
printf() accepts a variable number of arguments. To process those variable arguments it (va_start()) needs to know what the last fixed argument is. It (va_arg()) also needs to know the type of each argument so it can figure out how much data to read.
The format string is also a compact template (or DSL) expressing how text and variables should be formatted, including field width, alignment, precision, and encoding.
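As a rough sketch of why the caller must describe the arguments, here is a toy variadic function written with <stdarg.h>; the name sum_ints and its count-first convention are made up for illustration:

#include <stdarg.h>
#include <stdio.h>

/* The caller must say how many ints follow, because va_arg() cannot
   discover the count or the types on its own. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);            /* start reading after the last fixed argument */
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);   /* the count and the type named here must match
                                       what the caller actually passed */
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 1, 2, 3));   /* prints 6 */
    return 0;
}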
I was wondering: how can a function issue a compile-time warning?
This came to my mind because when we supply the wrong format specifier in the first argument of printf (or scanf) for the variable matched to that specifier, and compile with gcc with the -Wall option on, the compiler issues a warning.
Now, printf and scanf are ordinary variadic functions as far as I understand, and I don't know of any way to check the value of a string at compile time, let alone issue a warning if something doesn't match.
Can someone explain me how I get compiler warning then?
Warnings are implementation (i.e. compiler & C standard library) specific. You could have a compiler giving very few warnings (look into tinycc...), or even none...
I'm focusing on a recent GCC (e.g. 4.9 or 10...) on Linux.
You are getting such warnings because printf is declared with the appropriate __attribute__ (see GCC function attributes).
(With GCC you can likewise declare your own printf-like functions with the format attribute...)
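For instance, a printf-style wrapper can be declared with that attribute so GCC and Clang check its callers' format strings the same way they check printf's (the function name log_msg here is just an example):

#include <stdarg.h>
#include <stdio.h>

/* Argument 1 is a printf-style format string; checking of the variable
   arguments starts at argument 2. */
void log_msg(const char *fmt, ...) __attribute__((format(printf, 1, 2)));

void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);   /* forward to the real printf machinery */
    va_end(ap);
}

/* log_msg("%d\n", "oops"); would now trigger the same -Wformat warning. */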
BTW, a standard conforming compiler is free to implement very specially the <stdio.h> header. So it could process #include <stdio.h> without reading any header file but by changing its internal state.
And you could even add your own function attributes, e.g. by customizing your GCC with a GCC plugin.
How can printf issue a compiler warning?
Some compilers analyze the format and the types of the other arguments of printf() and scanf() at compile time.

printf("%ld", 123);   // type mismatch: `long` vs. `int`

int x;
scanf("%ld", &x);     // type mismatch: `long *` vs. `int *`
Yet if the format is computed, then that check does not happen as it is a run-time issue.
const char *format = foo();
printf(format, 123); // mis-match? unknowable.
You're absolutely right that it's unusual for a compiler to warn about specific functions.
Warnings about printf (and scanf, and related) format specifiers are quite unusual -- but then, these functions are quite unusual in the first place.
As other answers have explained, it's at least possible for a compiler to "know" about certain functions and to perform special, extra, compile-time checks like this -- and given that printf and scanf and friends are simultaneously very unusual and very popular, it's quite appropriate for compilers to be doing this extra checking, unusual though it is.
Once upon a time (I'm talking about the pre-ANSI, K&R days here), C programmers knew they had to be careful about calling functions with the correct number and type of arguments. (In those days, the only way to automatically check that was to use lint, which some programmers did but many programmers didn't.) And if you were used to being careful, it was easy to be careful about printf and friends, also.
Today, though, it's a different story. ANSI C function prototypes have been in use for a generation. Most programmers today implicitly expect a compiler to automatically convert the types of function arguments, and to complain about incompatible mismatches. (As an example of the way things have changed: in the old days, calling sqrt(144) was an error that quietly gave mysterious results, but today it's fine.)
So today, I have a great deal of sympathy for programmers who are learning C, and are baffled by printf. If you're completely used to the protections afforded to you by function prototypes, it's a pretty great mystery why
int i = 3;
float f = 4.5;
printf("i as a float is %f, f as an int is %d\n", i, f);
doesn't work. Unlike the old days, I suspect, it is very hard to remember that, when you call printf (but pretty much only when you call printf), it's your job to get all the types right, because the compiler won't insert any implicit conversions.
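Getting it right means matching each specifier to its argument, or converting explicitly; a corrected version of that call would be something like:

printf("i as a float is %f, f as an int is %d\n", (double)i, (int)f);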
The bottom line is that, today, not only is it possible for a compiler to warn about mismatches in calls to printf and the like, I believe it's pretty much a moral imperative. When we introduced function prototypes, we promised programmers type safety for function arguments, so it's really not fair to quietly withdraw that promise when it comes to printf.
[P.S. Yes, of course I know why function prototypes can't promise complete type safety for varargs functions like printf. But that's got nothing to do with my argument here. Also, yeah, I know, life isn't fair, so call me an old softie with my highfalutin talk of "moral imperatives". :-) ]
I took the age variable out of the printf() call just to see what happens, then compiled it with make. It only throws warnings about more '%' conversions than data arguments and an unused variable age, but no compile error. The executable runs, but every time I run it, it prints a different random integer. I'm wondering what causes this behavior?
#include <stdio.h>

int main(int argc, char *arg[])
{
    int age = 10;
    int height = 72;

    printf("I'm %d years old\n");
    printf("I'm %d inches tall\n", height);

    return 0;
}
As per the printf() specification, if there are insufficient arguments for the format, the call invokes undefined behavior.
So, your code
printf("I'm %d years old\n");
which is missing the required argument for %d, invokes UB and is not guaranteed to produce any valid result.
Cross-reference: C11 standard, chapter §7.21.6.1:

[...] If there are insufficient arguments for the format, the behavior is undefined. [...]
According to the C Standard (7.21.6.1 The fprintf function; the same applies to printf):

... If there are insufficient arguments for the format, the behavior is undefined. If the format is exhausted while arguments remain, the excess arguments are evaluated (as always) but are otherwise ignored.
On implementations that use the cdecl calling convention, printf reads its arguments from the stack. If you told it (via the format string) that you passed one argument, it will pull a value from the runtime stack, and if you never put your number there, that location will probably contain leftover garbage data. So the value that gets printed is arbitrary.
With only one exception I know of, the C Standard imposes no requirements with regard to any action which in some plausible implementations might be usefully trapped. It is not hard to imagine a C compiler passing a variadic function like printf an indication of what arguments it has passed, nor would it be hard to imagine an implementer thinking that it could be useful to have the compiler trigger a trap if code tries to retrieve a variadic parameter of some type when the corresponding argument has some other type or doesn't exist at all. Because it could be useful to have compilers trap in such cases, and because the behavior of such a trap would be outside the jurisdiction of the Standard, the Standard imposes no requirements about what may or may not happen when a variadic function tries to retrieve arguments that weren't passed to it.
In practice, rather than letting variadic functions know how many arguments they've received, most compilers simply have conventions which describe a relationship between the location of the non-variadic argument and the locations of subsequent variadic arguments. The generated code won't know whether a function has received e.g. two arguments of type int, but it will know that each such argument, if it exists, will be stored in a certain place. On such a compiler, using excess format specifiers will generally result in the generated code looking at the places where additional arguments would have been stored had they existed. In many cases, this location will have been used for some other purpose and then abandoned, and may hold the last value stored there for that purpose, but there is generally no reason to expect anything in particular about the contents of abandoned memory.
I'm dealing with some pre-ANSI C syntax. I have the following function call in a conditional:
BPNN *net;
// Some more code
double val;
// Some more code, and then,
if (evaluate_performance(net, &val, 0)) {
But then the function evaluate_performance was defined as follows (below the function which has the above-mentioned conditional):
evaluate_performance(net, err)
BPNN *net;
double *err;
{
How come evaluate_performance was defined with two parameters but called with three arguments? What does the '0' mean?
And, by the way, I'm pretty sure it isn't calling some other evaluate_performance defined elsewhere; I've grepped through all the files involved and I'm pretty sure we're talking about the same evaluate_performance here.
Thanks!
If you call a function that doesn't have a declared prototype (as is the case here), then the compiler assumes that it takes an arbitrary number and types of arguments and returns an int. Furthermore, char and short arguments are promoted to ints, and floats are promoted to doubles (these are called the default argument promotions).
This is considered bad practice in new C code, for obvious reasons: if the function doesn't return int, badness could ensue; you prevent the compiler from checking that you're passing the correct number and types of arguments; and arguments might get promoted incorrectly.
C99 removed implicit function declarations from the language, but in practice many compilers still allow them even when operating in C99 mode, for legacy compatibility.
As for the extra arguments, passing them is technically undefined behavior under the C89 standard, but in practice they will typically just be ignored at run time.
The code is incorrect, but in a way that a compiler is not required to diagnose. (A C99 compiler would complain about it.)
Old-style function definitions don't specify the number of arguments a function expects. A call to a function without a visible prototype is assumed to return int and to have the number and type(s) of arguments implied by the calls (with narrow integer types being promoted to int or unsigned int, and float being promoted to double). (C99 removed this; your code is invalid under the C99 standard.)
This applies even if the definition precedes the call (an old-style definition doesn't provide a prototype).
If such a function is called incorrectly, the behavior is undefined. In other words, it's entirely the programmer's responsibility to get the arguments right; the compiler won't diagnose errors.
This obviously isn't an ideal situation; it can lead to lots of undetected errors.
Which is exactly why ANSI added prototypes to the language.
Why are you still dealing with old-style function definitions? Can you update the code to use prototypes?
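For reference, a prototyped version of that definition would look roughly like this (the int return type is what the old-style definition implied; the body is omitted):

int evaluate_performance(BPNN *net, double *err)
{
    /* ... */
}

/* With this prototype visible, the three-argument call
   evaluate_performance(net, &val, 0) would be rejected at compile time. */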
Even standard C compilers are somewhat permissive when it comes to this. Try running the following:
#include <stdio.h>

int foo()
{
    printf("here");
    return 0;
}

int main()
{
    foo(3, 4);
    return 0;
}
It will, to some people's surprise, output "here". The extra arguments are simply ignored. Of course, this depends on the compiler.
Overloading doesn't exist in C, so having two different declarations of the same function would not work in the same translation unit.
It must be quite an old compiler not to error out on this one, or it simply had not seen a declaration of the function yet!
Some compilers will not warn or error when calling an undeclared function. That's probably what you're running into. I would suggest you look at the compiler's command-line flags to see whether there is one that enables these warnings, because you may actually find quite a few similar mistakes (too many arguments is likely to work just fine, but too few will leave the function using "undefined" values...).
Note that it is possible to do this sort of thing (pass extra arguments) when the function is declared with an ellipsis, as printf() is:
int printf(const char *format, ...);
I would imagine that the function had three parameters at some point and the last one was removed because it was unused, and some parts of the code were not corrected as they ought to have been. I would remove that third argument, just in case the stack layout ends up wrong and the correct parameters fail to reach the function.
What will be the output of the following (and why)?
printf("%d",2.37);
Apparently, printf is a variadic function and we can never know the type of a variable argument list, so we always have to specify the format specifiers manually.
So I expected that 2.37, which would be stored as a double according to the IEEE standard, would be fetched and printed in integer format.
But the output is 0.
What is the reason?
It is undefined behavior. You're passing a double argument to a function that expects to retrieve an int from its varargs macros, and there's no telling at all what that is going to lead to. In theory, it may even crash (with a calling convention that specifies that variadic arguments of different types are passed in different ways or on different stacks).
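If the goal is to print 2.37, the specifier and the argument have to agree, for example:

printf("%f\n", 2.37);        /* 2.37 is a double, so %f matches */
printf("%d\n", (int)2.37);   /* or convert explicitly; prints 2 */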