Here is my snippet of code:
#include <stdio.h>

float square_root(x)
float x;
{
    .......
}

int main(void)
{
    printf("Square_root 2 = %f\n", square_root(4));
}
When I pass the number 4.0 to the square_root() function, the x parameter inside the function is 4.0000000, so it's OK.
But when I pass just 4 (as in the example), the x variable inside the function becomes 1.976262583365e-323#DEN.
Why does that happen?
You're using the old style of function definition, where the parameter types are declared separately, after the parameter list (see C function syntax, "parameter types declared after parameter list").
Because you are passing an int, it is not converted to float. Your function, however, interprets the argument as a float, which gives undesirable results.
The modern approach is to include the parameter type in the function declaration, which allows your int argument to be automatically converted to float.
float square_root(float x)
{
    .......
}
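For completeness, here is a minimal runnable sketch of the fixed version; the body just wraps sqrtf from <math.h> as a stand-in, since the question elides the real implementation:
#include <math.h>
#include <stdio.h>

float square_root(float x)   /* prototyped definition */
{
    return sqrtf(x);         /* placeholder body; the question's implementation is elided */
}

int main(void)
{
    /* 4 is an int, but the prototype converts it to float before the call */
    printf("square_root(4) = %f\n", square_root(4)); /* prints 2.000000 */
    return 0;
}
(Depending on your toolchain you may need to link with -lm.)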
With non-prototype functions, only the default argument promotions are applied; there is no conversion to the parameter type, as there is with prototypes. So basically your int with value 4 is interpreted as a float instead of being converted to a float.
Use a prototyped definition instead:
float square_root(float x)
{
    .......
}
to have the argument at the function call converted to a float.
Also note that old-style function definitions are an obsolescent C feature and they should be avoided.
Related
Consider the following piece of code:
void function(char arg)
{
    // function's content
}

int main(void)
{
    long double x = 1.23456;
    function(x);
}
I'm giving the function an argument it is not supposed to get. Why does it not cause an error?
It's converted implicitly.
In the context of an assignment, argument passing, a return statement, and several others, an expression of any arithmetic type is implicitly converted to the type of the target if the target type is also arithmetic. In this case, the long double argument is implicitly converted to char. (That particular conversion rarely makes sense, but it's valid as far as the language is concerned.)
Note that this implicit conversion is not done for variadic arguments (for example, arguments to printf after the format string), because the compiler doesn't know what the target type is. printf("%d\n", 1.5) doesn't convert 1.5 from double to int; rather, it has undefined behavior.
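A small sketch of both cases; takes_char here is a made-up helper, not something from the question:
#include <stdio.h>

void takes_char(char c)
{
    printf("c = %d\n", c);     /* c already holds the converted value */
}

int main(void)
{
    long double x = 65.9L;
    takes_char(x);             /* implicitly converted to char: prints c = 65 */
    printf("%d\n", (int) 1.5); /* explicit cast; %d with a plain 1.5 would be undefined */
    return 0;
}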
There are also some rules for evaluating expressions with operators of different types. I won't go into all the details here, but for example given:
int n = 42;
double x = 123.4;
if you write n + x, the value of n is implicitly converted from int to double before the addition is performed.
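A minimal sketch of that conversion:
#include <stdio.h>

int main(void)
{
    int n = 42;
    double x = 123.4;
    double sum = n + x;  /* n is converted to double before the addition */
    printf("%f\n", sum); /* prints 165.400000 */
    return 0;
}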
In your example, the long double value is implicitly converted to a char.
The following code works:
#include <math.h>
#include <stdio.h>
// get_float() is presumably declared elsewhere (e.g., in CS50's cs50.h)

int main(void)
{
    float f = get_float();
    int i = round(f*100);
    printf("%i\n", i);
}
Yet an error is generated if I code it this way:
printf("%i\n", round(1.21*100));
The compiler says that round(1.21*100) is a floating-point value, not an int. So why, then, is
int i = round(f*100);
fine?
When you do
int i = round(f*100);
you convert the double result of round. The converted result is stored in the int variable i, which can then be used with the format "%i", since that format expects an int argument.
When you pass the double result of round directly as an argument for a format that expects an int, you have mismatching format and argument types. That leads to undefined behavior.
No conversion is made in the call to printf, and no conversion can be made, since the code inside the printf function doesn't know the actual type of the argument. All it knows is the format "%i". All type information about the variable arguments is lost.
This is because of how implicit conversion behaves: inside printf, no implicit conversion takes place. When you use %i, printf simply expects an int; it cannot convert a double to an int and then print it.
In an assignment, the double is converted to an int first and then assigned to the left operand of the = operator.
This duplicates the other answers a bit, but it may help with understanding:
round() has the following prototype:
double round(double x);
so it returns double.
There is an implicit conversion from double to int in C, so writing
int i = round(f*100);
will convert the result of round() to int.
If you have a function that expects an int, e.g.
void printMyNumber(int number)
{
    printf("%i\n", number);
}
you can call it like this:
printMyNumber(round(f*100));
and the implicit conversion works as expected, because the compiler sees both types (the return type from round() and the expected argument type of printMyNumber()).
The reason this doesn't work with printf() is that the prototype of printf() looks like this:
int printf(const char *format, ...);
so, except for the first argument, the types of the arguments are unknown. Therefore, whatever you pass is passed without any conversion (except for default argument promotions). Of course, you could use a cast to achieve an explicit conversion instead:
printf("%i\n", (int) round(f*100)); // <- this is fine
At §6.5.2.2.6, the C11 standard says:
If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions. If the number of arguments does not equal the number of parameters, the behavior is undefined. If the function is defined with a type that includes a prototype, and either the prototype ends with an ellipsis (, ...) or the types of the arguments after promotion are not compatible with the types of the parameters, the behavior is undefined. If the function is defined with a type that does not include a prototype, and the types of the arguments after promotion are not compatible with those of the parameters after promotion, the behavior is undefined, except for the following cases: ...
What does that mean? I really can't understand it (especially the first part). From what I can tell, however, it means that defining a function like this:
void func(int a, int b, ...)
{
}
and then calling it is undefined behavior, which I think is silly.
The situation is as follows: You can declare a function without a parameter list and call this function:
int main(void)
{
    extern void f(); // no parameter list!
    char c = 'x';
    f(c, 1UL, 3.5f);
}
In this situation, the arguments are default-promoted: The first argument is promoted to either int or unsigned int (depending on the platform), the second remains unsigned long, and the third is promoted to double.
When the program is linked, some translation unit needs to contain the definition of the function. The definition always contains a parameter list, even if it's empty (but an empty parameter list in the definition means that the function takes no parameters, unlike in the declaration-that-is-not-a-definition above, where it just means that no information is provided about the parameters):
void f(int, unsigned long, double)
{
    // ...
}
The standardese you quoted now says that the behaviour is undefined if the parameter types in this definition are not compatible with the promoted types of the call, or if the parameter list ends with an ellipsis.
As a corollary, it follows that if you want to use a function with variable arguments (using the facilities of <stdarg.h> to access the arguments), you must declare the function with a prototype:
extern void f(int, ...); // prototype (containing ellipsis)
f(c, 1UL, 3.5f);
Now c is converted to int because the first parameter is typed, and the second and third arguments are default-promoted just as before, because they are passed as part of the ellipsis. The definition of f must now use the same declaration. If you will, passing arguments in a way that the <stdarg.h> facilities can access them may require advance knowledge from the compiler, so you have to provide the parameter list before making the call.
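To make that concrete, here is a sketch of how such a prototyped variadic f might retrieve its arguments with the <stdarg.h> facilities; the body is my own illustration, not part of the answer:
#include <stdarg.h>
#include <stdio.h>

void f(int first, ...)
{
    va_list ap;
    va_start(ap, first);
    unsigned long ul = va_arg(ap, unsigned long); /* 1UL is passed unchanged */
    double d = va_arg(ap, double);                /* 3.5f was promoted to double */
    va_end(ap);
    printf("%d %lu %f\n", first, ul, d);
}

int main(void)
{
    char c = 'x';
    f(c, 1UL, 3.5f); /* c is converted to int; the trailing arguments are default-promoted */
    return 0;
}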
The wording is a bit confusing. The whole paragraph is about calls made while no prototype for the function has been declared; the part you highlighted covers the case where no prototype is visible at the call, but the function is later defined with one. Here is an example:
int main(int argc, char **argv)
{
    f(3.0f); /* undefined behavior */
    g(3.0);  /* undefined behavior */
}

int f(float v)
{
    return v;
}

int g(double v, ...)
{
    return v;
}
In this example, when f is called, no prototype has been declared, so 3.0f is promoted to a double. However, the function is later defined with a prototype which takes a float instead of a double, so the behavior is undefined.
Likewise, for g the behavior is undefined because an ellipsis is used in the prototype of the definition.
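For contrast, a sketch of the same functions with prototypes declared before the calls, which makes both calls well defined (my own variation on the example above):
int f(float v);       /* prototypes visible at the call site */
int g(double v, ...);

int main(int argc, char **argv)
{
    f(3.0f); /* matches the float parameter */
    g(3.0);  /* fine: the prototype with the ellipsis is visible here */
}

int f(float v)
{
    return v;
}

int g(double v, ...)
{
    return v;
}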
I am a beginner in C, coming from JavaScript, and I cannot get used to these types.
OK, math.h's sqrt function works with doubles, and as far as I understand it, in C you cannot pass the wrong types as parameters.
But when I write:
int b = sqrt(1234); //it works
so does
float b = sqrt(1234); // it works
int b = sqrt(1234.22); // it works
Why are all these working? What does the process look like here?
My guess is: The parameters of sqrt are automatically converted to double regardless of what I pass,
and the result double is converted to int if the variable I am assigning to is type int.
Then, two questions:
1) Why do I get an error with other functions if I pass the wrong type, but not with sqrt?
2) If we can just convert float to int like this:
float b = 123.44;
int a = b;
why do we need this?
float b = 123.44;
int a = (int) b;
Why do all these [two initializations] work?
The first initialization, float b = sqrt(1234), works because the language "upcasts" the integer literal 1234 to double before calling sqrt, and then converts the result to float.
The second initialization, int b = sqrt(1234.22), works for the same reason, except that this time the compiler does not have to upcast the 1234.22 literal before the call, because it is already of type double.
This is discussed in the C99 standard:
6.3.1.4.1: When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded.
6.7.8.11: The initializer for a scalar shall be a single expression, optionally enclosed in braces. The initial value of the object is that of the expression (after conversion) (emphasis added).
-
why do we need this [cast int a = (int) b;]?
You may insert a cast for readability, but the standard does not require it.
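A minimal sketch of both forms, using the values from the question:
#include <stdio.h>

int main(void)
{
    float b = 123.44f;
    int a = b;               /* implicit conversion: the fractional part is discarded */
    int c = (int) b;         /* explicit cast: same result, just more visible intent */
    printf("%d %d\n", a, c); /* prints 123 123 */
    return 0;
}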
For the compiler to be able to pass the correct arguments to a function, it needs to be told what types the function expects. This means that you have to provide a full declaration of the function, and it can be done in two ways.
In the case of sqrt() you would typically #include <math.h>. The other way is to declare the function explicitly in your source code: double sqrt (double);.
Once the compiler knows what types the function expects and returns, it will, if possible, convert the arguments to the correct types. int and double can be converted implicitly.
If you fail to declare the types of the parameters for the function, the default argument promotions will be applied to the arguments, which means small integer types will be converted to int, and float will be converted to double. Your int argument will be blindly passed as an int using some implementation-specific method, while the sqrt() function will retrieve its parameter as a double using some other implementation-specific method. This will, obviously, not work properly if the two methods differ, which is why passing the wrong types to a function without a full declaration results in undefined behaviour.
Since C99, you are not allowed to call a function without a prior declaration, and the compiler is required to emit a diagnostic message. However, for historical reasons, this declaration is not required to provide the types of the parameters.
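A minimal sketch of the second way, declaring sqrt() yourself instead of including <math.h> (you may still need to link with -lm):
#include <stdio.h>

double sqrt(double); /* explicit declaration; #include <math.h> would provide the same prototype */

int main(void)
{
    int b = sqrt(1234); /* 1234 is converted to double, and the double result back to int */
    printf("%d\n", b);  /* prints 35 */
    return 0;
}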
I have a variadic function to which I pass a float argument. Why doesn't this work?
va_arg(arg, float)
Arguments that correspond to ... are promoted before being passed to your variadic function: char and short are promoted to int, float is promoted to double, and so on.
6.5.2.2.7: The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
The reason for this is that early versions of C did not have function prototypes; parameter types were declared at the function site but were not known at the call site. But different types are represented differently, and the representation of the passed argument must match the called function's expectation. So that char and short values could be passed to functions with int parameters, or float values could be passed to functions with double parameters, the compiler "promoted" the smaller types to be of the larger type. This behavior is still seen when the type of the parameter is not known at the call site -- namely, for variadic functions or functions declared without a prototype (e.g., int foo();).
As #dasblinkenlight has mentioned, float is promoted to double.
It works fine for me:
#include <stdio.h>
#include <stdarg.h>

void foo(int n, ...)
{
    va_list vl;
    va_start(vl, n);
    int c;
    double val;
    for (c = 0; c < n; c++) {
        /* the float arguments were promoted to double, so read them as double */
        val = va_arg(vl, double);
        printf("%f\n", val);
    }
    va_end(vl);
}

int main(void)
{
    foo(2, 3.3f, 4.4f);
    return 0;
}
Output:
3.300000
4.400000