The following code works:
#include <cs50.h>   // get_float appears to come from the CS50 library
#include <math.h>   // round
#include <stdio.h>

int main(void)
{
    float f = get_float();
    int i = round(f*100);
    printf("%i\n", i);
}
Yet an error is generated if I code it this way:
printf("%i\n", round(1.21*100));
The error output says that round(1.21*100) is a float. Why, then, is
int i = round(f*100);
fine?
When you do
int i = round(f*100);
you convert the double result of the round function. The converted result is stored in the int variable i, which can then be used with the format "%i", since that format expects an int argument.
When you pass the double result of round directly as an argument to a format that expects an int, you have mismatched format and argument types. That leads to undefined behavior.
No conversion is made in the call to printf, and no conversion can be made, since the code inside the printf function doesn't know the actual type of the argument. All it knows is the format "%i". Any other type information is lost for variable-argument functions.
This is because of the behavior of implicit type conversion. In printf, automatic conversion does not happen: when you write %i, printf simply expects an int; it cannot convert a double to an int and then print it.
In an assignment, the double is converted to an int first and then assigned to the left operand of the = operator. I hope this helps.
This is a bit of duplication, but maybe helps for a better understanding:
round() has the following prototype:
double round(double x);
so it returns double.
There is an implicit conversion from double to int in C, so writing
int i = round(f*100);
will convert the result of round() to int.
If you have a function that expects an int, e.g.
void printMyNumber(int number)
{
    printf("%i\n", number);
}
you can call it like this:
printMyNumber(round(f*100));
and the implicit conversion works as expected, because the compiler sees both types (the return type from round() and the expected argument type of printMyNumber()).
The reason this doesn't work with printf() is that the prototype of printf() looks like this:
int printf(const char *format, ...);
so, except for the first argument, the types of the arguments are unknown. Therefore, whatever you pass is passed without any conversion (except for default argument promotions). Of course, you could use a cast to achieve an explicit conversion instead:
printf("%i\n", (int) round(f*100)); // <- this is fine
Related
Consider the following piece of code:
void function (char arg)
{
// function's content
}
int main(void)
{
long double x = 1.23456;
function(x);
}
I'm giving the function an argument it is not supposed to get. Why does it not cause an error?
It's converted implicitly.
In the context of an assignment, argument passing, a return statement, and several others, an expression of any arithmetic type is implicitly converted to the type of the target if the target type is also arithmetic. In this case, the long double argument is implicitly converted to char. (That particular conversion rarely makes sense, but it's valid as far as the language is concerned.)
Note that this implicit conversion is not done for variadic arguments (for example, arguments to printf after the format string), because the compiler doesn't know what the target type is. printf("%d\n", 1.5) doesn't convert 1.5 from double to int; rather, it has undefined behavior.
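For instance, here is a minimal sketch contrasting the two cases (the function name take_char is illustrative, not from the original code):

#include <stdio.h>

void take_char(char c)   /* prototyped: the argument is converted as if by assignment */
{
    printf("%d\n", c);
}

int main(void)
{
    long double x = 1.23456;
    take_char(x);              /* x is implicitly converted to char: prints 1 */
    /* printf("%d\n", 1.5); */ /* no such conversion here: undefined behavior */
    return 0;
}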
There are also some rules for evaluating expressions with operators of different types. I won't go into all the details here, but for example given:
int n = 42;
double x = 123.4;
if you write n + x the value of n is promoted (implicitly converted) from int to double before the addition is performed.
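As a complete example with those values:

#include <stdio.h>

int main(void)
{
    int n = 42;
    double x = 123.4;
    /* n is converted to double before the addition is performed */
    printf("%f\n", n + x);  /* prints 165.400000 */
    return 0;
}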
In your example, the long double is implicitly converted to a char.
I'm wondering why the compiler lets this pass and gives the right output, even though, according to its prototype, sqrt() should only get a double value as its argument:
In C99 the declaration of the prototype is:
double sqrt (double x);
#include <stdio.h>
#include <math.h>

int main (void)
{
    int i = 9;
    printf("\t Number \t\t Square Root of Number\n\n");
    printf("\t %d \t\t\t %f \n", i, sqrt(i));
}
Output:
Number Square Root of Number
9 3.000000
Why does the compiler not at least throw a warning, and why is the given output right, if I'm giving the sqrt() function an int as its argument?
Is this crossing into Undefined Behavior?
I'm using gcc.
This question was already asked twice for C++, but not for C, so I'm asking it for C.
I provide the links to the C++ questions anyway:
Why does sqrt() work fine on an int variable if it is not defined for an int?
Why is sqrt() working with int argument?
This is not undefined behavior.
The function is defined to accept an argument of type double. Because the type of the parameter is known, you can pass an int: it is implicitly converted to a double. It's the same as if you did:
int i = 4;
double d = i;
The rules for conversion of function arguments are spelled out in section 6.5.2.2p7 of the C standard regarding the function call operator ():
If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the corresponding parameters, taking the type of each parameter to be the unqualified version of its declared type. The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
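A small sketch of what the quoted rule means in practice (the function sum_scaled is hypothetical, made up for illustration):

#include <stdio.h>
#include <stdarg.h>

/* 'scale' and 'count' are declared parameters, so arguments for them are
   converted as if by assignment; arguments after the ellipsis only get
   the default argument promotions. */
double sum_scaled(double scale, int count, ...)
{
    va_list ap;
    double total = 0.0;
    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, double);  /* trailing floats arrive as double */
    va_end(ap);
    return total * scale;
}

int main(void)
{
    /* 2 is converted to 2.0 for 'scale' (a prototyped parameter);
       the trailing 1.5f is merely promoted to double. */
    printf("%f\n", sum_scaled(2, 2, 1.5f, 2.5));  /* prints 8.000000 */
    return 0;
}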
In contrast, if you passed an int to printf when the format string expects a double, i.e.:
printf("%f\n", 4);
Then you have undefined behavior. This is because the expected types of the variadic arguments are not part of the prototype, so the implicit conversion can't happen.
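The fix, as before, is to perform the conversion yourself (a snippet, assuming the surrounding program):

printf("%f\n", (double)4);  /* explicit conversion: prints 4.000000 */
printf("%f\n", 4.0);        /* or simply pass a double constant */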
The book says the following on page 45:
Since an argument of a function call is an expression, type conversions also take place when arguments are passed to functions. In the absence of a function prototype, char and short become int, and float becomes double. This is why we have declared function arguments to be int and double even when the function is called with char and float.
I don't understand what the last sentence there is saying. Can someone lead me in the right direction?
We can see that happen here. According to cplusplus.com, this is the declaration of printf():
int printf(const char * format, ...);
The ... means this function can take an unknown number of arguments of unspecified types, and because they are unspecified, the default promotion of numeric types to int and double happens to all printf() arguments except the first, which was specified.
Example:
char x = 10;
short y = 100;
int z = 1000;
printf("Values of char is %d, short is %d, and int is %d", x, y, z);
The char and short values are automatically promoted to int when passed to printf(). We can see that because %d works for all of them.
Note that types wider than int and double are not converted, such as long int, long double, and long long; those keep their own width (typically 64 bits).
When you use a prototype for a function in C (ANSI C; the original K&R specification didn't define parameters this way), you declare a formal parameter as having a type. When you match it against an actual expression, two things can happen:
The formal parameter and the actual expression have the same type. In this case, everything is fine and the expression's value is used to initialize the parameter before the function is called.
The formal parameter and the actual expression have different types. In that case, the compiler tries, if possible, to convert the actual expression's type to the formal parameter's type automatically.
In case no prototype is found, the rules you quoted above apply, so chars and shorts get promoted to int, and all floating-point values get promoted to double.
The last sentence in your quoted paragraph tells you that, in some example (not shown), those types are used for the formal parameters to make sure the actual expressions get converted to the formal parameters' types.
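A minimal sketch of the book's point (the function add_scaled is made up for illustration): the parameters are declared int and double, so char and float arguments are converted at the call, matching what the promotions would produce anyway.

#include <stdio.h>

int add_scaled(int n, double factor)
{
    return (int)(n * factor);
}

int main(void)
{
    char c = 10;
    float f = 2.5f;
    /* c is converted to int and f to double, as if by assignment */
    printf("%d\n", add_scaled(c, f));  /* prints 25 */
    return 0;
}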
In my main function, I use the following code
float f = 32.0;
func("test string %f", f);
func (these are all example names) is declared as follows
void func(const char *str, ...);
In my implementation of this function, I use a union called all_types to obtain the value of the arguments that are passed
union all_types
{
    void *v;
    CLObject *obj;
    char *s;
    long l;
    char c;
    float f;
    int i;
    double d;
};
and then give a value to that union like this
union all_types *o = calloc(1, sizeof(union all_types));
while ((o->v = va_arg(list, void *)) != NULL)
Now, when I know the argument is a float, the value for it will be very strange (I set a breakpoint to figure it out). The i and l values on the union will be 32, as they should. However, the f value is some weird number like 0.00000000000000000000000000000000000000000013592595. Does anyone know why I am getting this behavior? This function works for every other type of object I have tested.
The va_arg macro's second argument is the actual type of the actual argument. No conversion takes place as a result of the va_arg invocation. If you don't know the actual type of the actual argument, you're out of luck because there is no way to find out.
Note that default argument conversions do take place in the call itself, so it is impossible to receive a float, char or unsigned short. (The float will be converted to double and the other two to int or unsigned int, depending.)
This is why printf formats make you specify the type of the argument; there is no conversion specifier for float, since a float argument always arrives as a double.
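To illustrate (print_promoted is a made-up name): inside a variadic function you must read the promoted type, never the original narrow type.

#include <stdio.h>
#include <stdarg.h>

void print_promoted(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        /* a float argument arrives as double, so read a double;
           va_arg(ap, float) would be undefined behavior */
        double d = va_arg(ap, double);
        printf("%f\n", d);
    }
    va_end(ap);
}

int main(void)
{
    float f = 32.0f;
    print_promoted(1, f);  /* f is promoted to double in the call: prints 32.000000 */
    return 0;
}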
What you are doing invokes undefined behavior. Variadic functions convert floats to double, and the undefined behavior comes in because void * is not compatible with double, so you can have no expectation as to the result. We can see this by going to the draft C99 standard, section 7.15.1.1 The va_arg macro, which says:
[...]If there is no actual next argument, or if type is not compatible with the type of the actual next argument (as promoted according to the default argument promotions), the behavior is undefined,[...]
The correct way to do this would be:
o->d = va_arg(list, double);
and since the format string "test string %f" contains the %f specifier, you can tell when a double should be read.
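Here is a minimal sketch of that approach, assuming a trimmed-down version of the union and only the %f and %d specifiers (the parsing logic is illustrative, not the asker's full implementation):

#include <stdio.h>
#include <stdarg.h>

union all_types
{
    long l;
    int i;
    double d;
};

void func(const char *str, ...)
{
    va_list list;
    va_start(list, str);
    for (const char *p = str; *p != '\0'; p++) {
        if (*p != '%' || p[1] == '\0')
            continue;
        union all_types o = {0};
        switch (*++p) {
        case 'f':
            o.d = va_arg(list, double);  /* the float was promoted to double */
            printf("got double: %f\n", o.d);
            break;
        case 'd':
            o.i = va_arg(list, int);
            printf("got int: %d\n", o.i);
            break;
        }
    }
    va_end(list);
}

int main(void)
{
    float f = 32.0f;
    func("test string %f", f);  /* prints "got double: 32.000000" */
    return 0;
}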
#include <stdio.h>

int main()
{
    int x, y;
    int z;
    char s = 'a';
    x = 10; y = 4;
    z = x / y;
    printf("%d\n", s);  // 97
    printf("%f", z);    // some odd sequence
    return 0;
}
In the above piece of code, the char s is automatically converted to int while printing due to the int type in the control string, but in the second case the int-to-float conversion doesn't happen. Why so?
In both cases the second argument is promoted to int. This is how variadic functions work, and has nothing to do with the format string.
The format string is not even looked at by the compiler: it's just an argument to some function. Well, a really helpful compiler might know about printf() and might look at the format string, but only to warn you about mistakes you might have made. In fact, gcc does just that:
t.c:9: warning: format ‘%f’ expects type ‘double’, but argument 2 has type ‘int’
It is ultimately your responsibility to ensure that the variadic arguments match the format string. Since in the second printf() call they don't, the behaviour of the code is undefined.
Functions with a variable number of arguments follow the rules of the default argument promotions: integer promotion rules are applied to arguments of integer types, and float arguments are converted to double.
printf("%d\n",s);
s is a char and is converted to int.
printf("%f",z);
z is already an int, so no conversion is performed on z.
Now, the conversion specifier f expects a double, but the type of the object after the default argument promotions is int, so it is undefined behavior.
Here is what C says about arguments of library functions that take a variable number of arguments:
(C99, 7.4.1p1) "If an argument to a function has [...] a type (after promotion) not expected by a function with variable number of arguments, the behavior is undefined."
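A minimal fix for the code above is to convert the argument explicitly (a snippet, assuming the surrounding program):

printf("%f\n", (double)z);  /* %f now receives a double: prints 2.000000 */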
The char is not being promoted to int because of the control string. The char works as an int because, under a typical 32-bit cdecl calling convention for variadic functions, all data smaller than 4 bytes is bumped up to 4 bytes, the size of an int, when passed to printf (the point of this is that the next piece of data will be aligned on a 4-byte boundary on the stack).
printf is not type-safe and has no idea what data you really pass it; it blindly reads the control string and extracts a certain number of bytes from the stack based on the sequences it finds, interpreting each set of bytes as the data type corresponding to the conversion sequence. It doesn't perform any conversions, and the reason you are getting some weird printout is that the bits of an int are being interpreted as the bits of a float.
due to the int type in control string
That is incorrect. It is being converted because shorter integer types are promoted to int by the default argument promotions. Integer types are not converted to floating types because the variadic mechanism doesn't know what formats are expected.