When I execute this code it prints 1610612736:
void main() {
    float a = 3.3f;
    int b = 2;
    printf("%d", a * b);
}
Why, and how do I fix this?
Edit: it's not even a matter of int versus float; if I replace int b=2; with float b=2.0f it prints the same silly result.
The result of multiplying a float and an int is a float. Besides that, it gets promoted to double when passed to printf. You need a %a, %e, %f, or %g conversion; the %d conversion is for printing int values.
Editorial note: The return value of main should be int. Here's a fixed program:
#include <stdio.h>
int main(void)
{
    float a = 3.3f;
    int b = 2;
    printf("%a\n", a * b);
    printf("%e\n", a * b);
    printf("%f\n", a * b);
    printf("%g\n", a * b);
    return 0;
}
and its output:
$ ./example
0x1.a66666p+2
6.600000e+00
6.600000
6.6
Alternatively, you could also do
printf("%d\n", (int)(a*b));
and this would print the result you're (kind of) expecting.
You should always make the argument type match the format string, casting explicitly where necessary; otherwise you may see some weird values printed.
Related
I wanted to see the difference in how many digits I get when using float versus double, but I get the same results.
#include <stdio.h>

int main()
{
    float x = 1.2222222222222222f;
    printf("%f %d", x, sizeof(x)); // This is what it prints out: 1.222222 4
    return 0;
}

#include <stdio.h>

int main()
{
    double x = 1.2222222222222222;
    printf("%f %d", x, sizeof(x)); // This is what it prints out: 1.222222 8
    return 0;
}
It prints out the same value even though double is obviously twice the size and should store more digits. What am I doing wrong?
sizeof yields a size_t. To print a size_t you need %zu instead of %d.
If you want to see the real difference between float and double, you need to print more digits using %.NUMBERf, like:
#include <stdio.h>

int main(void)
{
    float x = 1.2222222222222222f;
    printf("%.70f %zu\n", x, sizeof(x));
    double y = 1.2222222222222222;
    printf("%.70f %zu\n", y, sizeof(y));
    return 0;
}
Output:
1.2222222089767456054687500000000000000000000000000000000000000000000000 4
1.2222222222222220988641083749826066195964813232421875000000000000000000 8
It prints out the same value even though double is obviously twice the size and should store more digits.
When passing a float as a ... argument to printf(), it is first promoted to a double. "%f" prints that double rounded to 6 places after the decimal point.
Since the original values do not differ when rounded to 6 places after the decimal point, they appear the same.
What am I doing wrong?
You are expecting the default precision of 6 to be enough to distinguish them; it is not.
It is easiest to see the difference with "%a".
printf("%a\n", 1.2222222222222222);
printf("%a\n", 1.2222222222222222f);
0x1.38e38e38e38e3p+0
0x1.38e38ep+0
or with sufficient decimal places in exponential notation (DBL_DECIMAL_DIG comes from <float.h>).
printf("%.*e\n", DBL_DECIMAL_DIG - 1, 1.2222222222222222);
printf("%.*e\n", DBL_DECIMAL_DIG - 1, 1.2222222222222222f);
1.2222222222222221e+00
1.2222222089767456e+00
My C program takes a float value as a command-line argument, so I need to convert it from a string to a float and then to an integer. I use round() from math.h and then want to cast to int.
I cast by writing (int) before the value, but the value's type does not change.
Code below:
double *f = (double *) malloc(sizeof(double));
*f = (int)roundf(atof(argv[1]));
printf("Testing f: %d", *f);
make gives this error message:
format specifies type 'int' but the argument has type 'double'
[-Werror,-Wformat]
printf("Testing f: %d", *f);
You're storing an int into a double, so the format still mismatches. The f variable should be of type int (and since atof returns a double, round fits better than roundf):
int f;
f = (int)round(atof(argv[1]));
C has a direct way to round and convert a float to an integer type
long int lround(double x);
The lround and llround functions round their argument to the nearest integer value, rounding halfway cases away from zero, regardless of the current rounding direction. ... C11 7.12.9.7 2
#include <math.h>
#include <stdlib.h>
long i = lround(atof(argv[1]));
// or
float f = atof(argv[1]);
long i = lroundf(f);
// Use %.0f to print a FP with no fractional digits.
printf("%.0f\n", f);
// Use %ld to print a `long`
printf("%ld\n", i);
The error is in the %d format specifier in the printf function:
printf("Testing f: %d", *f); /* %d expects an integer but *f is a double */
It doesn't matter that *f contains a rounded number, it is still a double as far as printf is concerned.
Try this:
printf("testing f: %d", (int)*f);
N.B. why are you going to the trouble of using malloc to allocate a single double? If you just need to pass one double to some other function, you could simply have:
double f;
... stuff ...
foo(&f);
Using:
float return1(void);

int main()
{
    int x;
    x = (float)return1();
    printf("%f", x);
    return 0;
}

float return1()
{
    return 1;
}
Why is the output -0.000000?
Shouldn't x be implicitly cast to a float and print 1.000000?
Shouldn't x be implicitly cast to a float and print 1.000000?
No, it shouldn't, because the compiler does not necessarily know what printf does or which conversion will be used to print x.
x has type int, so %d should be used instead of %f to print it.
Why is the output -0.000000?
because of
printf("%f", x);
while x is an int. If you want 1.000000, do
printf("%f", (double)x);
or, better, change it to:
printf("%i", x);
printf doesn't cast; it interprets each argument according to the conversion specifier given in the format string (%f here).
Depending on what you want:
printf("%d", x);
or
printf("%f", (float)x);
or in C++:
std::cout << x; // or float(x)
There is no need to cast return1() to float; your x is an int, so print it with %d:
float return1(void);

int main(void)
{
    int x;
    x = return1();
    printf("%d", x);
    return 0;
}

float return1()
{
    return 1;
}
No, it shouldn't. The compiler generally does not check the correspondence between the format string and the arguments you provide. In your case it could, but what if someone passes a string variable as the format string? That is why most compilers do not check the correspondence by default.
I'm working on a Lab assignment for my introduction to C programming class and we're learning about casting.
As part of an exercise I had to write this program and explain the casting that happens in each exercise:
#include <stdio.h>

int main(void)
{
    int a = 2, b = 3;
    float f = 2.5;
    double d = -1.2;
    int int_result;
    float real_result;

    // exercise 1
    int_result = a * f;
    printf("%d\n", int_result);

    // exercise 2
    real_result = a * f;
    printf("%f\n", real_result);

    // exercise 3
    real_result = (float) a * b;
    printf("%f\n", real_result);

    // exercise 4
    d = a + b / a * f;
    printf("%d\n", d);

    // exercise 5
    d = f * b / a + a;
    printf("%d\n", d);

    return 0;
}
I get the following output:
5
5.000000
6.000000
1074921472
1075249152
For the last two outputs, the mathematical operations that are conducted result in float values. Since the variable they're being stored in is of the type double, the cast from float to double shouldn't affect the values, should it? But when I print out the value of d, I get garbage numbers as shown in the output.
Could someone please explain?
But when I print out the value of d, I get garbage numbers as shown in the output.
You are using %d as the format instead of %f or %lf. When the format specifier and the argument type don't match, you get undefined behavior.
%d takes an int (and prints it in decimal format).
%f takes a double.
%lf is either an error (C89) or equivalent to %f (since C99).
Programming C using Xcode; here's the code:
#include <stdio.h>

int multiply (int x, int y) {
    return x * y;
}

int main()
{
    float result = (float) multiply (0.2, 0.5);
    printf("the result is %f", result);
}
I don't get the right value, I get 0.000000! I did the cast but I don't know what's wrong.
Your program multiplies 0 by 0.
multiply takes two int parameters, so your 0.2 and 0.5 are implicitly converted to int before making the call. That truncates both to 0.
Your typecast doesn't do anything in this program, since the return value of multiply (which is an int) will get implicitly converted during the assignment to result anyway.
You need to change the definition of multiply (or add a floating-point version and call that) if you want this program to work correctly.
The multiply () input arguments are int:
int multiply (int x, int y) {
and you have passed float as input arguments:
multiply (0.2, 0.5);
Hi, there is a basic problem: the numbers you are multiplying are floating-point, but you are passing them into the function multiply as ints, so they are truncated to 0 and 0.
This should work (note that multiply must also return float, or the product is truncated on the way back):

#include <stdio.h>

float multiply (float x, float y) {
    return x * y;
}

int main()
{
    float result = multiply (0.2, 0.5);
    printf("the result is %f", result);
}