I'm playing around with some code in K&R just for fun but have run into an error which I can't explain myself. I'm modifying code on page 9 in section 1.2, i.e. the temperature conversion program:
#include <stdio.h>

/* converts a range of fahrenheit temperatures to celsius
   and displays them in a table */
int main(int argc, char *argv[]){
    float fahr, celsius;
    float lower, upper, step;

    if(argc != 4){
        printf("Usage: ./tempConvert lower upper step\n");
        return 1;
    }

    // note: atof is bad?
    lower = atof(argv[1]); // lower limit of temperature
    upper = atof(argv[2]); // upper limit of temperature
    step = atof(argv[3]);  // step size
    //printf("%f %f %f",lower, upper, step);

    fahr = lower;
    printf("F \t C \n");
    while(fahr <= upper){
        celsius = 5.0*(fahr-32.0)/9.0; // if this were int, 5/9=0 because int division
        printf("%3.1f \t %6.1f\n", fahr, celsius);
        fahr += step;
    }
    return 0;
}
When run, I get an infinite loop. However, when I change atof to atoi, it works perfectly fine other than the fact that I wanted float precision instead of just having integers. Printing out the values right after entering them also gives garbage instead of the numbers I entered. What is causing this difference between using atoi and atof to read in numbers?
You didn't include <stdlib.h>, so your compiler assumes that atof() returns an int, but it doesn't.
You aren't compiling with enough warnings enabled! You should insist that the compiler warns you when you call a function for which there is no prototype in scope. Note that C99 mode will warn you if there's no declaration at all for the function, but it still permits non-prototype declarations.
With GCC, I routinely use this (or -std=c11 and the other options):
gcc -g -O3 -std=c99 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes \
-Wold-style-definition -Wold-style-declaration -Werror ...
Your code would not compile under those options.
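For reference, a stripped-down sketch of the fix; the program name atofdemo and the single-argument usage are just illustrative, the point is only the extra include (or an equivalent prototype) that tells the compiler atof() returns a double:

#include <stdio.h>
#include <stdlib.h>   /* declares double atof(const char *); without it, the
                         compiler assumes atof() returns an int */

int main(int argc, char *argv[]){
    float value;

    if(argc != 2){
        printf("Usage: ./atofdemo number\n");
        return 1;
    }
    /* With the prototype in scope, the double returned by atof() is
       converted to float correctly instead of being misread as an int. */
    value = atof(argv[1]);
    printf("%f\n", value);
    return 0;
}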
I am trying to write a function that repeatedly adds 0.001 to 't' and then plugs it into 'y' until 't' reaches 0.3. However, the numbers come out wrong, but I've noticed that if I change float to int and change the numbers to integers, the function works. What should I change so the function works properly?
#include <stdio.h>
#include <math.h>

void main(void)
{
    float t, y, dt;

    dt = 0.001;
    y = 1;
    t = 0;
    while (t <= 0.3)
    {
        y = y + dt*(sin(y)+(t)*(t)*(t));
        t = t + dt;
    }
    printf("y is %d when t is 0.3\n" , y);
    return 0;
}
As said in a comment, the problem is the way you (try to) print the value in
printf("y is %d when t is 0.3\n" , y);
%d assumes the corresponding argument is an int and prints it as an int, but y is a float. Note that there is no conversion from float to int in that case, because the argument is passed through varargs (the float is actually promoted to double by the default argument promotions, so printf receives a double where it expects an int).
Just do
printf("y is %f when t is 0.3\n" , y);
Also change
void main(void)
to
int main()
After the changes, compilation and execution:
/tmp % gcc -pedantic -Wall -Wextra f.c -lm
/tmp % ./a.out
y is 1.273792 when t is 0.3
Note that all the calculations are done in double, so it is better to declare your variables as double rather than float.
(edit) Compiling your initial code with gcc and the option -Wall flags your problems:
/tmp % gcc -Wall f.c -lm
f.c:4: warning: return type of 'main' is not 'int'
f.c: In function 'main':
f.c:18: warning: format '%d' expects type 'int', but argument 2 has type 'double'
f.c:19: warning: 'return' with a value, in function returning void
Using both -Wall and -Wextra is the better option.
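Putting those fixes together, a corrected version might look like this (a sketch, with the variables switched to double as suggested above):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double t = 0, y = 1, dt = 0.001;   /* double matches what sin() computes with */

    while (t <= 0.3)
    {
        y = y + dt*(sin(y) + t*t*t);
        t = t + dt;
    }
    printf("y is %f when t is 0.3\n", y);
    return 0;
}

Compile with the math library, e.g. gcc -Wall -Wextra f.c -lm, as in the transcript above.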
When I declare a variable as float and subtract two hexadecimal numbers, I keep getting a different answer every time I compile and run it, whereas if I declare an integer variable the result stays the same every time I compile and run the code. I don't understand why the result changes every time when it is stored in a float, given that it is the difference of the same two numbers (0xFF0000 - 0xFF7FF).
int main()
{
    float BlocksLeft = 0xFF0000 - 0xFF7FF;
    int BLeft = 0xFF0000 - 0xFF7FF;

    printf("%08x\n", BlocksLeft);
    printf("%08x\n", BLeft);
}
The following line is incorrect:
printf("%08x\n", BlocksLeft);
The %x format tells printf that the corresponding argument is an unsigned int, but you are passing a float (which is promoted to double when given to a variadic function). This leads to undefined behavior. I tried to compile your code and I got:
>gcc -Wall -Wextra -Werror -std=gnu99 -o stackoverflow.exe stackoverflow.c
stackoverflow.c: In function 'main':
stackoverflow.c:15:4: error: format '%x' expects argument of type 'unsigned int', but argument 2 has type 'double' [-Werror=format=]
printf("%08x\n", BlocksLeft);
^
Please try to compile with a stronger warning level, at least -Wall.
You can correct your program this way, for instance:
#include <stdio.h>

int main()
{
    float BlocksLeft = 0xFF0000 - 0xFF7FF;
    int BLeft = 0xFF0000 - 0xFF7FF;

    printf("%08x\n", (int) BlocksLeft); // Works because BlocksLeft's value is non-negative
    // or
    printf("%08x\n", (unsigned int) BlocksLeft);
    // or
    printf("%.8e\n", BlocksLeft);

    printf("%08x\n", BLeft);
}
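If the intent behind %08x was to look at the raw bit pattern of the float rather than its numeric value, the defined way is to copy the bytes into an unsigned integer first. A sketch, assuming float and unsigned int are both 32 bits on your platform:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float BlocksLeft = 0xFF0000 - 0xFF7FF;
    unsigned int bits;

    /* Copy the object representation; unlike passing a float where printf
       expects an unsigned int, this is well defined. */
    memcpy(&bits, &BlocksLeft, sizeof bits);
    printf("%08x\n", bits);
    return 0;
}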
I'm trying to convince gcc (4.8.1) or clang (3.4) to vectorize the following
code on an Ivy Bridge processor:
#include "stdlib.h"
#include "math.h"
float sumsqr(float *v, float mean, size_t n) {
float ret = 0;
for(size_t i = 0; i < n; i++) {
ret += pow((v[i] - mean), 2);
}
return ret;
}
and compiling it without success:
$ gcc -std=c99 -O3 -march=native -mtune=native -ffast-math -S foo.c
Is there a way to modify the code (without using intrinsics), or to modify the gcc invocation, in order to obtain vectorized code?
The pow function is very general and it may not be visible to the compiler what it does (remember that it can compute things like pow(1.8, -3.19)). So it might help to use only built-in operations and not make function calls:
for(size_t i = 0; i < n; i++)
{
    float const x = v[i] - mean;
    ret += x * x;
}
First, don't use pow if you don't have to; plain multiplication lets gcc vectorize. Now, to explain why you are getting this behavior: notice that if you replace pow with powf, gcc manages to vectorize. gcc knows that pow(x,2) is x*x, but the issue here is that pow is a function for double. So the compiler must convert the number v[i]-mean to double, compute the square as a double, add it to ret as a double, and only then convert back to float. If at least ret were a double, the compiler could vectorize, but as it is, all those conversions make it too complicated and not worth vectorizing.
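For concreteness, here is the powf variant that the answer above says gcc manages to vectorize (a sketch; the sumsqr_powf name is just illustrative, and plain x*x as shown in the previous answer works at least as well):

#include <stdlib.h>
#include <math.h>

/* Same computation, but powf keeps the arithmetic entirely in float,
   which avoids the float -> double -> float round trips described above. */
float sumsqr_powf(float *v, float mean, size_t n) {
    float ret = 0;
    for(size_t i = 0; i < n; i++) {
        ret += powf(v[i] - mean, 2);
    }
    return ret;
}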
I wish to generate random numbers between 0 and 1. (Obviously, this has application elsewhere.)
My test code:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main() {
    double uR;
    srand(1);
    for(int i=0;i<5;i++){
        uR = rand()/(RAND_MAX+1.000);
        printf("%d \n", uR);
    }
}
And here's the output after the code is compiled with GCC:
gcc -ansi -std=c99 -o rand randtest.c
./rand
0
-251658240
910163968
352321536
-528482304
Upon inspection, it turns out that casting the integer RAND_MAX to a double has the effect of changing its value from 2147483647 to -4194304. This occurs regardless of the method used to change RAND_MAX to type double; so far, I've tried (double)RAND_MAX and double max = RAND_MAX as well.
Why does the number's value change? How can I stop that from happening?
You can't print a double with %d. If you use %f, it works just fine.
You are printing a double value as a decimal integer, which is what is causing your confusion.
Use %.6f or something similar.
You are passing a double (uR) to printf when it expects a signed int. You should either print it with %f or cast it:
printf("%d \n", (int)uR);
In the C programming language, a floating-point constant is of type double by default,
so 3.1415 is a double unless the 'f' or 'F' suffix is used to indicate float.
I assumed const float pi = 3.1415 would cause a warning, but it actually does not.
When I try these under gcc with -Wall:
float f = 3.1415926;
double d = 3.1415926;
printf("f: %f\n", f);
printf("d: %f\n", d);
f = 3.1415926f;
printf("f: %f\n", f);
int i = 3.1415926;
printf("i: %d\n", i);
The result is:
f: 3.141593
d: 3.141593
f: 3.141593
i: 3
The results (including the double variable) obviously lose precision, but the code compiles without any warning.
So what did the compiler do here? Or did I misunderstand something?
-Wall does not enable warnings about loss of precision, truncation of values, etc. because these warnings are annoying noise and "fixing" them requires cluttering correct code with heaps of ugly casts. If you want warnings of this nature you need to enable them explicitly.
Also, your use of printf has nothing to do with the precision of the actual variables, just the precision printf is printing at, which defaults to 6 places after the decimal point.
%f can be used with float and double. If you want more precision, use
printf("f: %.16f",d);
And this is what's going on under the hood:
float f = 3.1415926; // The double constant 3.1415926 is converted (rounded) to float
double d = 3.1415926;
printf("f: %f\n", f);
printf("d: %f\n", d);
f = 3.1415926f; // Float is specified
printf("f: %f\n", f);
int i = 3.1415926; // Truncation from double to int
printf("i: %d\n", i);
If you want to get warnings for this, I believe that -Wconversion flags them in mainline gcc-4.3 and later.
If you happen to use OS X, -Wshorten-64-to-32 has been flagging them in Apple's GCC since gcc-4.0.1. I believe that clang matches the mainline gcc behavior, however.
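As a hedged sketch of what that looks like in practice (the exact diagnostic wording varies by compiler and version), code like the following should be flagged once -Wconversion is enabled, for example with gcc -Wall -Wextra -Wconversion conv.c:

/* conv.c */
#include <stdio.h>

int main(void) {
    float f = 3.1415926;   /* implicit double -> float conversion changes the value */
    int   i = 3.1415926;   /* implicit double -> int conversion discards the fraction */
    printf("f: %f, i: %d\n", f, i);
    return 0;
}

The f-suffixed constant from the question (3.1415926f) involves no implicit conversion, so it should stay quiet even at that warning level.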