RAND_MAX exhibiting unusual behavior upon recasting - c

I wish to generate random numbers between 0 and 1. (Obviously, this has application elsewhere.)
My test code:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main() {
    double uR;
    srand(1);
    for (int i = 0; i < 5; i++) {
        uR = rand() / (RAND_MAX + 1.000);
        printf("%d \n", uR);
    }
}
And here's the output after the code is compiled with GCC:
gcc -ansi -std=c99 -o rand randtest.c
./rand
0
-251658240
910163968
352321536
-528482304
Upon inspection, it turns out that casting the integer RAND_MAX to a double has the effect of changing its value from 2147483647 to -4194304. This occurs regardless of the method used to change RAND_MAX to type double; so far, I've tried (double)RAND_MAX and double max = RAND_MAX as well.
Why does the number's value change? How can I stop that from happening?

You can't print a double with %d. If you use %f, it works just fine.

You are printing a double value as a decimal integer - which is causing you confusion.
Use %.6f or something similar.

You are passing a double (uR) to printf when it expects a signed int. You should cast it, or print it with %f:
printf("%d \n", (int)uR);

Related

Why does the tanh function from math.h give the wrong result?

#include <stdio.h>
#include <math.h>

#define PI 2*acos(0.0)

int main(void)
{
    double theta;
    theta = tanh(1/(sqrt(3.0)));
    printf("With tanh function = %lf\n", theta);
    printf("Actual value = %lf\n", PI/6.0);
    return 0;
}
Output:
With tanh function = 0.520737
Actual value = 0.523599
Why are these two values different? They should be the same, as far as I understand.
You've got that identity completely wrong.
The actual identity is
tanh⁻¹(i ⁄ √3) = πi ⁄ 6 (where i is the imaginary unit, √-1)
C11 can easily validate that:
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    complex double theta = catanh(I/sqrt(3.0));
    printf("With atanh function = %lf\n", cimag(theta));
    printf("Actual value = %lf\n", M_PI/6);
    return 0;
}
(Live on coliru: http://coliru.stacked-crooked.com/a/f3df5358a2be67cd):
With atanh function = 0.523599
Actual value = 0.523599
M_PI is declared in math.h on any POSIX-compliant system. Apparently, on Windows you use
#define _USE_MATH_DEFINES
but I have no idea whether Visual Studio supports complex.h.
Your program has a couple of minor flaws, but none that cause it to misbehave.
Your PI macro should be parenthesized:
#define PI (2*acos(0.0))
but you happen to get away without the parentheses because of the way you use it.
The correct format for printing a double value is actually %f, but %lf is accepted as well. (%Lf is for long double. %f also works for float, because float arguments to variadic functions are promoted to double). This also doesn't affect your program's behavior.
In fact, your program is working correctly. I've confirmed using an HP 42S emulator that tanh(1/(sqrt(3.0))) is approximately 0.520737 (I get 0.520736883716).
The problem is your assumption that the result should be π/6.0. It isn't.
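
For a real-valued sanity check, a minimal sketch that needs no complex arithmetic:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* tanh(1/sqrt(3)) is about 0.520737; pi/6 is about 0.523599 */
    printf("tanh(1/sqrt(3)) = %f\n", tanh(1.0 / sqrt(3.0)));
    printf("pi/6            = %f\n", 2.0 * acos(0.0) / 6.0);
    return 0;
}

The two printed values differ, confirming that the real-valued identity the question assumed does not hold.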

Calculating with a float in macros in C

My colleague and I are studying for a test where we have to analyze C code. Looking through the tests from previous years, we saw the following code, which we don't really understand:
#include <stdio.h>

#define SUM(a,b) a + b
#define HALF(a) a / 2

int main(int argc, char *argv[])
{
    int big = 6;
    float small = 3.0;
    printf("The average is %d\n", HALF(SUM(big, small)));
    return 0;
}
This code prints 0, which we don't understand at all... Can you explain this to us?
Thanks so much in advance!
The compiler's warning (format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘double’) gives more than enough information: you need to correct the format specifier, using %lf instead of %d, since you are trying to print a double value.
printf("The average is %lf\n", HALF(SUM(big, small)));
printf treats the memory you hand it however you tell it to. Here, it treats the bits that represent the double as an int. Because the two types are stored differently, you get what is essentially a garbage value; it need not always be 0.
To get correct output:
Add parentheses in the macros
Use the correct format specifier (%f)
Corrected Code
#include <stdio.h>

#define SUM(a, b) ((a) + (b))
#define HALF(a) ((a) / 2)

int main() {
    int big = 6;
    float small = 3.0;
    printf("The average is %f\n", HALF(SUM(big, small)));
    return 0;
}
Output
The average is 4.500000
If you don't add parentheses, output will be 7.500000 due to operator precedence.
In case you need integer output, cast to int before printing.
printf("The average is %d\n", (int)HALF(SUM(big, small)));
Output
The average is 4
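
To see why the parentheses matter, it helps to write out what the preprocessor actually produces (a small sketch; you can inspect the expansion yourself with gcc -E):

#include <stdio.h>

#define SUM(a, b) a + b   /* unparenthesized, as in the question */
#define HALF(a) a / 2

int main(void) {
    int big = 6;
    float small = 3.0;
    /* HALF(SUM(big, small)) expands to big + small / 2,
       which groups as big + (small / 2) = 6 + 1.5 = 7.5 */
    printf("%f\n", HALF(SUM(big, small)));  /* prints 7.500000 */
    return 0;
}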

C math not respecting declared constants

Stackoverflow,
I'm trying to write a (very) simple program that will be used to show how machine precision and floating-point operations affect functions near their roots. My code is as follows:
#include <stdio.h>
#include <math.h>

int main(){
    const float x = 2.2;
    float sum = 0.0;
    sum = pow(x,9) - 18*pow(x,8) + 144*pow(x,7) - 672*pow(x,6) + 2016*pow(x,5) -
          4032*pow(x,4) + 5376*pow(x,3) - 4608*pow(x,2) + 2304*x - 512;
    printf("sum = %d", sum);
    printf("\n----------\n");
    printf("x = %d", x);
    return 0;
}
But I keep getting that sum is equal to zero. At first I thought that maybe my machine wasn't respecting the level of precision, but after printing x I discovered that its value changes each time I run the program and is always huge (abs(x) > 1e6).
I have it declared as a constant, so I'm even more confused as to what's going on...
FYI I'm compiling with gcc -lm
printf("sum = %d", sum);
sum is a float, not an int. You should use %f instead of %d. Same here:
printf("x = %d", x);
Reading about printf() format specifiers may be a good idea.
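
For completeness, a corrected sketch with matching specifiers (compiled the same way, with gcc -lm):

#include <stdio.h>
#include <math.h>

int main(void) {
    const float x = 2.2;
    float sum = pow(x,9) - 18*pow(x,8) + 144*pow(x,7) - 672*pow(x,6)
              + 2016*pow(x,5) - 4032*pow(x,4) + 5376*pow(x,3)
              - 4608*pow(x,2) + 2304*x - 512;
    printf("sum = %g\n", sum);  /* %g shows the tiny near-root value clearly */
    printf("x = %f\n", x);
    return 0;
}

Since this polynomial is (x - 2)^9, the exact value at x = 2.2 is 0.2^9 = 5.12e-7, and the float computation lands somewhere near that, which is exactly the precision effect the program sets out to demonstrate.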

Floating point number generation in C on Montavista on PPC

I have the following simple program to generate floating point random numbers between 1 and 4:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int i = 0;
    float u;
    srand((unsigned)time(NULL));
    for (i = 0; i < 10000; i++)
    {
        u = ((4-1)*((float)rand()/RAND_MAX))+1;
        printf("The random value for iteration = %d is %2.4f \n", i, u);
    }
}
It successfully generates floating point random numbers between 1 and 4 on an x86 Red Hat Linux machine. But the same program produces 0.0000 as random number on a PPC running Montavista Linux.
Could someone explain why, and how to make this work on PPC MontaVista?
A hunch is that you should be using double instead of float or printing (double)u, since %f takes a double. I was under the impression that floats were automatically promoted to double when passed to a vararg function though.
You could also try printing (int)(u*10000).
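
A sketch of that suggestion, using double throughout so there is no float/double mismatch anywhere in the variadic call:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 10; i++) {
        /* scale rand()/RAND_MAX from [0, 1] into [1, 4] */
        double u = (4.0 - 1.0) * ((double)rand() / RAND_MAX) + 1.0;
        printf("The random value for iteration = %d is %2.4f\n", i, u);
    }
    return 0;
}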

GCC compile time division error

Can someone explain this behaviour?
test.c:
#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%f, %f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}
$ gcc test.c
$ ./a.out
6012, 6012
6012.000000, 6013.000000
I checked the assembly, and the compiler passes both arguments of the first printf as 6012, so it seems to be a compile-time bug.
Run
#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%.20f %.20f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}
and it should be clearer. The value of the second expression (after floating-point division, which is not exact) is ~6012.9999999999991, so when you truncate it with (int), gcc is smart enough to put in 6012 at compile time.
When you print the values as floating point, printf by default formats them with only 6 digits of precision, which means the second one prints as 6013.000000.
printf() rounds floating point numbers when you print them. If you add more precision you can see what's happening:
$ cat gccfloat.c
#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%.15f, %.15f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}
$ ./gccfloat
6012, 6012
6012.000000000000000, 6012.999999999999091
Sounds like a rounding error. 300.65000/0.05000 is computed in floating point as something like 6012.99999999. When cast to an int, it gets truncated to 6012. Of course, this is all precomputed by the compiler's optimizer, so the final binary just contains the value 6012, which is what you're seeing.
The reason you don't see the same thing in your second statement is that the value is rounded for display by printf, not truncated as happens when you cast to int. (See @John Kugelman's answer.)
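
As an aside, if nearest-integer behavior is what's wanted rather than truncation, lround from math.h (C99; link with -lm) avoids the surprise. A minimal sketch:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* lround rounds to nearest, so 6012.999... becomes 6013 */
    printf("%ld, %ld\n", lround(300.6000/0.05000), lround(300.65000/0.05000));
    return 0;
}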
