Floating point number generation in C on MontaVista on PPC

I have the following simple program to generate floating point random numbers between 1 and 4:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int i = 0;
    float u;

    srand((unsigned)time(NULL));
    for (i = 0; i < 10000; i++) {
        u = ((4 - 1) * ((float)rand() / RAND_MAX)) + 1;
        printf("The random value for iteration = %d is %2.4f \n", i, u);
    }
    return 0;
}
It successfully generates floating point random numbers between 1 and 4 on an x86 Red Hat Linux machine, but the same program produces 0.0000 as the random number on a PPC running MontaVista Linux.
Could someone explain why, and how to make this work on PPC MontaVista?

A hunch is that you should be using double instead of float, or printing (double)u, since %f takes a double. I was under the impression that float arguments are automatically promoted to double when passed to a variadic function, though.
You could also try printing (int)(u*10000), to see whether the computed value or the printing is at fault.
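A minimal diagnostic sketch along those lines, doing the arithmetic in double and printing the scaled integer view alongside %f, might look like this:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 10; i++) {
        /* Do the arithmetic in double and print both representations:
           if the integer view looks sane while %f prints 0.0000, the
           problem is in the varargs/printf path, not the computation. */
        double u = (4.0 - 1.0) * ((double)rand() / RAND_MAX) + 1.0;
        printf("u = %f, scaled = %d\n", u, (int)(u * 10000));
    }
    return 0;
}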

Related

C - Subtraction of numbers with many decimals

Why does the C code below output "Difference: 0.000000"? I need to make calculations with many decimals in one of my university tasks, and I don't understand this because I'm new to programming in C. Am I using the correct type? Thanks in advance.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <math.h>

int main() {
    long double a = 1.00000001;
    long double b = 1.00000000;
    long double difference = a - b;
    printf("Difference: %Lf", difference);
}
I have tried that code and I'm expecting to get the result: "Difference: 0.00000001"
You see 0.000000 because %Lf prints a fixed number of decimal places, and the default number is 6. In your case, the difference is 1 in the 8th decimal place, which shows as 0.000000 when printed to 6 d.p. Either use %Le or %Lg or specify more precision: %.8Lf.
#include <stdio.h>

int main(void)
{
    long double a = 1.00000001;
    long double b = 1.00000000;
    long double difference = a - b;

    printf("Difference: %Lf\n", difference);
    printf("Difference: %.8Lf\n", difference);
    printf("Difference: %Le\n", difference);
    printf("Difference: %Lg\n", difference);
    return 0;
}
Note the minimal set of headers.
Output:
Difference: 0.000000
Difference: 0.00000001
Difference: 1.000000e-08
Difference: 1e-08
#include <stdio.h>

int main() {
    long double a = 1.000000001;
    long double b = 1.000000000;
    long double difference = a - b;
    printf("Difference: %.9Lf\n", difference);
}
Try this code. You need to tell printf how much precision you want after the decimal point: here, the .9 prints 9 digits after the decimal point. You can adjust this value to your needs; just don't ask for more digits than the type can meaningfully hold.

How to let user decide how many decimal places they want printed in their float?

How would I make a program that lets the user choose how many decimal places they would like to see printed in their float value?
For example, the following code
#include <stdio.h>

int main() {
    float x;
    x = 0.67183377;
    printf("%.2f\n", x);
}
Would give us an output of 0.67. But what if the user wanted to see the full number, or up to the fourth decimal place, for example? How would they do this?
See printf() for the details. Use:
printf("%.*f\n", n, x);
where n is an int that contains the number of decimal places to be printed. Note that a float can only really hold about 6-7 significant decimal digits; the 8th one in the example will be largely irrelevant.
#include <stdio.h>

int main(void)
{
    float x = 0.67183377;

    for (int n = 0; n < 10; n++)
        printf("%.*f\n", n, x);
    return 0;
}
Example output:
1
0.7
0.67
0.672
0.6718
0.67183
0.671834
0.6718338
0.67183375
0.671833754
The value is converted to a double before it is passed to printf() because that happens with all variadic functions. When x is changed to a double, the output is:
1
0.7
0.67
0.672
0.6718
0.67183
0.671834
0.6718338
0.67183377
0.671833770
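To let the user actually choose at run time, read n with scanf() and pass it to %.*f. A minimal sketch (the prompt text is invented for illustration):
#include <stdio.h>

int main(void)
{
    double x = 0.67183377;
    int n;

    /* Let the user pick the number of decimal places. */
    printf("How many decimal places? ");
    if (scanf("%d", &n) == 1 && n >= 0)
        printf("%.*f\n", n, x);
    return 0;
}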

Reading a CSV into a grid in C with 18+ digits of precision

I have a file named "Q.csv" that is a text file containing:
-4.999118817962964961e-07 1.500000000000000197e-08 9.151794806638024753e-08 9.151794806638024753e-08
I'd like to read this as a 2x2 matrix into a C program. Currently my program looks like this.
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *inf = fopen("Q.csv", "r");
    int nrows = 2;
    int ncols = 2;
    double grid[nrows][ncols];
    double t;
    size_t x, y;

    for (x = 0; x < ncols; ++x) {
        for (y = 0; y < nrows; ++y) {
            fscanf(inf, "%lg ", &t);
            grid[x][y] = t;
            printf("%lg\n", grid[x][y]);
        }
    }
    fclose(inf);
}
This program works, but it doesn't output all of the precision that was originally in Q.csv; for example, it outputs the first number as -4.99912e-07. If I take away the %lg and change it to %.18g, for example, it outputs the wrong thing, and the same happens with %.18lg. In fact, both of these result in the mystery number 4.94065645841246544e-324 repeated (where is this coming from?!)
What can I do to get all of the precision in the file into C?
Input is already being read with fscanf(inf, "%lg ", &t) as accurately as it can be.
To print out with 18 digits after the decimal point, use
printf("%.*le\n",18, grid[x][y]);
To check whether double has sufficient precision, look at the value of DBL_DIG in <float.h>, which must be at least 10; in practice, a double is only good for about 15-17 significant digits in total.
To get all the precision, see Printf width specifier to maintain precision of floating-point value
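A minimal sketch pulling both points together might look like this; note that DBL_DECIMAL_DIG requires C11, and with older compilers %.17g is a reasonable stand-in for IEEE 754 double:
#include <stdio.h>
#include <float.h>

int main(void)
{
    double t = -4.999118817962964961e-07;   /* first value from Q.csv */

    printf("DBL_DIG = %d\n", DBL_DIG);
    /* DBL_DECIMAL_DIG significant digits are enough to reproduce the
       exact double on a round trip through text. */
    printf("%.*e\n", DBL_DECIMAL_DIG - 1, t);
    printf("%.17g\n", t);
    return 0;
}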

Multiplication of float with int in C

#include <stdio.h>

int main() {
    float var = 0.612;
    printf("%f\n", var);
    printf("%f\n", var * 100);
    return 0;
}
Output:
0.612000
61.199997
I found that JavaScript has the .toFixed() method for this.
How do we get a fix for this in C?
You can specify the precision when printing:
printf("%.3f\n", 100 * var);
Since the exact number you have probably isn't representable in the float itself, there is no operation you can perform on the number itself to "remove" the decimals; it's all a matter of how you choose to present the data.
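If you want a reusable toFixed()-style helper rather than a format string at every call site, one sketch (the function name to_fixed is invented here) formats into a caller-supplied buffer with snprintf():
#include <stdio.h>

/* toFixed()-style helper: format value with the requested number of
   decimals into buf. The name to_fixed is made up for this sketch. */
static void to_fixed(char *buf, size_t size, double value, int decimals)
{
    snprintf(buf, size, "%.*f", decimals, value);
}

int main(void)
{
    float var = 0.612;
    char buf[32];

    to_fixed(buf, sizeof buf, 100 * var, 3);
    printf("%s\n", buf);   /* prints 61.200 */
    return 0;
}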

RAND_MAX exhibiting unusual behavior upon recasting

I wish to generate random numbers between 0 and 1. (Obviously, this has application elsewhere.)
My test code:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main() {
    double uR;
    srand(1);
    for (int i = 0; i < 5; i++) {
        uR = rand() / (RAND_MAX + 1.000);
        printf("%d \n", uR);
    }
}
And here's the output after the code is compiled with GCC:
gcc -ansi -std=c99 -o rand randtest.c
./rand
0
-251658240
910163968
352321536
-528482304
Upon inspection, it turns out that casting the integer RAND_MAX to a double has the effect of changing its value from 2147483647 to -4194304. This occurs regardless of the method used to change RAND_MAX to type double; so far, I've tried (double)RAND_MAX and double max = RAND_MAX as well.
Why does the number's value change? How can I stop that from happening?
You can't print a double with %d. If you use %f, it works just fine.
You are printing a double value as a decimal integer - which is causing you confusion.
Use %.6f or something similar.
You are passing a double (uR) to printf when it expects a signed int. You should cast it, or print it with %f:
printf("%d \n", (int)uR);
