How to demonstrate a better average? [closed]

Here's a question from pages 60-61 of A Book on C (3rd Edition) by Al Kelley/Ira Pohl:
The following code fragment shows two different methods of computing a running average:
int i;
double x;
double avg = 0.0, sum = 0.0;
double navg;

for (i = 1; scanf("%lf", &x) == 1; ++i)
{
    avg += (x - avg) / i;
    sum += x;
    navg = sum / i;
}
The original question as written in the book is: If you input some "ordinary" numbers, the avg and the navg seem to be identical. Demonstrate experimentally that the avg is better, even when sum does not overflow.
My question as a beginning programmer is:
What are the criteria for a "better" algorithm? I believe precision and running time are two key factors, but are there any other things that make an algorithm "better"?
In terms of precision and running time, how can I demonstrate experimentally that, when overflow is ruled out, avg is still a better method than navg? Should I use numbers that are "out of the ordinary", e.g. of widely different magnitudes?

The two algorithms show no significant difference in running time; compared with navg, avg is better in precision.
(1) Running time:
The following two pieces of code demonstrate that, for about 1,000,000 inputs, the two algorithms show no significant difference in running time.
#include <stdio.h>
#include <stdlib.h>   /* rand(), srand() */
#include <time.h>

int main(void)
{
    int i;
    double x, sum = 0, avg = 0;

    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        sum += x;
    }
    avg = sum / i;
    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock() / CLOCKS_PER_SEC);
    return 0;
}
#include <stdio.h>
#include <stdlib.h>   /* rand(), srand() */
#include <time.h>

int main(void)
{
    double sum = 0, avg = 0;
    double x;
    int i;

    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        avg += (x - avg) / (i + 1);
    }
    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock() / CLOCKS_PER_SEC);
    return 0;
}
(2) Precision:
The code below demonstrates that when the differences between the computed average and every x are summed, the result for avg is 0, while for navg it is -2.44718e-005, which suggests that avg is the more precise method.
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    static double data[1000000];
    double sum, avg, check_value;
    int i;
    int n = sizeof(data) / sizeof(data[0]);

    /* fill the array first, so both methods average the same values */
    for (i = 0; i < n; ++i)
    {
        data[i] = 1.3;
    }

    /* method 1: running average */
    avg = 0;
    for (i = 0; i < n; ++i)
    {
        avg += (data[i] - avg) / (i + 1);
    }
    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\navg += (x[i] - avg) / i:\tavg = %g\t check_value = %g", avg, check_value);

    /* method 2: sum first, then divide */
    sum = 0;
    for (i = 0; i < n; ++i)
    {
        sum += data[i];
    }
    avg = sum / n;
    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\n avg = sum / N: \tavg = %g\t check_value = %g\n", avg, check_value);

    getchar();
    return 0;
}

I think this is a valid question, although not too well phrased. A problem is that even the question referred to by furins was not phrased well and was closed before receiving a good answer.
The question itself, however, is interesting, especially since it was considered worth including in a book, so it could lead more people in one direction or the other.
I think neither algorithm is particularly better. With the naive average it looks like we will lose precision, or that when averaging numbers several magnitudes apart we may even lose values entirely, but the same can probably be demonstrated with the other algorithm as well, perhaps just with different sets of input data.
So, especially since it comes from an existing book, I think this is a perfectly valid question seeking a decent answer.
I will try to illustrate what I think about the two algorithms with an example. Imagine you have 4 numbers of roughly the same magnitude and you want to average them.
The naive method sums them up first, one after another. After summing the first two you have obviously lost one bit of precision at the low end (since you now probably have a larger exponent). When you add the last number, you have lost 2 bits (these bits are now used to represent the high part of the sum). But then you divide by four, which in this case is essentially just subtracting 2 from the exponent.
What did we lose during this process? It is easier to answer by asking what would happen if all the numbers had been truncated by 2 bits first. In that case the last two bits of the fraction of the resulting average would obviously be zero, and up to 2 additional bits worth of error may be introduced (if all the truncated bits happened to be ones in the original numbers rather than zeros). So essentially, if the sources were single-precision floats with 23 bits of fraction, the resulting avg would have about 19 bits worth of precision.
The actual result from the naive method is better, though, because the first numbers summed did not lose that much precision.
In the differential method, in each iteration the appropriately weighted difference is added to the running average. If the numbers are of the same magnitude, this difference will most likely be about one magnitude below them. It is then divided by the current count; nothing is lost in that operation, but the resulting difference for the last number (with i=4 in this example) may be about 3 magnitudes below the source numbers. We add this to the running average, which is around the same magnitude as the original numbers.
So with the differential method, adding the last number in this example seems to lose about 3 bits of precision, and over all 4 numbers it may even look like we are down about 5 bits; maybe even worse than the naive method?
The differential method is harder to follow, and maybe I made a mistake somewhere in my assumptions. But I think one thing is clear: it does not seem valid to regard one or the other as performing better in general, and if one does, it probably depends on the layout and magnitude differences of the data.
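To probe that experimentally, here is a minimal sketch (my own test code, not from the book or the answer above) that feeds both methods the same stream containing one huge value followed by many small ones, and compares them against a long double reference (assuming long double carries more precision than double on your platform):
#include <stdio.h>

/* Compare the running-average (avg) and sum-then-divide (navg) methods
 * on data with widely different magnitudes.  Illustrative sketch only. */
int main(void)
{
    enum { N = 1000001 };
    double avg = 0.0, sum = 0.0;
    long double exact_sum = 0.0L;  /* higher-precision reference accumulator */
    int i;

    for (i = 1; i <= N; ++i)
    {
        /* first value is huge, the rest are tiny */
        double x = (i == 1) ? 1.0e16 : 1.0;

        avg += (x - avg) / i;      /* running average */
        sum += x;                  /* plain sum */
        exact_sum += x;
    }

    printf("avg  = %.17g\n", avg);
    printf("navg = %.17g\n", sum / N);
    printf("ref  = %.17g\n", (double)(exact_sum / N));
    return 0;
}
Reordering the input, for example putting the large value last, is an easy way to see how each method degrades differently.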

Note: if the loop counter started at 0 you would be dividing by zero in avg += (x - avg) / i; the fragment above avoids this by starting at i = 1.

Related

What is a more accurate algorithm I can use to calculate the sine of a number?

I have this code that calculates a guess for sine and compares it to the standard C library's (glibc's in my case) result:
#include <stdio.h>
#include <math.h>

double double_sin(double a)
{
    a -= (a*a*a)/6;
    return a;
}

int main(void)
{
    double clib_sin = sin(.13),
           my_sin = double_sin(.13);

    printf("%.16f\n%.16f\n%.16f\n", clib_sin, my_sin, clib_sin - my_sin);
    return 0;
}
The accuracy for double_sin is poor (about 5-6 digits). Here's my output:
0.1296341426196949
0.1296338333333333
0.0000003092863615
As you can see, after .12963, the results differ.
Some notes:
I don't think the Taylor series will work for this specific situation; the factorials required for greater accuracy cannot be stored inside an unsigned long long.
Lookup tables are not an option, they take up too much space and generally don't provide any information on how to calculate the result.
If you use magic numbers, please explain them (although I would prefer if they were not used).
I would greatly prefer an algorithm that is easily understandable and usable as a reference over one that is not.
The result does not have to be perfectly accurate. A minimum would be the requirements of IEEE 754, C, and/or POSIX.
I'm using the IEEE-754 double format, which can be relied on.
The range supported needs to be at least from -2*M_PI to 2*M_PI. It would be nice if range reduction were included.
What is a more accurate algorithm I can use to calculate the sine of a number?
I had an idea about something similar to Newton-Raphson, but for calculating sine instead. However, I couldn't find anything on it and am ruling this possibility out.
You can actually get pretty close with the Taylor series. The trick is not to calculate the full factorial on each iteration.
The Taylor series looks like this:
sin(x) = x^1/1! - x^3/3! + x^5/5! - x^7/7! + ...
Looking at the terms, you calculate the next term by multiplying the numerator by x^2, multiplying the denominator by the next two numbers in the factorial, and switching the sign. Then you stop when adding the next term doesn't change the result.
So you could code it like this:
double double_sin(double x)
{
    double result = 0;
    double factor = x;   /* current term, starts at x^1/1! */
    int i;

    /* keep adding terms until they no longer change the result */
    for (i = 2; result + factor != result; i += 2) {
        result += factor;
        factor *= -(x*x) / (i*(i+1));   /* next term: flip the sign, advance the factorial */
    }
    return result;
}
My output:
0.1296341426196949
0.1296341426196949
-0.0000000000000000
EDIT:
The accuracy can be increased further if the terms are added in reverse order (smallest first); however, this means computing a fixed number of terms:
#define FACTORS 30

double double_sin(double x)
{
    double result = 0;
    double factor = x;
    int i, j;
    double factors[FACTORS];

    /* generate the terms in forward order... */
    for (i = 2, j = 0; j < FACTORS; i += 2, j++) {
        factors[j] = factor;
        factor *= -(x*x) / (i*(i+1));
    }
    /* ...then sum them smallest-first to reduce rounding error */
    for (j = FACTORS - 1; j >= 0; j--) {
        result += factors[j];
    }
    return result;
}
This implementation loses accuracy if x falls outside the range of 0 to 2*PI. This can be fixed by calling x = fmod(x, 2*M_PI); at the start of the function to normalize the value.
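For instance, a minimal range-reduction wrapper might look like the sketch below (my own addition, assuming the double_sin above is in scope and that M_PI is available, which the question already relies on):
#include <math.h>   /* fmod, M_PI (POSIX/common extension, not strict ISO C) */

/* Sketch: reduce the argument into (-2*PI, 2*PI) before evaluating the series. */
double double_sin_reduced(double x)
{
    x = fmod(x, 2.0 * M_PI);   /* fmod keeps the sign of x */
    return double_sin(x);
}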

How can I reduce the complexity of this code (given below)? [closed]

Problem statement:
Find the sum of all the multiples of 3 or 5 below N.
Input Format:
The first line contains T, the number of test cases. This is followed by T lines, each containing an integer N.
Constraints:
1 <= T <= 10^5
1 <= N <= 10^9
Output Format:
For each test case, print an integer that denotes the sum of all the multiples of 3 or 5 below N.
This is my code:
#include <stdio.h>

int main() {
    long t, i, x;
    scanf("%ld", &t);
    long y[t];
    for (i = 0; i < t; i++) {
        scanf("%ld", &x);
        long j, sum = 0;
        if (x <= 3)
            sum = 0;
        else if (x <= 5)
            sum = 3;
        else {
            for (j = 3; j < x; j += 3)
                sum = sum + j;
            for (j = 5; j < x; j += 5)
                if (j % 3 != 0)
                    sum = sum + j;
        }
        y[i] = sum;
    }
    for (i = 0; i < t; i++) {
        printf("%ld\n", y[i]);
    }
    return 0;
}
There is a solution with a time complexity of O(T):
Use the formula for sum of integers 1+2+3+...+n = n*(n+1)/2.
Also note that 3+6+9+...+(3*n) = 3*(1+2+3+...+n) = 3*n*(n+1)/2.
Figure out how many numbers divisible by 3 there are. Calculate their sum.
Figure out how many numbers divisible by 5 there are. Calculate their sum.
Figure out how many numbers divisible by 15 (=3*5) there are. Calculate their sum.
Total sum is sum3 + sum5 - sum15. The numbers divisible by both 3 and 5 (hence by 15) are both in sum3 and in sum5, so unless we subtract sum15 they would be counted twice.
Note that the sum will overflow a 32 bit integer, so make sure you use a 64 bit integer type.
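A minimal C sketch of that approach (my own code, not the answerer's; sum_below is a hypothetical helper) could look like this:
#include <stdio.h>

/* Sum of all positive multiples of d strictly below n:
 * d + 2d + ... + kd = d * k * (k + 1) / 2, where k = (n - 1) / d. */
static unsigned long long sum_below(unsigned long long n, unsigned long long d)
{
    unsigned long long k = (n - 1) / d;
    return d * k * (k + 1) / 2;
}

int main(void)
{
    long t;
    if (scanf("%ld", &t) != 1)
        return 1;

    while (t-- > 0) {
        unsigned long long n;
        scanf("%llu", &n);
        /* inclusion-exclusion: multiples of 3, plus multiples of 5,
         * minus multiples of 15 (otherwise counted twice) */
        printf("%llu\n", sum_below(n, 3) + sum_below(n, 5) - sum_below(n, 15));
    }
    return 0;
}
Each test case is now a handful of arithmetic operations, so the whole input is handled in O(T).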
You can achieve constant complexity if you employ the Ancient Power of Mathematics.
First figure out how many numbers divisible by three there are.
Compute their sum.
(You do not need any loops for this, you only need elementary arithmetic.)
Repeat for five and fifteen.
The total is a simple expression involving those three sums.
The details are left as an exercise for the reader.

Why doesn't this code work for a[100] and above? (Project Euler 2 in C)

I don't understand why it gives me the wrong answer above 50. Here is my code:
#include <stdio.h>

int main()
{
    long long int i, sum = 0;
    long long int a[50];
    a[0] = 1;
    a[1] = 1;
    for (i = 2; i < 50; i++)
    {
        a[i] = a[i-1] + a[i-2];
        if (a[i] % 2 == 0 && a[i] < 4000000)
            sum = sum + a[i];
    }
    printf("%lld", sum);
    return 0;
}
Your first mistake was not breaking out of the loop when a term
exceeded 4,000,000. You don’t need to consider terms beyond that for the
stated problem; you don’t need to deal with integer overflow if you stop
there; and you don’t need anywhere near 50 terms to get that far.
Nor, for that matter, do you need to store all of the terms, unless you
want to look at them to check correctness (and simply printing them
would work just as well for that).
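A sketch along those lines (my own code, not the answerer's), keeping only the last two terms and stopping at the limit:
#include <stdio.h>

/* Project Euler 2: sum of the even Fibonacci terms below 4,000,000.
 * Only the previous two terms are kept; the loop stops at the limit,
 * so overflow never comes into play. */
int main(void)
{
    long long prev = 1, curr = 1, sum = 0;

    while (curr < 4000000)
    {
        if (curr % 2 == 0)
            sum += curr;
        long long next = prev + curr;
        prev = curr;
        curr = next;
    }
    printf("%lld\n", sum);
    return 0;
}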
You have an integer overflow. Fibonacci numbers get really big. Around F(94) things get out of the range of 64 bit integers (like long long).
F(90) = 2880067194370816120 >= 2^61
F(91) = 4660046610375530309 >= 2^62
F(92) = 7540113804746346429 >= 2^62
F(93) = 12200160415121876738 >= 2^63
F(94) = 19740274219868223167 >= 2^64
F(95) = 31940434634990099905 >= 2^64
F(96) = 51680708854858323072 >= 2^65
When the overflow happens, you will get smaller, or even negative, numbers in a[] instead of the real Fibonacci numbers. You need to work around this overflow.

sse precision error with Matrix multiplication

My program does NxN matrix multiplication, where the elements of both matrices are initialized to the values (0, 1, 2, ... N) using a for loop. The matrix elements are of type float, and there is no memory-allocation problem. Matrix sizes are input as a multiple of 4, e.g. 4x4 or 8x8. The answers are verified against a sequential calculation. Everything works fine up to a matrix size of 64x64; a difference between the sequential version and the SSE version is observed only when the matrix size exceeds 64 (e.g. 68x68).
SSE snippet is as shown (size = 68):
#include <xmmintrin.h>   /* SSE intrinsics: _mm_load_ps, _mm_set1_ps, ... */

void matrix_mult_sse(int size, float *mat1_in, float *mat2_in, float *ans_out)
{
    __m128 a_line, b_line, r_line;
    int i, j, k;
    for (k = 0; k < size * size; k += size) {
        for (i = 0; i < size; i += 4) {
            j = 0;
            b_line = _mm_load_ps(&mat2_in[i]);
            a_line = _mm_set1_ps(mat1_in[j + k]);
            r_line = _mm_mul_ps(a_line, b_line);
            for (j = 1; j < size; j++) {
                b_line = _mm_load_ps(&mat2_in[j * size + i]);
                a_line = _mm_set1_ps(mat1_in[j + k]);
                r_line = _mm_add_ps(_mm_mul_ps(a_line, b_line), r_line);
            }
            _mm_store_ps(&ans_out[i + k], r_line);
        }
    }
}
With this, the answer differs at element 3673, where I get the multiplication results as follows:
scalar: 576030144.000000 & SSE: 576030208.000000
I also wrote a similar program in Java with the same initialization and setup and N = 68; for element 3673, I got the answer 576030210.000000.
Now there are three different answers and I'm not sure how to proceed. Why does this difference occur and how do we eliminate this?
I am summarizing the discussion in order to close this question as answered.
According to the article "What Every Computer Scientist Should Know About Floating-Point Arithmetic" linked in the discussion, floating-point arithmetic always involves rounding error, which is a direct consequence of the approximate nature of floating-point representation.
Arithmetic operations such as addition and subtraction introduce rounding error. Hence, roughly the 6 most significant digits of a single-precision floating-point result (irrespective of where the decimal point is situated) can be considered accurate, while the remaining digits may be erroneous (prone to rounding error).
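A small sketch of the effect (my own, not from the discussion): summing the same float values in different orders, or accumulating in double instead of float, produces slightly different rounded results, which is the same kind of effect that makes the scalar, SSE, and Java versions land on slightly different nearby values.
#include <stdio.h>

/* Demonstrates that float accumulation order changes the rounded result. */
int main(void)
{
    float forward = 0.0f, backward = 0.0f;
    double reference = 0.0;
    int i;

    for (i = 1; i <= 100000; ++i) {
        float term = 1.0f / (float)i;
        forward += term;           /* largest terms first */
        reference += (double)term; /* double accumulator as a reference */
    }
    for (i = 100000; i >= 1; --i)
        backward += 1.0f / (float)i;   /* smallest terms first */

    printf("forward  (float):  %.7f\n", forward);
    printf("backward (float):  %.7f\n", backward);
    printf("reference(double): %.7f\n", reference);
    return 0;
}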

What are other mathematical operators one can use to transform an algorithm

The difference operator (similar to the derivative operator) and the sum operator (similar to the integration operator) can be used to transform an algorithm because they are inverses.
Sum of (difference of y) = y
Difference of (sum of y) = y
An example of using them that way in a c program is below.
This c program demonstrates three approaches to making an array of squares.
The first approach is the simple obvious approach, y = x*x .
The second approach uses the equation (difference in y) = (x0 + x1)*(difference in x) .
The third approach is the reverse and uses the equation (sum of y) = x(x+1)(2x+1)/6 .
The second approach is consistently slightly faster than the first one, even though I haven't bothered optimizing it. I imagine that if I tried harder I could make it even better.
The third approach is consistently twice as slow, but this doesn't mean the basic idea is dumb. I could imagine that for some function other than y = x*x this approach might be faster. Also there is an integer overflow issue.
Trying out all these transformations was very interesting, so now I want to know what are some other pairs of mathematical operators I could use to transform the algorithm?
Here is the code:
#include <stdio.h>
#include <time.h>

#define tries 201
#define loops 100000

void printAllIn(long unsigned int array[tries]) {
    unsigned int index;
    for (index = 0; index < tries; ++index)
        printf("%lu\n", array[index]);
}

int main(int argc, const char * argv[]) {
    /*
     Goal: calculate an array of squares from 0 to 200 as fast as possible
     */
    long unsigned int obvious[tries];
    long unsigned int sum_of_differences[tries];
    long unsigned int difference_of_sums[tries];
    clock_t time_of_obvious1;
    clock_t time_of_obvious0;
    clock_t time_of_sum_of_differences1;
    clock_t time_of_sum_of_differences0;
    clock_t time_of_difference_of_sums1;
    clock_t time_of_difference_of_sums0;
    long unsigned int j;
    long unsigned int index;
    long unsigned int sum1;
    long unsigned int sum0;
    long signed int signed_index;

    time_of_obvious0 = clock();
    for (j = 0; j < loops; ++j)
        for (index = 0; index < tries; ++index)
            obvious[index] = index*index;
    time_of_obvious1 = clock();

    time_of_sum_of_differences0 = clock();
    for (j = 0; j < loops; ++j)
        for (index = 1, sum_of_differences[0] = 0; index < tries; ++index)
            sum_of_differences[index] = sum_of_differences[index-1] + 2 * index - 1;
    time_of_sum_of_differences1 = clock();

    time_of_difference_of_sums0 = clock();
    for (j = 0; j < loops; ++j)
        for (signed_index = 0, sum0 = 0; signed_index < tries; ++signed_index) {
            sum1 = signed_index*(signed_index+1)*(2*signed_index+1);
            difference_of_sums[signed_index] = (sum1 - sum0)/6;
            sum0 = sum1;
        }
    time_of_difference_of_sums1 = clock();

    // printAllIn(obvious);
    printf(
        "The obvious approach y = x*x took, %f seconds\n",
        ((double)(time_of_obvious1 - time_of_obvious0))/CLOCKS_PER_SEC
    );
    // printAllIn(sum_of_differences);
    printf(
        "The sum of differences approach y1 = y0 + 2x - 1 took, %f seconds\n",
        ((double)(time_of_sum_of_differences1 - time_of_sum_of_differences0))/CLOCKS_PER_SEC
    );
    // printAllIn(difference_of_sums);
    printf(
        "The difference of sums approach y = sum1 - sum0, sum = (x - 1)x(2(x - 1) + 1)/6 took, %f seconds\n",
        (double)(time_of_difference_of_sums1 - time_of_difference_of_sums0)/CLOCKS_PER_SEC
    );
    return 0;
}
There are two classes of optimizations here: strength reduction and peephole optimizations.
Strength reduction is the usual term for replacing "expensive" mathematical functions with cheaper functions -- say, replacing a multiplication with two logarithm table lookups, an addition, and then an inverse logarithm lookup to find the final result.
Peephole optimization is the usual term for replacing something like multiplication by a power of two with left shifts. Some CPUs have simple instructions for these operations that run faster than generic integer multiplication for the specific case of multiplying by powers of two.
You can also perform optimizations of individual algorithms. You might write a * b, but there are many different ways to perform multiplication, and different algorithms perform better or worse under different conditions. Many of these decisions are made by the chip designers, but arbitrary-precision integer libraries make their own choices based on the merits of the primitives available to them.
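As a toy illustration (my own sketch, not part of the answer): the question's second approach is itself a strength reduction, replacing index*index with an addition on the previous value, and a classic peephole case replaces multiplication by a power of two with a shift.
#include <stdio.h>

int main(void)
{
    unsigned int x = 13;

    /* peephole-style rewrite: multiplying by a power of two is a left shift */
    unsigned int times8_mul   = x * 8;
    unsigned int times8_shift = x << 3;

    /* strength reduction: build squares incrementally instead of multiplying,
     * using (i+1)^2 = i^2 + 2i + 1 */
    unsigned int square = 0, i;
    for (i = 0; i < 10; ++i) {
        printf("%u^2 = %u\n", i, square);
        square += 2 * i + 1;   /* next square without multiplying i by i */
    }

    printf("x*8 = %u, x<<3 = %u\n", times8_mul, times8_shift);
    return 0;
}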
When I tried to compile your code on Ubuntu 10.04, I got a segmentation fault right when main() started because you are declaring many megabytes worth of variables on the stack. I was able to compile it after I moved most of your variables outside of main to make them be global variables.
Then I got these results:
The obvious approach y = x*x took, 0.000000 seconds
The sum of differences approach y1 = y0 + 2x - 1 took, 0.020000 seconds
The difference of sums approach y = sum1 - sum0, sum = (x - 1)x(2(x - 1) + 1)/6 took, 0.000000 seconds
The program runs so fast it's hard to believe it really did anything. I put the "-O0" option in to disable optimizations but it's possible GCC still might have optimized out all of the computations. So I tried adding the "volatile" qualifier to your arrays but still got similar results.
That's where I stopped working on it. In conclusion, I don't really know what's going on with your code but it's quite possible that something is wrong.
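One way to make such a micro-benchmark more trustworthy (my own suggestion, not something the answer did) is to keep the arrays off the stack and to consume the results after timing, for example by folding them into a checksum that gets printed, so the compiler cannot treat the arrays as dead. A minimal sketch for the obvious approach:
#include <stdio.h>
#include <time.h>

#define TRIES 201
#define LOOPS 100000

int main(void)
{
    static long unsigned int squares[TRIES];   /* static: not on the stack */
    long unsigned int checksum = 0;
    long unsigned int i, j;
    clock_t t0, t1;

    t0 = clock();
    for (j = 0; j < LOOPS; ++j)
        for (i = 0; i < TRIES; ++i)
            squares[i] = i * i;
    t1 = clock();

    /* use the results so the computation cannot be removed entirely */
    for (i = 0; i < TRIES; ++i)
        checksum += squares[i];

    printf("y = x*x took %f seconds (checksum %lu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, checksum);
    return 0;
}
This does not prevent every transformation (the outer repetition loop is still redundant as far as the compiler can tell), but it at least guarantees the squares are actually computed and stored.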
