I am currently learning about Big O notation running times. I am trying to calculate the time complexity of this code:
int i = 1;
int n = 3; // this variable is unknown
int j;

while (i <= n)
{
    for (j = 1; j < i; j++)
        printf_s("*");
    j *= 2;
    i *= 3;
}
I think that the complexity of this code is O(log n). But even if that is correct, I can't explain why.
The time complexity is not O(log n), it is O(n).
We can calculate that in a structured way. First we examine the inner loop:
for (j = 1; j < i; j++)
    printf_s("*");
Here j iterates from 1 to i. So that means that for a given i, it will take i-1 steps.
Now we can look at the outer loop, and we can abstract away the inner loop:
while (i <= n)
{
    // ... i-1 steps ...
    j *= 2;
    i *= 3;
}
So in each iteration of the while loop, we perform i-1 steps. Furthermore, each iteration i triples, until it is larger than n. We can thus say that the number of steps of this algorithm is:
sum_{k=0}^{log3(n)} (3^k - 1)
Here we use k as an extra variable that starts at 0 and increments each time; it counts how many times we perform the body of the while loop. The loop ends when 3^k > n, hence we iterate log3(n) times, and in iteration k the inner loop will result in 3^k - 1 steps.
The above sum is equivalent to:
-log3(n) + sum_{k=0}^{log3(n)} 3^k
The above is a geometric series [wiki], which is equal to (1 - 3^(log3(n)+1))/(1 - 3), or simplified, (3*n - 1)/2.
The total number of steps is thus bounded by (3n - 1)/2 - log3(n), or formulated more simply, O(n).
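As a quick sanity check (a worked example of my own, not part of the original answer): for n = 100 we have floor(log3(100)) = 4, so i takes the values 1, 3, 9, 27, 81 and the inner loop performs 0 + 2 + 8 + 26 + 80 = 116 steps in total. The bound above gives (3*100 - 1)/2 - log3(100), roughly 145, and the experiment further down indeed reports 116 for n = 100 — linear in n.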
The body of the inner loop is going to be executed 1, 3, 9, 27, ..., 3^k times, where k = ceil(log3(n)).
Here we can use the fact that sum_{0 <= i < k} 3^i <= 3^k. One can prove it by induction.
So we can say that the inner loop executes no more than 2*3^k times, where 3^k < 3n, which is linear in n, namely O(n).
First of all, you're not really calculating the running time, but the number of time-consuming operations. Here, each call to printf_s counts as one.
Sometimes, if you're not good at maths, you can still find the answer by experimentation. The algorithm compiled with -O3 is fast enough to be tested with various n. I replaced printf_s with a simple increment of a counter that is then returned from the function, and used unsigned long long as the type. With those changes we get:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <inttypes.h>

unsigned long long alg(unsigned long long n) {
    unsigned long long rv = 0;
    unsigned long long i = 1;
    unsigned long long j;
    while (i <= n) {
        for (j = 1; j < i; j++)
            rv += 1;
        i *= 3;
    }
    return rv;
}

int main(void) {
    unsigned long long n;
    for (n = 1; n <= ULONG_MAX / 10; n *= 10) {
        unsigned long long res = alg(n);
        printf("%llu %llu %f\n", n, res, res / (double)n);
    }
}
the program runs in 0.01 seconds because GCC is clever enough to completely eliminate the inner loop. The output is
1 0 0.000000
10 10 1.000000
100 116 1.160000
1000 1086 1.086000
10000 9832 0.983200
100000 88562 0.885620
1000000 797148 0.797148
10000000 7174438 0.717444
100000000 64570064 0.645701
1000000000 581130714 0.581131
10000000000 5230176580 0.523018
100000000000 141214768216 1.412148
1000000000000 1270932914138 1.270933
10000000000000 11438396227452 1.143840
100000000000000 102945566047294 1.029456
1000000000000000 926510094425888 0.926510
10000000000000000 8338590849833250 0.833859
100000000000000000 75047317648499524 0.750473
1000000000000000000 675425858836496006 0.675426
And from that we can see that the ratio of the number of prints to n does not really converge, but it is clearly bounded by constants on both sides, thus O(n). (The oscillation is expected: the exact count depends on where n falls between consecutive powers of 3.)
I've tried this problem from Project Euler where I need to calculate the sum of all primes below two million.
This is the solution I've come up with -
#include <stdio.h>

int main() {
    long sum = 5; // Already counting 2 and 3 in my sum.
    int i = 5;    // Checking from 5
    int count = 0;
    while (i <= 2000000) {
        count = 0;
        // Checking if i (starting from 5) is divisible by odd j
        // from 3 up to i/2
        for (int j = 3; j <= i / 2; j += 2) {
            if (i % j == 0) {
                count = 1;
            }
        }
        if (count == 0) {
            sum += i;
        }
        i += 2;
    }
    printf("%ld ", sum);
}
It takes around 480 secs to run and I was wondering if there was a better solution or tips to improve my program.
________________________________________________________
Executed in 480.95 secs fish external
usr time 478.54 secs 0.23 millis 478.54 secs
sys time 1.28 secs 6.78 millis 1.28 secs
With two little modifications your code becomes orders of magnitude faster:
#include <stdio.h>
#include <math.h>

int main() {
    long long sum = 5; // we need long long; long might not be enough
                       // depending on your platform
    int i = 5;
    int count = 0;
    while (i <= 2000000) {
        count = 0;
        int limit = sqrt(i); // determine upper limit once and for all
        for (int j = 3; j <= limit; j += 2) { // use upper limit sqrt(i) instead of i/2
            if (i % j == 0) {
                count = 1;
                break; // break out of the loop as soon as the
                       // number is known not to be prime
            }
        }
        if (count == 0) {
            sum += i;
        }
        i += 2;
    }
    printf("%lld ", sum); // we need %lld for long long
}
All explanations are in the comments.
But there are certainly better and even faster ways to do this.
I ran this on my 10-year-old MacPro, and for a limit of 20 million it took around 30 seconds.
This program computes the sum for 2 million nearly instantly (even in Debug...) and needs just one second for 20 million (Windows 10, 10-year-old i7 @ 3.4 GHz, MSVC 2019).
Note: I didn't have time to set up my C compiler, which is why there is a cast on the malloc.
The "big" optimization is to store the prime numbers AND their squares, so no impossible divisor is ever tested. Since fewer than 1/10th of the numbers in a given integer interval are primes (a heuristic; robust code should check this and realloc the primes array when needed), the time is drastically cut.
#include <stdio.h>
#include <stdlib.h>

#define LIMIT 2000000ul // Computation limit.

typedef struct {
    unsigned long int p;        // Store a prime number
    unsigned long long int sq;  // and its square (long long, since p*p
                                // can overflow a 32-bit long).
} prime;

int main() {
    // Store found primes. Can quite safely use 1/10th of the whole computation limit.
    prime* primes = (prime*)malloc((LIMIT / 10) * sizeof(*primes));
    unsigned long int primes_count = 1;
    unsigned long int i = 3;
    unsigned long long int sum = 0;
    unsigned long int j = 0;
    int is_prime = 1;

    // Feed the first prime, 2.
    primes[0].p = 2;
    primes[0].sq = 4;
    sum = 2;

    // Parse all numbers up to LIMIT, ignoring even numbers.
    // Also reset the "is_prime" flag at each loop.
    for (i = 3; i <= LIMIT; i += 2, is_prime = 1) {
        // Parse all previously found primes.
        for (j = 0; j < primes_count; j++) {
            // Above sqrt(i)? Break, i is a prime.
            if (i < primes[j].sq)
                break;
            // Found a divisor? Not a prime (and break).
            if (i % primes[j].p == 0) {
                is_prime = 0;
                break;
            }
        }
        // Add the prime and its square to the array "primes".
        if (is_prime) {
            primes[primes_count].p = i;
            primes[primes_count++].sq = (unsigned long long)i * i;
            // Compute the sum on-the-fly.
            sum += i;
        }
    }
    printf("Sum of all %lu primes: %llu\n", primes_count, sum);
    free(primes);
}
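The realloc mentioned above could look like the following sketch (my own, not the answer's code; it assumes the fixed LIMIT/10 allocation is tracked in a `capacity` variable and that the helper is called before each store into primes[primes_count]):

static prime *grow_if_full(prime *primes, unsigned long count,
                           unsigned long *capacity)
{
    // Nothing to do while the heuristic still holds.
    if (count < *capacity)
        return primes;
    // Double the capacity; returns NULL on allocation failure,
    // in which case the caller must free the old array itself.
    *capacity *= 2;
    return realloc(primes, *capacity * sizeof *primes);
}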
Your program can easily be improved by stopping the inner loop earlier:
when j exceeds sqrt(i).
when a divisor has been found.
Also note that type long might not be large enough for the sum on all architectures. long long is recommended.
Here is a modified version:
#include <stdio.h>

int main() {
    long long sum = 5; // Already counting 2 and 3 in my sum.
    long i = 5;        // Checking from 5
    while (i <= 2000000) {
        int count = 0;
        // Checking only odd divisors j, starting from 3,
        // and stopping as soon as j * j > i
        for (int j = 3; j * j <= i; j += 2) {
            if (i % j == 0) {
                count = 1;
                break;
            }
        }
        if (count == 0) {
            sum += i;
        }
        i += 2;
    }
    printf("%lld\n", sum);
}
This simple change drastically reduces the runtime! It is more than 1000 times faster for 2000000:
chqrlie> time ./primesum
142913828922
real 0m0.288s
user 0m0.264s
sys 0m0.004s
Note however that trial division is much less efficient than the classic sieve of Eratosthenes.
Here is a simplistic version:
#include <stdio.h>
#include <stdlib.h>

int main() {
    long max = 2000000;
    long long sum = 0;
    // Allocate an array of indicators initialized to 0
    unsigned char *composite = calloc(1, max + 1);
    // For all numbers up to sqrt(max)
    for (long i = 2; i * i <= max; i++) {
        // If the number is a prime
        if (composite[i] == 0) {
            // Set all multiples as composite. Multiples below the
            // square of i are skipped because they have already been
            // set as multiples of a smaller prime.
            for (long j = i * i; j <= max; j += i) {
                composite[j] = 1;
            }
        }
    }
    for (long i = 2; i <= max; i++) {
        if (composite[i] == 0)
            sum += i;
    }
    printf("%lld\n", sum);
    free(composite);
    return 0;
}
This code is another 20 times faster for 2000000:
chqrlie> time ./primesum-sieve
142913828922
real 0m0.014s
user 0m0.007s
sys 0m0.002s
The sieve approach can be further improved in many ways for larger boundaries.
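One common improvement, sketched below under my own assumptions (this is not chqrlie's code): sieve odd numbers only, which halves both memory and work. Entry i of the array stands for the odd number 2*i + 3, and 2 is added to the sum separately.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long max = 2000000;
    long half = (max - 1) / 2;  // number of odd values in 3..max
    unsigned char *composite = calloc(half, 1);
    if (!composite)
        return 1;
    long long sum = 2;          // 2 is the only even prime
    for (long i = 0; i < half; i++) {
        long p = 2 * i + 3;     // the odd number this slot represents
        if (p * p > max)
            break;
        if (!composite[i]) {
            // Start at p*p; the index of an odd number n is (n - 3) / 2,
            // and consecutive odd multiples of p are 2*p apart (index step p).
            for (long j = (p * p - 3) / 2; j < half; j += p)
                composite[j] = 1;
        }
    }
    for (long i = 0; i < half; i++)
        if (!composite[i])
            sum += 2 * i + 3;
    printf("%lld\n", sum);
    free(composite);
    return 0;
}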
Here is code to raise a number to a given power:
#include <stdio.h>

int foo(int m, int k) {
    if (k == 0) {
        return 1;
    } else if (k % 2 != 0) {
        return m * foo(m, k - 1);
    } else {
        int p = foo(m, k / 2);
        return p * p;
    }
}

int main() {
    int m, k;
    while (scanf("%d %d", &m, &k) == 2) {
        printf("%d\n", foo(m, k));
    }
    return 0;
}
How do I calculate the time complexity of the function foo?
I have been able to deduce that if k is a power of 2, the time complexity is O(log k).
But I am finding it difficult to calculate for other values of k. Any help would be much appreciated.
How do I calculate the time complexity of the function foo()?
I have been able to deduce that if k is a power of 2, the time complexity is O(log k).
First, I assume that the time needed for each function call is constant (this would for example not be the case if the time needed for a multiplication depends on the numbers being multiplied - which is the case on some computers).
We also assume that k >= 1 (for negative k, the function would recurse endlessly; for k = 0 it returns immediately).
Let's think the value k as a binary number:
If the rightmost bit is 0 (k%2!=0 is false), the number is shifted right by one bit (foo(m,k/2)) and the function is called recursively.
If the rightmost bit is 1 (k%2!=0 is true), the bit is changed to a 0 (foo(m,k-1)) and the function is called recursively. (We don't look at the case k=1, yet.)
This means that the function is called once for each bit and it is called once for each 1 bit. Or, in other words: It is called once for each 0 bit in the number and twice for each 1 bit.
If N is the number of function calls, n1 is the number of 1 bits and n0 is the number of 0 bits, we get the following formula:
N = n0 + 2*n1 + C
The constant C (C=(-1), if I didn't make a mistake) represents the case k=1 that we ignored up to now.
This means:
N = (n0 + n1) + n1 + C
And - because n0 + n1 = floor(log2(k)) + 1:
floor(log2(k)) + C <= N <= 2*floor(log2(k)) + C
As you can see, the time complexity is always O(log k).
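A quick empirical check of the call-count formula (a sketch of my own; __builtin_clz and __builtin_popcount are GCC/Clang builtins). Counting the final k == 0 call as well, the measured total comes out as exactly n0 + 2*n1, i.e. C = 0 under this counting convention:

#include <stdio.h>

static int calls;

static int foo(int m, int k) {
    calls++;
    if (k == 0) return 1;
    if (k % 2 != 0) return m * foo(m, k - 1);
    int p = foo(m, k / 2);
    return p * p;
}

int main(void) {
    for (int k = 1; k <= 32; k++) {
        calls = 0;
        foo(1, k);
        int bits = 32 - __builtin_clz(k); // n0 + n1: total bits in k
        int ones = __builtin_popcount(k); // n1: one bits in k
        printf("k=%2d calls=%2d n0+2*n1=%2d\n", k, calls, bits + ones);
    }
    return 0;
}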
Here is a modified version of the code that outputs statistics for a spreadsheet plot.
#include <stdio.h>
#include <math.h>

#ifndef TEST_NUM
#define TEST_NUM (100)
#endif

static size_t iter_count;

int foo(int m, int k) {
    iter_count++;
    if (k == 0) {
        return 1;
    } else if (k == 1) {
        return m;
    } else if (k % 2 != 0) {
        return m * foo(m, k - 1);
    } else {
        int p = foo(m, k / 2);
        return p * p;
    }
}

int main() {
    for (int i = 1; i < TEST_NUM; ++i) {
        iter_count = 0;
        foo(1, i);
        printf("%d, %zu, %f\n", i, iter_count, log2(i));
    }
    return 0;
}
Build and run it:
gcc t1.c -DTEST_NUM=10000 -lm
./a.out > output.csv
Now open the output file with a spreadsheet program and plot the last two output columns.
For positive k, the function foo calls itself recursively p times if k is the p-th power of 2. If k is not a power of 2, the number of recursive calls is strictly less than 2 * p, where p is the exponent of the largest power of 2 less than k.
Here is a demonstration:
let's expand the recursive call in the case k % 2 != 0:
int foo(int m, int k) {
    if (k == 1) {
        return m;
    } else if (k % 2 != 0) { /* 2 recursive calls in the original */
        // return m * foo(m, k - 1);
        int p = foo(m, k / 2);
        return m * p * p;
    } else {                 /* 1 recursive call */
        int p = foo(m, k / 2);
        return p * p;
    }
}
The total number of calls is floor(log2(k)) + bitcount(k), and bitcount(k) is by construction <= ceil(log2(k)).
There are no loops in the code and the time of each individual call is bounded by a constant, hence the overall time complexity of O(log k).
The number of times the function is called (recursively or not) per power call is proportional to the number of bits required to represent the exponent in binary.
Each time the function is entered, it reduces the problem by subtracting one from the exponent if it is odd, or by halving it if it is even. This means it does one squaring per significant bit of the exponent, plus one extra multiplication by the base for each bit that is 1 (of which there are at most as many as there are bits). So for an exponent with 32 significant bits (i.e. between 2^31 and 2^32), the routine does between 32 and 64 products and re-enters itself at most 64 times.
The recursion can be transformed away, so the code you posted can be replaced with iterative code in which a while loop solves the problem:
int foo(int m, int k)
{
    int prod = 1; /* result accumulator: foo(m, 0) == 1 */
    int sq = m;   /* repeated squares of the base */
    while (k) {
        if (k & 1) {
            prod *= sq; /* this bit of the exponent is set */
        }
        k >>= 1;
        sq *= sq;
    }
    return prod; /* return final product */
}
That's a huge saving! (Between 32 and 64 multiplications to raise something to the 1,000,000,000th power.)
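A quick sanity check of the iterative version (my own snippet; it assumes the foo() above is in scope):

#include <stdio.h>

/* int foo(int m, int k) as defined above */

int main(void) {
    printf("%d\n", foo(3, 10)); // 59049 = 3^10
    printf("%d\n", foo(5, 0));  // 1 (empty product)
    return 0;
}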
I am making a library management system in C for practice. Now, in studentEntry I need to generate a long int studentID in which every digit is non-zero. So I am using this function:
long int generateStudentID(){
    srand(time(NULL));
    long int n = 0;
    do {
        n = rand() % 10;
    } while (n == 0);
    int i;
    for (i = 1; i < 10; i++) {
        n *= 10;
        n += rand() % 10;
    }
    if (n < 0)
        n = n * (-1); // StudentID will be positive
    return n;
}
output
Name : khushit
phone No. : 987546321
active : 1
login : 0
StudentID : 2038393052
Wanted to add another student?(y/n)
I want to remove all zeros from it. Moreover, every run of the program produces the same sequence of random numbers as past runs, e.g.:
program run 1
StudentID : 2038393052
StudentID : 3436731238
program run 2
StudentID : 2038393052
StudentID : 3436731238
What do I need to do to fix these problems?
You can either do as gchen suggested and run a small loop that continues until the result is not zero (just like you did for the first digit) or accept a small bias and use rand() % 9 + 1.
The problem with the similar sequences is caused by the coarse resolution of time(): if you run the function again too soon after the first call, you get the same seed. You might read this description, as proposed by user3386109 in the comments.
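A minimal sketch of both fixes combined (my own code, not the poster's): seed once in main() and never inside generateStudentID(), and accept the small bias of rand() % 9 + 1. Nine digits are used so the result always fits a 32-bit long:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Nine non-zero digits; rand() % 9 + 1 is always in 1..9. */
static long generateStudentID(void) {
    long n = 0;
    for (int i = 0; i < 9; i++)
        n = n * 10 + rand() % 9 + 1;
    return n;
}

int main(void) {
    srand((unsigned)time(NULL)); // seed exactly once per program run
    printf("StudentID : %ld\n", generateStudentID());
    printf("StudentID : %ld\n", generateStudentID());
    return 0;
}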
A nine-digit student ID with no zeros in the number can be generated by:
long generateStudentID(void)
{
    long n = 0;
    for (int i = 0; i < 9; i++)
        n = n * 10 + (rand() % 9) + 1;
    return n;
}
This generates a random digit between 1 and 9 by generating a digit between 0 and 8 with (rand() % 9) and adding 1. There's no need for loops to avoid zeros.
Note that this does not call srand() — you should only call srand() once in a given program (under normal circumstances). Since a long must be at least 32 bits and a 9-digit number only requires 30 bits, there cannot be overflow to worry about.
It's possible to argue that the result is slightly biassed in favour of smaller digits. You could use a function call to eliminate that bias:
int unbiassed_random_int(int max)
{
    int limit = RAND_MAX - RAND_MAX % max;
    int value;

    while ((value = rand()) >= limit)
        ;
    return value % max;
}
If RAND_MAX is 32767 and max is 9, RAND_MAX % 9 is 7. If you don't ignore the values from 32760 upwards, you are more likely to get a digit in the range 0..7 than an 8: there are 3641 ways to get each of 0..7 and only 3640 ways to get 8. The difference is not large, and it is smaller when RAND_MAX is bigger. For the purposes at hand, such refinement is not necessary.
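For completeness, a hypothetical variant of the digit loop using that helper (my own combination of the two pieces above, not part of the answer):

/* Same nine-digit generation as above, but with the unbiassed helper. */
long generateStudentID_unbiassed(void)
{
    long n = 0;
    for (int i = 0; i < 9; i++)
        n = n * 10 + unbiassed_random_int(9) + 1;
    return n;
}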
Slightly modifying the order of operations in your original function does the trick: instead of removing 0s, just do not add 0s.
long int generateStudentID(){
    srand(time(NULL)); // better: seed once in main() instead
    long int n = 0;
    for (int i = 0; i < 10; i++) {
        long int m = 0;
        do {
            m = rand() % 10;
        } while (m == 0);
        n *= 10;
        n += m;
    }
    // No sign fix-up needed as long as n cannot overflow;
    // note that 10 digits requires a 64-bit long.
    // if (n < 0)
    //     n = n * (-1); // StudentID will be positive
    return n;
}
Problem: Finding the first n prime numbers.
#include <stdio.h>
#include <stdlib.h>

void firstnprimes(int *a, int n){
    if (n < 1){
        printf("INVALID");
        return;
    }
    int i = 0, j, k;            // i is the primes counter
    for (j = 2; i != n; j++){   // j is a candidate number
        for (k = 0; k < i; k++){
            if (j % a[k] == 0)  // a[k] is the k-th prime
                break;
        }
        if (k == i)             // end-of-loop was reached
            a[i++] = j;         // record the i-th prime, j
    }
    return;
}

int main(){
    int n;
    scanf_s("%d", &n);
    int *a = (int *)malloc(n * sizeof(int));
    firstnprimes(a, n);
    for (int i = 0; i < n; i++)
        printf("%d\n", a[i]);
    system("pause");
    return 0;
}
My function's inner loop runs at most i times, where i is the number of primes below the given candidate number, and the outer loop runs (nth prime number - 2) times.
How can I derive the complexity of this algorithm in Big O notation?
Thanks in advance.
In pseudocode your code is
firstnprimes(n) = a[:n]        # array a's first n entries
  where
  i = 0
  a = [j for j in [2..]
         if is_empty( [j for p in a[:i] if (j % p == 0)] )
            && (++i) ]
(assuming the short-circuiting is_empty which returns false as soon as the list is discovered to be non-empty).
What it does is testing each candidate number from 2 and up by all its preceding primes.
Melissa O'Neill analyzes this algorithm in her widely known JFP article and derives its complexity as O(n^2).
Basically, each of the n primes that are produced is paired up with (is tested by) all the primes preceding it (i.e. k-1 primes, for the k-th prime), and the sum of the arithmetic progression 0...(n-1) is (n-1)n/2, which is O(n^2); she shows that composites do not contribute any term more significant than that to the overall sum, as there are O(n log n) composites on the way to the n-th prime but the is_empty calculation fails early for them.
Here's how it goes: with m = n log n, there will be m/2 evens, for each of which the is_empty calculation takes just 1 step; m/3 multiples of 3 with 2 steps; m/5 with 3 steps; etc.
So the total contribution of the composites, overestimated by not dealing with the multiplicities (basically, counting 15 twice, as a multiple of both 3 and 5, etc.), is:
SUM{i = 1, ..., n} (i m / p_i) // p_i is the i-th prime
= m SUM{i = 1, ..., n} (i / p_i)
= n log(n) SUM{i = 1, ..., n} (i / p_i)
< n log(n) (n / log(n)) // for n > 14,000
= n^2
The inequality can be tested at Wolfram Alpha cloud sandbox as Sum[ i/Prime[i], {i, 14000}] Log[14000.0] / 14000.0 (which is 0.99921, and diminishing for bigger n, tested up to n = 2,000,000 where it's 0.963554).
The prime number theorem states that asymptotically, the number of primes below n equals n/log n. Therefore your inner loop will run Theta(i * max) = Theta(n/log n * n) times (assuming max = n).
Also, your outer loop runs on the order of n log n times, making the total complexity Theta((n/log n) * n * (n log n)) = Theta(n^3). In other words, this is not the most efficient algorithm.
Note that there are better approximations around; e.g. the n-th prime number is closer to:
n log n + n log log n - n + n log log n / log n + ...
But, since you are concerned with just big O, this approximation is good enough.
Also, there are much better algorithms for doing what you're looking to do. Look up the topic of pseudoprimes, for more information.
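For comparison, here is a hedged sketch of my own (not from the answer) of a sieve-based firstnprimes() that uses Rosser's upper bound p_n < n(ln n + ln ln n), valid for n >= 6, to size the sieve:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Sketch: fill a[0..n-1] with the first n primes using a sieve. */
void firstnprimes_sieve(int *a, int n) {
    if (n < 1)
        return;
    // Upper bound for the n-th prime: n*(ln n + ln ln n) for n >= 6;
    // for smaller n, 13 (the 6th prime) is enough.
    size_t limit = n < 6 ? 13 : (size_t)(n * (log(n) + log(log(n)))) + 1;
    unsigned char *composite = calloc(limit + 1, 1);
    if (!composite)
        return;
    int count = 0;
    for (size_t i = 2; i <= limit && count < n; i++) {
        if (!composite[i]) {
            a[count++] = (int)i;
            for (size_t j = i * i; j <= limit; j += i)
                composite[j] = 1;
        }
    }
    free(composite);
}

int main(void) {
    int a[10];
    firstnprimes_sieve(a, 10);
    for (int i = 0; i < 10; i++)
        printf("%d\n", a[i]); // 2 3 5 7 11 13 17 19 23 29
    return 0;
}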
I'm struggling to find the time complexity of this function:
void foo(int n) {
    int i, m = 1;
    for (i = 0; i < n; i++) {
        m *= n; // (m = n^n) ??
    }
    while (m > 1) {
        m /= 3;
    }
}
Well, the first for loop is clearly O(n^n); the explanation is that m starts with the value 1 and is multiplied by n, n times.
Now, we start the while loop with m = n^n and divide m by 3 on every iteration,
which means, (I guess), log(n^n) iterations.
Assuming I got it right up till now, I'm not sure whether I need to sum or multiply the two, but my logic says sum, because the loops run one after the other.
So my assumption is: O(n^n) + O(log(n^n)) = O(n^n), because if n is quite big, we can neglect the O(log(n^n)) term.
Well, I really made many assumptions here, and I hope that makes sense. I'd love to hear your opinions about the time complexity of this function.
Theoretically, time complexity is O(n log n) because:
for (i = 0; i < n; i++)
    m *= n;
this will be executed n times and in the end m=n^n
Then this
while (m > 1)
    m /= 3;
will be executed log3(n^n) times, which is n * log3(n).
P.S. But this is only true if you count the number of operations. In real life it takes much longer to calculate n^n because the numbers become huge. Also, your function will overflow when multiplying such big numbers, and most probably you will be bounded by the maximum value of int (in which case the complexity will be O(n)).
With foo(int n) and 32-bit int, n cannot exceed about 10, else m *= n overflows.
Given such a small range over which n works, the O() seems moot. Even with 64-bit unsigned m, n <= 15.
So I suppose O(n lg(n)) is technically correct, but given the constraints of int, I suspect the code takes more time to do a single printf() than to iterate through foo(10). In other words, it is practically O(1).
#include <limits.h>
#include <stdlib.h>

unsigned long long foo(int n) {
    unsigned long long cnt = 0;
    int i;
    unsigned long long m = 1;
    for (i = 0; i < n; i++) {
        if (m >= ULLONG_MAX / n) exit(1); // guard: m *= n would overflow
        m *= n; // (m = n^n) ??
        cnt++;
    }
    while (m > 1) {
        m /= 3;
        cnt++;
    }
    return cnt;
}
And came up with the following (n, followed by the total operation count):
1 1
2 3
3 6
4 9
5 12
6 16
7 19
8 23
9 27
10 31
11 35
12 39
13 43
14 47
15 52