Time complexity of a simple algorithm in C

Hello, and sorry for my bad English.
I'm still trying to estimate the complexity of the following algorithm:
#include <stdio.h>

int main(void) {
    int f = 1, n, x, licznik = 0;  /* licznik counts multiplications */
    printf("Value of n: ");
    scanf("%d", &n);
    printf("Value of x: ");
    scanf("%d", &x);
    while (n > 0) {
        if (n % 2 == 0) {   /* n even: square x, halve n */
            x = x * x;
            n = n / 2;
            licznik++;
        } else {            /* n odd: multiply x into f, decrement n */
            f = f * x;
            n = n - 1;
            licznik++;
        }
    }
    return 0;
}
My observation (n vs. licznik):

n = 0        licznik = 0
n = 10       licznik = 5
n = 100      licznik = 9
n = 1000     licznik = 15
n = 10000    licznik = 18
n = 1000000  licznik = 26
So licznik keeps growing, but very slowly, which looks like a "log n" function.
Is it a good answer that the time complexity of this algorithm is O(log n)? And what are the best and worst cases? Thanks for your help.
PS: "licznik" is Polish for "counter"; it holds the number of multiplications.

If n is odd you always decrement it by one, making it even and guaranteeing that it will be halved in the next iteration.
In the worst case you get about 2*log2(n) iterations, which is O(log n) complexity.
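
In fact the count is exact: each halving step consumes one binary digit of n and each decrement step clears one 1-bit, so licznik = floor(log2(n)) + popcount(n). That matches the table above (n = 1000000 gives 19 + 7 = 26), and it answers the best/worst question: the worst case is n = 2^k - 1 (all bits set, about 2*log2(n) multiplications), the best case for a given magnitude is n = 2^k (log2(n) + 1 multiplications). A minimal sketch that checks the formula (the popcount helper and the test harness are mine, not from the question):

#include <stdio.h>

/* Count the 1-bits of v (helper added for this sketch). */
static int popcount_u(unsigned v) {
    int c = 0;
    for (; v != 0; v >>= 1)
        c += v & 1;
    return c;
}

/* Replay only the control flow of the loop, counting multiplications. */
static int licznik_for(unsigned n) {
    int licznik = 0;
    while (n > 0) {
        if (n % 2 == 0) n /= 2;  /* squaring step */
        else            n -= 1;  /* multiply step */
        licznik++;
    }
    return licznik;
}

int main(void) {
    unsigned tests[] = {10, 100, 1000, 10000, 1000000};
    for (int i = 0; i < 5; i++) {
        unsigned n = tests[i], t;
        int bits = 0;
        for (t = n; t > 1; t >>= 1) bits++;  /* bits = floor(log2 n) */
        printf("n = %7u  licznik = %2d  floor(log2 n) + popcount = %2d\n",
               n, licznik_for(n), bits + popcount_u(n));
    }
    return 0;
}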

OK, let's take a wider look.
In your algorithm the value is halved in one step and decremented by 1 in the next. Since the decrement is almost negligible compared to the division (for very large values), we can say the value is (approximately) divided in half once every 2 iterations. The time complexity then becomes O(2*log n), which is really O(log n).


The time complexity answer to the question confuses me - n^(2/3)

I'm trying to figure out why the time complexity of this code is n^(2/3). The space complexity is log n, but I don't know how to continue the time complexity calculation (or whether it's right).

int g2 (int n, int m)
{
    if (m >= n)
    {
        for (int i = 0; i < n; ++i)
            printf("#");
        return 1;
    }
    return 1 + g2 (n / 2, 4 * m);
}

int main (int n)
{
    return g2 (n, 1);
}

As long as m < n, you perform an O(1) operation: making a recursive call. You halve n and quadruple m, so after k steps, you get
n(k) = n(0) * 0.5^k
m(k) = m(0) * 4^k
You can set them equal to each other to find that
n(0) / m(0) = 8^k
Taking the log
log(n(0)) - log(m(0)) = k log(8)
or
k = log_8(n(0)) - log_8(m(0))
On the kth recursion you perform n(k) loop iterations.
You can plug k back into n(k) = n(0) * 0.5^k to estimate the number of iterations. Let's ignore m(0) for now:
n(k) = n(0) * 0.5^log_8(n(0))
Taking the log of both sides again,
log_8(n(k)) = log_8(n(0)) + log_8(0.5) * log_8(n(0))
Since log_8(0.5) = -1/3, you get
log_8(n(k)) = log_8(n(0)) * (2/3)
Taking the exponent again:
n(k) = n(0)^(2/3)
Since any positive exponent will overwhelm the O(log(n)) recursion, your final complexity is indeed O(n^(2/3)).
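As a quick numeric check: for n(0) = 4096 and m(0) = 1, we get k = log_8(4096) = 4, and the final loop runs n(k) = 4096 * 0.5^4 = 256 = 4096^(2/3) times.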
Let's look for a moment at what happens if m(0) > 1.
n(k) = n(0) * 0.5^(log_8(n(0)) - log_8(m(0)))
Again taking the log:
log_8(n(k)) = log_8(n(0)) - 1/3 * (log_8(n(0)) - log_8(m(0)))
log_8(n(k)) = log_8(n(0)^(2/3)) + log_8(m(0)^(1/3))
So you get
n(k) = n(0)^(2/3) * m(0)^(1/3)
Or
n(k) = (m n^2)^(1/3)
Quick note on corner cases in the starting conditions:
For m > 0:
If n <= 0, then m >= n is immediately true; the recursion terminates and the loop runs zero times.
For m < 0:
If n <= m, the recursion terminates immediately and there is no loop. If n > m, n will converge to zero while m diverges to minus infinity, so m >= n never becomes true and the algorithm runs forever.
The only interesting case is m == 0, which stays 0 since 4*0 = 0. If n <= 0 the recursion again terminates at once; for n > 0, repeated halving drives n to zero by integer truncation, so the runtime depends on how many halvings that takes:
n(0) * 0.5^k = 1
log_2(n(0)) - k = 0
So in this case, the runtime of the recursion is still O(log(n)). The loop does not run.

m starts at 1, and at each step n -> n/2 and m -> m*4 until m > n. After k steps, n_final = n/2^k and m_final = 4^k. So the final value of k is where n/2^k = 4^k, i.e. k = log_8(n).
When this is reached, the inner loop performs n_final (approximately equal to m_final) steps, leading to a complexity of O(4^k) = O(4^(log_8 n)) = O(4^(log_4 n / log_4 8)) = O(n^(1 / log_4 8)) = O(n^(2/3)).
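
To see the exponent empirically, here is a sketch (my instrumentation, not from the question) that replaces the printf("#") with a counter and uses 64-bit integers so larger n can be tried:

#include <stdio.h>
#include <math.h>

static long long iters;  /* iterations of the final loop */

/* Same structure as g2 in the question, counting instead of printing. */
static int g2_count(long long n, long long m) {
    if (m >= n) {
        for (long long i = 0; i < n; ++i)
            iters++;
        return 1;
    }
    return 1 + g2_count(n / 2, 4 * m);
}

int main(void) {
    for (long long n = 1000; n <= 1000000000; n *= 10) {
        iters = 0;
        g2_count(n, 1);
        printf("n = %10lld  iterations = %9lld  n^(2/3) ~ %9.0f\n",
               n, iters, pow((double)n, 2.0 / 3.0));
    }
    return 0;
}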

What is the fastest way to determine whether n^m = m^n when m != n?

It's one of the questions from the practice section on HackerEarth. We need to determine whether m^n = n^m. It is trivial when n = m, so we focus on m != n.
1 <= m, n <= 10^10000
What I have tried is
n^m = m^n implies
m*log(n) = n*log(m)
But the problem is how to take the log of such a huge number. Is there an alternative way of checking the equality?

Let's do some math to determine how to program for solutions.
It is easy to see that neither n nor m can equal 1: they must be distinct, and if, say, n = 1, then n^m = 1 while m^n = m > 1. So let's look for solutions where 1 < n < m.
As you correctly stated, n^m = m^n implies m*log(n) = n*log(m),
hence we are looking for solutions to m / log(m) = n / log(n) with 1 < n < m.
We can study the function f(x) = x / log(x) for x > 1:
Its derivative is f'(x) = (log(x) - 1) / log(x)^2.
The derivative has a single zero at x = e: f(x) is strictly decreasing between x = 1 and x = e, and strictly increasing from x = e all the way to infinity.
Any two distinct numbers n and m such that f(n) = f(m) and n < m must therefore satisfy 1 < n < e and m > e.
The only integer in the interval (1, e) is n = 2, and it happens to give a solution: f(2) = 2/log(2) = 4/log(4) = f(4), so m = 4.
Hence the only solutions are n = 2, m = 4 and n = 4, m = 2.
Here is a C program to output all possible results:

#include <stdio.h>

int main() {
    printf("2 4\n");
    printf("4 2\n");
    return 0;
}
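
As a sanity check of the calculus argument (my addition, not part of the original answer), a brute-force scan over small values, assuming double precision is accurate enough at this scale:

#include <stdio.h>
#include <math.h>

/* Print every pair 1 <= n, m <= 50 with n != m and n^m = m^n,
   compared via m*log(n) = n*log(m) with a small tolerance. */
int main(void) {
    for (int n = 1; n <= 50; n++)
        for (int m = 1; m <= 50; m++)
            if (n != m && fabs(m * log(n) - n * log(m)) < 1e-9)
                printf("%d %d\n", n, m);  /* prints only "2 4" and "4 2" */
    return 0;
}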

Time complexity finding n primes with trial division by all preceding primes

Problem : Finding n prime numbers.

#include <stdio.h>
#include <stdlib.h>

void firstnprimes(int *a, int n) {
    if (n < 1) {
        printf("INVALID");
        return;
    }
    int i = 0, j, k;             // i is the primes counter
    for (j = 2; i != n; j++) {   // j is a candidate number
        for (k = 0; k < i; k++) {
            if (j % a[k] == 0)   // a[k] is the k-th prime
                break;
        }
        if (k == i)              // end of loop was reached: no prime divides j
            a[i++] = j;          // record the i-th prime, j
    }
}
int main() {
    int n;
    scanf_s("%d", &n);           // MSVC-specific variant of scanf
    int *a = (int *)malloc(n * sizeof(int));
    firstnprimes(a, n);
    for (int i = 0; i < n; i++)
        printf("%d\n", a[i]);
    system("pause");
    return 0;
}

My function's inner loop runs at most i times, where i is the number of primes found so far (i.e. the number of primes below the candidate), and the outer loop runs roughly p_n times, where p_n is the n-th prime.
How can I derive the complexity of this algorithm in Big O notation?
Thanks in advance.

In pseudocode your code is

firstnprimes(n) = a[:n]          # array a's first n entries
  where
  i = 0
  a = [j for j in [2..]
         if is_empty( [j for p in a[:i] if (j%p == 0)] )
            && (++i) ]

(assuming a short-circuiting is_empty, which returns false as soon as the list is discovered to be non-empty).
What it does is test each candidate number, from 2 upward, against all of its preceding primes.
Melissa O'Neill analyzes this algorithm in her widely known JFP article "The Genuine Sieve of Eratosthenes" and derives its complexity as O(n^2).
Basically, each of the n primes produced is paired up with (is tested by) all the primes preceding it, i.e. k-1 primes for the k-th prime, and the sum of the arithmetic progression 0 + 1 + ... + (n-1) is (n-1)n/2, which is O(n^2). She also shows that the composites contribute no term more significant than that to the overall sum: there are O(n log n) composites on the way to the n-th prime, but the is_empty calculation fails early for them.
Here's how it goes: with m = n log n candidates, there will be m/2 evens, for each of which the is_empty calculation takes just 1 step; m/3 multiples of 3, with 2 steps; m/5 multiples of 5, with 3 steps; etc.
So the total contribution of the composites, overestimated by not dealing with the multiplicities (basically, counting 15 twice, as a multiple of both 3 and 5, etc.), is:

SUM{i = 1..n} (i * m / p_i)          // p_i is the i-th prime
  = m * SUM{i = 1..n} (i / p_i)
  = n log(n) * SUM{i = 1..n} (i / p_i)
  < n log(n) * (n / log(n))          // for n > 14,000
  = n^2
The inequality can be tested at the Wolfram Alpha cloud sandbox as Sum[ i/Prime[i], {i, 14000}] Log[14000.0] / 14000.0 (which is 0.99921, and diminishing for bigger n; tested up to n = 2,000,000, where it is 0.963554).
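
To watch that growth directly, here is an instrumented version of the function (my sketch, not part of the answer) that counts the j % a[k] divisibility tests for a few sizes:

#include <stdio.h>
#include <stdlib.h>

/* firstnprimes from the question, instrumented to count the
   j % a[k] divisibility tests. */
static long long firstnprimes_tests(int *a, int n) {
    long long tests = 0;
    int i = 0, j, k;
    for (j = 2; i != n; j++) {
        for (k = 0; k < i; k++) {
            tests++;
            if (j % a[k] == 0)
                break;
        }
        if (k == i)
            a[i++] = j;
    }
    return tests;
}

int main(void) {
    for (int n = 1000; n <= 16000; n *= 2) {
        int *a = malloc(n * sizeof *a);
        if (a == NULL) return 1;
        long long t = firstnprimes_tests(a, n);
        printf("n = %6d  tests = %12lld  tests/n^2 = %.3f\n",
               n, t, (double)t / ((double)n * n));
        free(a);
    }
    return 0;
}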

The prime number theorem states that, asymptotically, the number of primes less than n equals n/log n. Therefore your inner loop will run Theta(i * max) = (n / log n) * n times (assuming max = n).
Also, your outer loop runs on the order of n log n times, making the total complexity Theta((n / log n) * n * n log n) = Theta(n^3). In other words, this is not the most efficient algorithm.
Note that there are better approximations around; e.g., the n-th prime number is closer to:
n log n + n log log n - n + n log log n / log n + ...
But since you are concerned with just big O, this approximation is good enough.
Also, there are much better algorithms for doing what you're looking to do. Look up the topic of pseudoprimes, for more information.

Find the time complexity of the function "foo"

I'm struggling to find the time complexity of this function:

void foo(int n) {
    int i, m = 1;
    for (i = 0; i < n; i++) {
        m *= n; // (m = n^n) ??
    }
    while (m > 1) {
        m /= 3;
    }
}

Well, the first for loop is clearly O(n^n): m starts with the value 1 and is multiplied by n, n times.
Now, we start the while loop with m = n^n and divide it by 3 each time, which means, I guess, log(n^n) iterations.
Assuming I've got it right up to now, I'm not sure whether I need to sum or multiply, but my logic says I should sum, because the two loops run one after the other rather than nested.
So my assumption is: O(n^n) + O(log(n^n)) = O(n^n), because if n is quite big we can simply drop the O(log(n^n)) term.
Well, I really made many assumptions here, and I hope they make sense. I'd love to hear your opinions about the time complexity of this function.

Theoretically, the time complexity is O(n log n), because:

for (i = 0; i < n; i++)
    m *= n;

will be executed n times, and at the end m = n^n. Then this:

while (m > 1)
    m /= 3;

will be executed log_3(n^n) times, which is n * log_3(n).
P.S. But this only holds if you count the number of operations. In real life it takes much more time to calculate n^n, because the numbers become too big. Also, your function will overflow when multiplying such big numbers; most probably you will be bounded by the maximum value of int (in which case the complexity will be O(n)).

With foo(int n) and a 32-bit int, n cannot exceed a magnitude of about 10, or m *= n overflows.
Given such a small range in which n works, the O() seems moot. Even with a 64-bit unsigned m, n <= 15.
So I suppose O(n lg(n)) is technically correct, but given the constraints of int, I suspect the code spent more time doing a single printf() than iterating through foo(10). In other words, it is practically O(1). I modified foo() to count iterations and guard against overflow:
#include <limits.h>
#include <stdlib.h>

unsigned long long foo(int n) {
    unsigned long long cnt = 0;
    unsigned long long m = 1;
    int i;
    for (i = 0; i < n; i++) {
        if (m >= ULLONG_MAX / n) exit(1);  // bail out before m *= n overflows
        m *= n;
        cnt++;
    }
    while (m > 1) {
        m /= 3;
        cnt++;
    }
    return cnt;
}
And came up with:

 n  cnt
 1    1
 2    3
 3    6
 4    9
 5   12
 6   16
 7   19
 8   23
 9   27
10   31
11   35
12   39
13   43
14   47
15   52
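
A quick cross-check against the first answer's formula: the counts track n + log_3(n^n) = n + n*log_3(n) closely. For n = 10 that gives 10 + 10*2.096 ≈ 31, and for n = 15 it gives 15 + 15*2.465 ≈ 52, matching the table.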

Fastest way to calculate all the even squares from 1 to n?

I did this in C:

#include <stdio.h>

int main (void)
{
    int n, i;
    scanf("%d", &n);
    for (i = 2; i <= n; i = i + 2)
    {
        if ((i*i) % 2 == 0 && (i*i) <= n)
            printf("%d \n", (i*i));
    }
    return 0;
}
What would be a better/faster approach to tackle this problem?
Let me illustrate not only a fast solution, but also how to derive it. Start with a fast way of listing all squares and work from there (pseudocode):

max = n*n
i = 1
d = 3
while i < max:
    print i
    i += d
    d += 2
So, starting from 4 and listing only even squares:

max = n*n
i = 4
d = 5
while i < max:
    print i
    i += d
    d += 2
    i += d
    d += 2
Now we can shorten that mess at the end of the while loop:

max = n*n
i = 4
d = 5
while i < max:
    print i
    i += 2 + 2*d
    d += 4
Note that we are constantly using 2*d, so it's better to just keep that doubled value instead:

max = n*n
i = 4
d = 10
while i < max:
    print i
    i += 2 + d
    d += 8
Now note that we are constantly adding 2 + d, so we can do better by incorporating the 2 into d:

max = n*n
i = 4
d = 12
while i < max:
    print i
    i += d
    d += 8
Blazing fast: it takes only two additions to compute each square.
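
Here is that final pseudocode as a runnable C sketch (my translation, assuming, as the pseudocode does, that we want every even square below n*n):

#include <stdio.h>

int main(void)
{
    long long n;
    if (scanf("%lld", &n) != 1)
        return 1;
    long long max = n * n;
    long long i = 4;    /* current even square: 4, 16, 36, ... */
    long long d = 12;   /* gap to the next even square; grows by 8 each time */
    while (i < max) {
        printf("%lld\n", i);
        i += d;
        d += 8;
    }
    return 0;
}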

I like your solution. The only suggestions I would make are:
Put the (i*i) <= n test in the middle clause of your for loop; then it's checked earlier and you break out of the loop sooner.
You don't need to check that (i*i) % 2 == 0, since i is always even, and the square of an even number is always even.
With those two changes in mind, you can get rid of the if statement in your for loop and just print.

The square of an even number is even, so you really do not need to check that again. Here is the code I would suggest:

for (i = 2; i*i <= n; i += 2)
    printf("%d\t", i*i);

The largest value of i in your loop should be the floor of the square root of n.
The reason is that the square of any integer larger than this will be greater than n. So if you make this change, you don't need to check that i*i <= n inside the loop.
Also, as others have pointed out, there is no point in checking that i*i is even, since the square of every even number is even.
And you are right to ignore odd i, since for any odd i, i*i is odd.
Your code with the aforementioned changes follows:
#include "stdio.h"
#include "math.h"
int main ()
{
int n,i;
scanf("%d", &n);
for( i = 2; i <= (int)floor(sqrt(n)); i = i+2 ) {
printf("%d \n",(i*i));
}
return 0;
}
