Big-O small clarification - c

Is O(log(log(n))) actually just O(log(n)) when it comes to time complexity?
Do you agree that this function g() has a time complexity of O(log(log(n)))?
int f(int n) {
    if (n <= 1)
        return 0;
    return f(n/2) + 1;
}

int g(int n) {
    int m = f(f(n));
    int i;
    int x = 0;
    for (i = 0; i < m; i++) {
        x += i * i;
    }
    return m;
}

The function f(n) computes the floor of the base-2 logarithm of n by repeatedly dividing by 2. It iterates log2(n) times.
Calling it on its own result will indeed return log2(log2(n)), taking an additional log2(log2(n)) iterations.
So far the complexity is O(log(n)) + O(log(log(n))). The first term dominates the second, so the overall complexity is O(log(n)).
The final loop iterates log2(log2(n)) times, so the time complexity of this last phase is O(log(log(n))), negligible compared to the initial phase.
Note that since x is not used before the end of function g, computing it is not needed and the compiler may well optimize this loop away.
Overall time complexity comes out as O(log(n)), which is not the same as O(log(log(n))).
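If it helps to see the counts concretely, here is a small sketch (my addition; the global counter is hypothetical and not in the original code) that instruments f and prints how many halving steps each call performs. The first call should take about log2(n) steps and the second about log2(log2(n)).

#include <stdio.h>

static long calls = 0;              /* hypothetical counter, not in the original code */

int f(int n) {
    if (n <= 1)
        return 0;
    calls++;                        /* one halving step */
    return f(n / 2) + 1;
}

int main(void) {
    int n = 1 << 20;                /* n = 2^20, so log2(n) = 20 */

    calls = 0;
    int m1 = f(n);                  /* first call: ~log2(n) steps */
    long first = calls;

    calls = 0;
    int m2 = f(m1);                 /* second call: ~log2(log2(n)) steps */
    long second = calls;

    printf("f(n) = %d in %ld steps, f(f(n)) = %d in %ld more steps\n",
           m1, first, m2, second);  /* expect: 20 in 20 steps, then 4 in 4 steps */
    return 0;
}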

Looks like it is log(n) + log(log n) + log(log n):
in order, the first call to f(), the second call to f() on its result, and the for loop. The final complexity is O(log n), because lower-order terms are ignored.

int f(int n) {
    if (n <= 1)
        return 0;
    return f(n/2) + 1;
}
This has time complexity of order O(log2(n)), where 2 is the base of the logarithm.
int g(int n) {
    int m = f(f(n));   // O(log2(n)) for the inner f(n), plus O(log2(log2(n))) for the outer call
    int i, x = 0;
    for (i = 0; i < m; i++) {
        x += i*i;
    }
    // This for loop takes O(log2(log2(n)))
    return m;
}
Hence the overall time complexity of the given function is:
T(n) = t1 + t2 + t3, where t1 = O(log2(n)) for the inner f(n), t2 = O(log2(log2(n))) for the outer f, and t3 = O(log2(log2(n))) for the loop.
But here O(log2(n)) dominates over O(log2(log2(n))).
Hence the time complexity of the given function is O(log2(n)).
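For a concrete sense of that domination, take n = 2^16 (just an illustrative value): log2(n) = 16 while log2(log2(n)) = 4, so T(n) ≈ 16 + 4 + 4 = 24 steps, two thirds of which come from the first f(n) call alone, and the gap only widens as n grows.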
Please read What is a plain English explanation of "Big O" notation? once.

The time consumed by O(log n) algorithms depends only linearly on the number of digits of n, so they are very easy to scale.
Say you want to compute F(100000000), the 10^8-th Fibonacci number. For an O(log n) algorithm it is only going to take about 4x the time consumed by computing F(100).
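As an illustration of such an O(log n) algorithm (a hedged sketch using fast doubling, which is my choice and not necessarily what the answer had in mind), the n-th Fibonacci number can be reached in about log2(n) doubling steps. The value itself overflows 64-bit integers long before n = 10^8, so this sketch works modulo 2^64 and only demonstrates the step count.

#include <stdio.h>
#include <stdint.h>

/* Fast doubling: F(2k) = F(k) * (2*F(k+1) - F(k)), F(2k+1) = F(k)^2 + F(k+1)^2.
 * Values wrap around modulo 2^64; only the number of steps matters here. */
static void fib_pair(uint64_t n, uint64_t *f, uint64_t *g, long *steps) {
    if (n == 0) { *f = 0; *g = 1; return; }
    uint64_t a, b;
    fib_pair(n / 2, &a, &b, steps);
    (*steps)++;
    uint64_t c = a * (2 * b - a);       /* F(2k)   */
    uint64_t d = a * a + b * b;         /* F(2k+1) */
    if (n % 2 == 0) { *f = c; *g = d; }
    else            { *f = d; *g = c + d; }
}

int main(void) {
    uint64_t f, g;
    long steps = 0;
    fib_pair(100, &f, &g, &steps);
    printf("F(100) mod 2^64 reached in %ld steps\n", steps);        /* about 7 steps  */
    steps = 0;
    fib_pair(100000000, &f, &g, &steps);
    printf("F(10^8) mod 2^64 reached in %ld steps\n", steps);       /* about 27 steps */
    return 0;
}

The step counts (roughly 7 versus 27) show the promised factor of about 4 between F(100) and F(10^8).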
O(log log n) terms can show up in a variety of different places, but there are typically two main routes that arrive at this runtime.

Related

find time and space complexity

I need to find the time and space complexity of f3.
I think that g has a space complexity of log(n), and likewise for its time complexity, but I am not really sure how to find the time and space complexity of f3, because the call to g is inside the for loop's condition. Does that mean g is called on every iteration to check whether g(i) < n?
int g(int n)
{
    if (n <= 1)
        return 1;
    return g(n / 2) + 1;
}

int f3(int n)
{
    int counter = 0;
    for (int i = 0; g(i) < n; ++i)
        ++counter;
    return counter;
}
In short:
Time complexity of f3(n) = O(n * 2^n)
Space complexity of f3(n) = O(n)
Space and time complexity of g(n) = O(log(n))
Note: all logs referred to here are base 2, and all notations are in Big O notation.
Details:
The function "g()" returns floor(log(n))+1. The loop in function "f3()" cotinues until the function 'g()' returns n. For g(i) to return 'n', i needs to be '2^(n-1)'.
In order to terminate the loop in "f3()", 'i' needs to reach 2^(n-1). So 'g()' is called 2^(n-1) times.
So Time complexity of "f3()" = 2^(n-1) x (time complexity of g()) = 2^n x (log n) = O(2^n)
Largest memory used will be during the last call of 'g(i)' where i==2^(n-1).
Therefore space complexity of "f3()" = log(2^(n-1)) = O(n)
Since g(n) runs in Theta(log(n)) and each recursive call adds 1 to the result, the final value of g(i) is floor(log(i)) + 1. On the other hand, the loop inside f3 runs while g(i) < n, which means it iterates Theta(2^n) times. Hence the time complexity of f3(n) = sum over i = 1 to 2^n of log(i) = log((2^n)!) = Theta(n * 2^n), since log((2^n)!) ~ 2^n * log(2^n).
About space complexity: the maximum recursion depth of g(i) is reached at the largest i, about 2^n, which gives n nested recursive calls. Hence the space complexity of f3 is Theta(n).
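A hedged way to check the Theta(n * 2^n) estimate empirically (the counter below is my addition, not part of the original question): count the recursive calls made by g across one run of f3 and compare the total against n * 2^n for small n.

#include <stdio.h>

static long g_calls = 0;            /* hypothetical counter added for measurement */

int g(int n) {
    g_calls++;
    if (n <= 1)
        return 1;
    return g(n / 2) + 1;
}

int f3(int n) {
    int counter = 0;
    for (int i = 0; g(i) < n; ++i)
        ++counter;
    return counter;
}

int main(void) {
    for (int n = 5; n <= 20; n += 5) {
        g_calls = 0;
        f3(n);
        /* g_calls grows on the order of n * 2^n (the constant factor differs) */
        printf("n = %2d  g_calls = %10ld  n*2^n = %10ld\n",
               n, g_calls, (long)n * (1L << n));
    }
    return 0;
}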

Complexity of a divide and conquer recursive algorithm

I'm trying to obtain the complexity of a particular divide and conquer algorithm to transpose a given matrix.
From what I've been reading, I got that the recursion should start as follows:
C(1) = 1
C(n) = 4C(n/2) + O(n)
I know how to solve the recurrence, but I'm not sure if it's right. Every time the function is called, the problem is divided by 2 (via fIni and fEnd), and then another 4 recursive calls are made. Also, at the end, swap is called with a complexity of O(n²), so I'm pretty sure I'm not taking that into account in the above recurrence.
The code is as follows:
void transposeDyC(int **m, int f, int c, int fIni, int fEnd, int cIni, int cEnd){
    if (fIni < fEnd) {
        int fMed = (fIni + fEnd) / 2;
        int cMed = (cIni + cEnd) / 2;
        transposeDyC(m, f, c, fIni, fMed, cIni, cMed);
        transposeDyC(m, f, c, fIni, fMed, cMed+1, cEnd);
        transposeDyC(m, f, c, fMed+1, fEnd, cIni, cMed);
        transposeDyC(m, f, c, fMed+1, fEnd, cMed+1, cEnd);
        swap(m, f, c, fMed+1, cIni, fIni, cMed+1, fEnd-fMed);
    }
}

void swap(int **m, int f, int c, int fIniA, int cIniA, int fIniB, int cIniB, int dimen){
    for (int i = 0; i <= dimen-1; i++){
        for (int j = 0; j <= dimen-1; j++) {
            int aux = m[fIniA+i][cIniA+j];
            m[fIniA+i][cIniA+j] = m[fIniB+i][cIniB+j];
            m[fIniB+i][cIniB+j] = aux;
        }
    }
}
I'm really stuck in this complexity with recursion and divide and conquer. I don't know how to continue.
You got the recurrence wrong. It is 4C(n/2) + O(n^2), because when joining the matrix back together, a size-n problem touches on the order of n^2 elements.
Two ways:
Master Theorem
Here we have a = 4, b = 2, c = 2, and log_b(a) = 2.
Since log_b(a) == c, this falls under case 2, resulting in a complexity of O(n^c log n) = O(n^2 log n).
Recurrence tree visualization
If you try to unfold your recurrence, you can see that you are solving the problem of size n by breaking it down into 4 problems of size n/2 and then doing n^2 work of combining (at each node).
Work at each node: Work(n) = 4 * Work(n/2) + n^2.
The total number of levels will be equal to the number of times you have to halve the n-sized problem until you reach a problem of size 1. That is simply log2(n).
Since the work per level stays at Theta(n^2), the total work = log(n) * n^2, which is O(n^2 log n).
Each recursive step reduces the size of a subproblem by a factor of 2 (so the number of elements per subproblem by a factor of 4), which means the number of levels of recursion is O(log n). At each level, the swaps do O(n^2) work in total, so the algorithm has complexity O(n^2 log n).
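If it helps to sanity-check the closed form, here is a small sketch (my construction; the per-call cost (n/2)^2 is taken from the size of the block that swap handles): it evaluates the recurrence C(1) = 1, C(n) = 4*C(n/2) + (n/2)^2 for powers of two and prints the ratio C(n) / (n^2 * log2(n)).

#include <stdio.h>
#include <math.h>

/* Evaluates C(1) = 1, C(n) = 4*C(n/2) + (n/2)^2 for n a power of two.
 * (n/2)^2 models the element swaps done by the swap() call in each invocation. */
static double C(long n) {
    if (n <= 1)
        return 1.0;
    double half = (double)(n / 2);
    return 4.0 * C(n / 2) + half * half;
}

int main(void) {
    for (long n = 4; n <= 1024; n *= 4) {
        double ratio = C(n) / ((double)n * n * log2((double)n));
        /* for powers of two the ratio works out to 1/4 + 1/log2(n), i.e. it tends to a
         * constant, which is what Theta(n^2 log n) predicts */
        printf("n = %5ld  C(n)/(n^2 log2 n) = %.3f\n", n, ratio);
    }
    return 0;
}

(Compile with -lm for log2.)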

Time Complexity when inner loop starts with j=2

I get O(n^2 log n) as the answer for the following code, yet I am unable to understand why.
int unknown(int n) {
    int i, j, k = 0;
    for (i = n / 2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n / 2;
    return k;
}
A fixed constant starting point will make no difference to the inner loop in terms of complexity.
Starting at two instead of one will mean one less iteration but the ratio is still a logarithmic one.
Think in terms of what happens when you double n: this adds one more iteration to that loop regardless of whether you start at one or two. Hence it's O(log N) complexity.
However, you should keep in mind that the outer loop is an O(N) one, since the number of iterations is proportional to N. That makes the function as a whole O(N log N), not the O(N^2 log N) you posit.
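A quick hedged check (the counter below is my addition, not part of the original snippet): count how many times the innermost statement executes and compare it with (n/2) * log2(n); a ratio near 1 is the O(N log N) behaviour described above.

#include <stdio.h>
#include <math.h>

/* Same loop structure as the snippet above, with a counter on the innermost statement. */
static long unknown_count(int n) {
    long count = 0;
    for (int i = n / 2; i <= n; i++)
        for (int j = 2; j <= n; j = j * 2)
            count++;                    /* stands in for k = k + n/2 */
    return count;
}

int main(void) {
    for (int n = 1 << 10; n <= (1 << 16); n <<= 2) {
        long c = unknown_count(n);
        double ratio = (double)c / ((n / 2.0) * log2((double)n));
        printf("n = %6d  inner iterations = %8ld  ratio to (n/2)*log2(n) = %.3f\n",
               n, c, ratio);
    }
    return 0;
}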

Big O of duplicate check function

I would like to know exactly how to compute the big O of the second (inner) while loop, given that its number of repetitions keeps going down over time.
int duplicate_check(int a[], int n)
{
    int i = n;
    while (i > 0)
    {
        i--;
        int j = i - 1;
        while (j >= 0)
        {
            if (a[i] == a[j])
            {
                return 1;
            }
            j--;
        }
    }
    return 0;
}
Still O(n^2), regardless of the shrinking inner loop.
The value you are computing is the sum of (n-k) for k = 0 to n.
This equates to (n^2 + n) / 2, which, since O() ignores constants and lower-order terms, is O(n^2).
Note that you can solve this problem more efficiently by sorting the array, O(n log n), and then searching for two consecutive equal numbers, O(n), for a total of O(n log n).
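A sketch of that O(n log n) alternative, using the standard library's qsort (the comparator and the copy-then-sort strategy are my assumptions, not spelled out in the answer): sort a copy of the array, then a single pass finds any two equal neighbours.

#include <stdlib.h>
#include <string.h>

/* comparator for qsort over ints */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Returns 1 if a[] contains a duplicate, 0 if not, -1 on allocation failure.
 * Sorting dominates: O(n log n); the final scan is O(n). */
int duplicate_check_sorted(const int a[], int n) {
    int *copy = malloc((size_t)n * sizeof *copy);
    if (copy == NULL)
        return -1;
    memcpy(copy, a, (size_t)n * sizeof *copy);
    qsort(copy, (size_t)n, sizeof *copy, cmp_int);
    int found = 0;
    for (int i = 1; i < n; i++) {
        if (copy[i] == copy[i - 1]) {
            found = 1;
            break;
        }
    }
    free(copy);
    return found;
}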
Big O is an estimate of theoretical speed; it's not an exact calculation.
Like twain249 said, regardless, the time complexity is O(n^2).
Big O describes the worst-case time complexity of an algorithm, that is, the maximum time the algorithm can ever take. It is an upper bound: whatever the input, the time complexity will always stay under that bound.
In your case the worst case is when i iterates all the way down to 0; then the cost adds up as follows:
for i = n, j runs n-1 times; for i = n-1, j runs n-2 times; and so on.
Adding it all up: (n-1) + (n-2) + (n-3) + ... + (n-n) = n(n-1)/2 = n^2/2 - n/2.
After ignoring the lower-order term n/2 and the constant factor 1/2, this becomes n^2.
So it is O(n^2); that's how it is computed.

Why does this loop return a value that's O(n log log n) and not O(n log n)?

Consider the following C function:
int fun1 (int n)
{
int i, j, k, p, q = 0;
for (i = 1; i<n; ++i)
{
p = 0;
for (j=n; j>1; j=j/2)
++p;
for (k=1; k<p; k=k*2)
++q;
}
return q;
}
The question is to decide which of the following most closely approximates the return value of the function fun1:
(A) n^3
(B) n (log n)^2
(C) n log n
(D) n log(log n)
This was the explanation that was given:
int fun1 (int n)
{
    int i, j, k, p, q = 0;
    // This loop runs T(n) time
    for (i = 1; i < n; ++i)
    {
        p = 0;
        // This loop runs T(Log Log n) time
        for (j = n; j > 1; j = j / 2)
            ++p;
        // This loop runs T(Log Log n) time
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
But the time complexity of a loop is considered O(log n) if the loop variable is divided or multiplied by a constant amount:
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
But it was mentioned that the inner loops take Θ(Log Log n) time each. Can anyone explain the reason, or is the given answer wrong?
This question is tricky - there is a difference between what the runtime of the code is and what the return value is.
The first inner loop's runtime is indeed O(log n), not O(log log n). I've reprinted it here:
p = 0;
for (j = n; j > 1; j = j / 2)
    ++p;
On each iteration, the value of j drops by a factor of two. This means that the number of steps required for this loop to terminate is given by the minimum value of k such that n / 2^k ≤ 1. Solving, we see that k = O(log2 n).
Notice that each iteration of this loop increases the value of p by one. This means that at the end of the loop, the value of p is Θ(log n). Consequently, this next loop does indeed run in time O(log log n):
for (k = 1; k < p; k = k * 2)
    ++q;
The reason for this is that, using similar reasoning to the previous section, the runtime of this loop is Θ(log p), and since p = Θ(log n), this ends up being Θ(log log n).
However, the question is not asking what the runtime is. It's asking what the return value is. On each iteration, the value of q, which is what's ultimately returned, increases by Θ(log log n) because it's increased once per iteration of a loop that runs in time Θ(log log n). This means that the net value of q is Θ(n log log n). Therefore, although the algorithm runs in time O(n log n), it returns a value that's O(n log log n).
Hope this helps!
The only thing wrong I see here concerns the second loop:
for (j=n; j>1; j=j/2)
You say in the comments: This loop runs Θ(Log Log n) time.
As I see it, this loop runs O(Log n) times
The running times for the first and third loops are correct (O(n) and O(Log Log n)).
EDIT: I agree with the previous answer. I did not notice that the question is about the return value, not the running time!
Answer would be (D), O(n * log(log n)). The reason is described below:
The first for loop encompasses the other 2 for loops, which are driven by the values of j and k respectively. j is halved starting from n until it is no longer greater than 1, so p ends up equal to the greatest integer not exceeding log n, i.e. floor(log n). Then k doubles until it reaches p, where p was just set by the previous loop to floor(log n).
So the third loop runs for about log(log n) steps, and q therefore grows by about log(log n) on each pass of the outer loop, which itself runs n times.
Approximate value of q = n * log(log n), which is O(n log(log n)).
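To see the runtime/return-value distinction numerically, here is a hedged instrumentation (the work counter is my addition): it runs fun1 with a counter on each inner loop, then compares the returned q with n * log2(log2(n)) and the total inner-loop work with n * log2(n).

#include <stdio.h>
#include <math.h>

static long work = 0;       /* counts every inner-loop iteration: a proxy for running time */

int fun1(int n) {
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i) {
        p = 0;
        for (j = n; j > 1; j = j / 2) { ++p; ++work; }
        for (k = 1; k < p; k = k * 2) { ++q; ++work; }
    }
    return q;
}

int main(void) {
    for (int n = 1 << 10; n <= (1 << 20); n <<= 5) {
        work = 0;
        int q = fun1(n);
        /* q tracks n log log n, while work tracks n log n */
        printf("n = %8d  q = %9d  q/(n log2 log2 n) = %.2f  work/(n log2 n) = %.2f\n",
               n, q,
               q / (n * log2(log2((double)n))),
               work / (n * log2((double)n)));
    }
    return 0;
}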
