C loop complexity

I'm preparing for an exam and these are some of the problems from last year's tests. The task is to calculate both the exact and the asymptotic complexity. How would you solve them, ideally in a general way?
for (i = j = 0; i < n; j++) {
    doSomething();
    i += j / n;
    j %= n;
}

for (i = 0; i < 2 * n; i += 2)
    for (j = 1; j <= n; j <<= 1)
        if (j & i)
            doSomething();

for (i = 0; i < 2 * n; i++) {
    if (i > n)
        for (j = i; j < 2 * i; j++) doSomething();
    else
        for (j = n; j < 2 * n; j++) doSomething();
}
Thanks in advance

My solution for the third loop is
t(n) = [ (n-1)*n + ((n-1)*n)/2 ] *D + [ n^2 +n ] *D + [ 2n ]*I
so it is in O(n^2), given that doSomething() runs in constant time and that i and j are integers.
The second term ( [ n^2 +n ] *D ) is fairly easy.
The loop
for (j = n; j < 2 * n; j++) doSomething();
gets executed while i <= n, so it runs n+1 times, since i starts from 0.
Each of those runs calls doSomething() n times, so we get (n+1)*n*D = [n^2 + n]*D. I assume that doSomething() takes constant time, which is equal to D.
The first term is a little bit more complex.
for (j = i; j < 2 * i; j ++ ) doSomething();
gets executed when i > n, so it runs n-1 times.
Each run calls doSomething() i times.
The first run makes n+1 calls, the second n+2, and so on up to 2n-1, which is equal to n + (n-1).
So we get a sequence like this: {n+1, n+2, n+3, ..., n+(n-1)}.
If we sum up the sequence we get n-1 times n plus the sum 1 + 2 + 3 + ... + (n-1).
The last term can be solved with the Gauss sum formula (German: "Gaußsche Summenformel", i.e. 1 + 2 + ... + m = m(m+1)/2), so it is equal to ((n-1)*n)/2.
So the first term is [ (n-1)*n + ((n-1)*n)/2 ] * D.
And the last term is therefore the if statement, which is evaluated 2n times, contributing [ 2n ]*I, where I is the time to execute the if statement.
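To sanity-check the count, here is a small brute-force program (just a sketch of mine, not part of the exam; it replaces doSomething() with a counter and compares the measured number of calls with the terms above):

#include <stdio.h>

int main(void)
{
    for (long n = 1; n <= 6; n++) {
        long calls = 0;                               /* doSomething() calls */
        for (long i = 0; i < 2 * n; i++) {
            if (i > n)
                for (long j = i; j < 2 * i; j++) calls++;
            else
                for (long j = n; j < 2 * n; j++) calls++;
        }
        /* closed form from the text: (n^2 + n) + (n-1)*n + ((n-1)*n)/2 */
        long expected = (n * n + n) + (n - 1) * n + ((n - 1) * n) / 2;
        printf("n=%ld  calls=%ld  expected=%ld\n", n, calls, expected);
    }
    return 0;
}

Both columns agree, which supports the O(n^2) result.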

Well, the question here is, for all three loop structures, how the number of iterations grows with n, right? So let's look at the loops. I'll omit the third one, since you solved it already.
for (i = 0; i < 2 * n; i += 2)
    for (j = 1; j <= n; j <<= 1)
        if (j & i)
            doSomething();
The outer for loop obviously runs exactly n times (i goes 0, 2, 4, ..., 2n-2). The inner loop runs about log_2(n) + 1 times, because the left shift doubles j on every step. The if check takes constant time, so the entire construct is in O(n * log_2(n)), assuming that doSomething() runs in constant time as well.
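If you want to convince yourself, a small counting sketch (mine; it counts how often the if condition is evaluated, since doSomething() itself only fires when j & i is non-zero) shows the n * (floor(log_2(n)) + 1) pattern:

#include <stdio.h>

int main(void)
{
    for (long n = 1; n <= 64; n *= 2) {
        long checks = 0;                    /* evaluations of the if condition */
        for (long i = 0; i < 2 * n; i += 2)
            for (long j = 1; j <= n; j <<= 1)
                checks++;
        printf("n=%ld  checks=%ld\n", n, checks);  /* n * (floor(log2(n)) + 1) */
    }
    return 0;
}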
Does that make it clearer? :)

As per request, I will explain how I came to the result that the first loop is equivalent to a construction like this:
int i, j;
for (i = 0; i < n; i++) {
    for (j = 0; j <= n; j++) {
        doSomething();
    }
}
First of all, I must admit that before I really thought about it, I just wrote a little sample program containing the first of the three for-loops that prints out i and j during the iteration. After seeing the results, I thought about why they look the way they do.
In the comment, I forgot to add that I defined n=200.
Explanation
We can say that although j is incremented on every iteration, it will never exceed a value of n. Why? After n iterations, j == n. In that iteration, i += j / n increments i by 1, and j %= n resets j to 0 (the loop update then bumps it back to 1). In every other iteration j / n is 0, so i does not change. In other words, i advances by 1 only once per roughly n iterations, and this repeats until i >= n, which gives about n * n iterations in total, matching the nested construction above.
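Instead of printing i and j, you can also just count the iterations of both forms and compare; a rough sketch (mine, with doSomething() replaced by counters):

#include <stdio.h>

int main(void)
{
    long n = 200;                     /* same n as in the comment above */
    long count1 = 0, count2 = 0;
    long i, j;

    /* the original loop */
    for (i = j = 0; i < n; j++) {
        count1++;
        i += j / n;
        j %= n;
    }

    /* the nested construction it is compared to */
    for (i = 0; i < n; i++)
        for (j = 0; j <= n; j++)
            count2++;

    /* both are Theta(n^2); the exact counts differ only by about n */
    printf("original: %ld  nested: %ld  n*n: %ld\n", count1, count2, n * n);
    return 0;
}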

Related

How to find big-O for these tricky for-loops?

I wanted to confirm if I got the correct big-O for a few snippets of code involving for-loops.
for ( int a = 0; a < n; a ++)
for ( int b = a; b < 16 ; b ++)
sum += 1;
I think this one is O(16N) => O(N), but the fact that b starts at a rather than 0 in the second for-loop is throwing me off.
int b = 0;
for ( int a = 0; a < n ; a ++)
sum += a * n;
for ( ; b < n ; b ++)
sum++;
I want to say O(N^2) since there are nested for-loops where both loops go to n. However, b in the second loop uses the initialization from the outer scope, and I'm not sure if that affects the runtime.
for (int a = 0; a < (n * n); a ++)
sum++;
if (a % 2 == 1)
for (; a < (n * n * n); a ++)
sum++;
I got that the first for-loop is O(N^2) and the one under the if-statement is O(N^3), but I don't know how to account for the if-statement.
The first one is O(n * min(n, 16)), because the inner 16-loop counts as O(1) when n > 16 (it stops doing any iterations once a reaches 16), so the whole thing is O(n). If n < 16 the bound reads O(n^2), but then n is below a constant anyway.
The second one is O(n): b is never reset, so even if the second loop is nested inside the first, after the first outer iteration b is already n and the inner loop does no further work.
The third one is O(n^3): within the first two iterations you hit the if statement with a odd, and then a is incremented all the way up to n^3; at the next outer check a < n*n is false, so the outer loop exits entirely.
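If you want to check the third one numerically, a quick sketch (mine; it assumes the if statement sits inside the outer loop and declares a outside the for so the code compiles) prints the count next to n^3:

#include <stdio.h>

int main(void)
{
    for (long n = 2; n <= 32; n *= 2) {
        long sum = 0;
        long a;
        for (a = 0; a < n * n; a++) {
            sum++;
            if (a % 2 == 1)
                for (; a < n * n * n; a++)
                    sum++;
        }
        printf("n=%ld  sum=%ld  n^3=%ld\n", n, sum, n * n * n);
    }
    return 0;
}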
Hopefully this answers your questions, good luck!

How can I find the time complexity for inner loops that depend on "i" from the outer loop:

How can I find the time complexity of inner loops that depend on i from an outer loop, such as:
int sum = 0;
for (int i = 0; i < n * n; i++) {
    for (int j = n - 1; j >= n - 1 - i; j--) {
        sum = i + j;
        System.out.println(sum);
    }
}
I'm having a hard time figuring out the time complexity for that. I know that when the inner loop depends on i we use the summation rule, but how does that work in this case?
The inner loop performs 1 iteration at the start, increasing up to n*n iterations by the end.
i iterations of inner loop
- ------------------------
0 1
1 2
2 3
...
n*n-2 n*n-1
n*n-1 n*n
The total number of iterations of the inner loop is the sum of these:
1 + 2 + 3 + ... + n*n = (n*n)(n*n + 1)/2
The time complexity is therefore Θ(n^4).
The inner loop changes with the outer loop, so it is dependent. You cannot directly find the cost of only the inner loop.
for (int i = 0; i < n * n; i++) {               // \sum_{i=0}^{n^2-1}
    for (int j = n - 1; j >= n - 1 - i; j--) {  // \sum_{j=n-1-i}^{n-1}
        sum = i + j;                            // 1
        System.out.println(sum);
    }
}
So, we have to solve \sum_{i=0}^{n^2-1} \sum_{j=n-1-i}^{n-1} 1 = \sum_{i=0}^{n^2-1} (i + 1) = \frac{n^2 (n^2 + 1)}{2} = \Theta(n^4).
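To double-check, here is a small counting program (my own sketch, written in C and without the println, so it only counts inner-loop iterations):

#include <stdio.h>

int main(void)
{
    for (long n = 1; n <= 8; n++) {
        long count = 0;
        for (long i = 0; i < n * n; i++)
            for (long j = n - 1; j >= n - 1 - i; j--)
                count++;
        /* closed form: n^2 * (n^2 + 1) / 2, which is Theta(n^4) */
        long expected = n * n * (n * n + 1) / 2;
        printf("n=%ld  count=%ld  expected=%ld\n", n, count, expected);
    }
    return 0;
}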

What is the complexity of this sum algorithm?

#include <stdio.h>
int main() {
    int N = 8; /* for example */
    int sum = 0;
    for (int i = 1; i <= N; i++)
        for (int j = 1; j <= i*i; j++)
            sum++;
    printf("Sum = %d\n", sum);
    return 0;
}
for each n value (i variable), j values will be n^2. So the complexity will be n . n^2 = n^3. Is that correct?
If the problem becomes:
#include <stdio.h>
int main() {
    int N = 8; /* for example */
    int sum = 0;
    for (int i = 1; i <= N; i++)
        for (int j = 1; j <= i*i; j++)
            for (int k = 1; k <= j*j; k++)
                sum++;
    printf("Sum = %d\n", sum);
    return 0;
}
Then do you take the existing n^3 times n^2 = n^5? Is that correct?
We have i and j < i*i and k < j*j which is x^1 * x^2 * (x^2)^2 = x^3 * x^4 = x^7 by my count.
In particular, since 1 <= i <= N we have O(N) for the i loop. Since 1 <= j <= i^2 <= N^2 we have O(N^2) for the second loop. Extending the logic, we have 1 <= k <= j^2 <= (i^2)^2 <= N^4 for the third loop.
Going from the inner to the outer loops: the k loop executes up to N^4 times per j iteration, the j loop up to N^2 times per i iteration, and the i loop N times, making the total of order N^4 * N^2 * N = N^7, i.e. O(N^7).
I think the complexity is actually O(n^7).
The first loop executes N steps.
The second loop executes N^2 steps.
In the third loop, j*j can reach N^4, so it has O(N^4) complexity.
Overall, N * N^2 * N^4 = O(N^7)
For i = 1 the inner loop runs 1^2 times, for i = 2 the inner loop runs 2^2 times, ..., and for i = N the inner loop runs N^2 times. Its complexity is (1^2 + 2^2 + 3^2 + ... + N^2), which is of order O(N^3).
In the second case, for i = N the first inner loop iterates up to N^2 times and the second (innermost) loop up to (N^2)^2 = N^4 times, so the innermost statement runs up to N * N^2 * N^4 times in total. Hence the complexity is of order N * N^2 * N^4, i.e. O(N^7).
Yes. In the first example, the i loop runs N times, and the inner j loop runs i*i times, which is O(N^2). So the whole thing is O(N^3).
In the second example there is an additional loop (up to j*j, which is O(N^4)), so it is O(N^7) overall.
For a more formal proof, work out how many times sum++ is executed in terms of N, and look at the highest polynomial order of N. In the first example it will be a(N^3)+b(N^2)+c(N)+d (for some values of a, b, c and d), so the answer is 3.
NB: Edited re example 2 to say it's O(N^4): misread i*i for j*j.
Consider the number of times all loops will be called.
int main() {
    int N = 8; /* for example */
    int sum = 0;
    for (int i = 1; i <= N; i++)            /* runs N times */
        for (int j = 1; j <= i*i; j++)      /* runs up to i*i <= N^2 times per i */
            for (int k = 1; k <= j*j; k++)  /* runs up to j*j <= N^4 times per j */
                sum++;
    printf("Sum = %d\n", sum);
    return 0;
}
Thus the sum++ statement is called O(N^4)*O(N^2)*O(N) = O(N^7) times, and this is the overall complexity of the program.
The incorrect way to solve this (although common, and often gives the correct answer) is to approximate the average number of iterations of an inner loop with its worst-case. Here, the inner loop loops at worst O(N^4), the middle loop loops at worst O(N^2) times and the outer loop loops O(N) times, giving the (by chance correct) solution of O(N^7) by multiplying these together.
The right way is to work from the inside out, being careful to be explicit about what's being approximated.
The total number of executions, T, of the increment instruction can be written down directly from your code:
T = sum(i=1..N)sum(j=1..i^2)sum(k=1..j^2)1.
The innermost sum is just j^2, giving:
T = sum(i=1..N)sum(j=1..i^2)j^2
The sum indexed by j is a sum of squares of consecutive integers. We can calculate that exactly: sum(j=1..n)j^2 is n*(n+1)*(2n+1)/6. Setting n=i^2, we get
T = sum(i=1..N)i^2*(i^2+1)*(2i^2+1)/6
We could continue to compute the exact answer, by using the formula for sums of 6th, 4th and 2nd powers of consecutive integers, but it's a pain, and for complexity we only care about the highest power of i. So we can approximate.
T = sum(i=1..N)(i^6/3 + o(i^5))
We can now use that sum(i=1..N)i^p = Theta(N^{p+1}) to get the final result:
T = Theta(N^7)
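As a quick sanity check of the Θ(N^7) result, one can brute-force T for small N and compare it with the leading term N^7/21 from the sum above (a sketch of mine; the ratio tends toward 1 for large N, though it is still noticeably above 1 at these small sizes):

#include <stdio.h>

int main(void)
{
    for (long N = 1; N <= 16; N *= 2) {
        long long T = 0;                 /* exact number of sum++ executions */
        for (long i = 1; i <= N; i++)
            for (long j = 1; j <= i * i; j++)
                for (long k = 1; k <= j * j; k++)
                    T++;
        double lead = (double)N * N * N * N * N * N * N / 21.0;   /* N^7 / 21 */
        printf("N=%ld  T=%lld  N^7/21=%.0f  ratio=%.3f\n",
               N, T, lead, (double)T / lead);
    }
    return 0;
}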

cyclic permutation in O(1) space and O(n) time

I saw an interview question which asked to
Interchange arr[i] and i for i=[0,n-1]
EXAMPLE :
input : 1 2 4 5 3 0
answer : 5 0 1 4 2 3
explanation : a[1] = 2 in the input, so a[2] = 1 in the answer, and so on
I attempted this but am not getting the correct answer.
What I am able to do is: for a pair of numbers p and q, set a[p] = q and a[q] = p.
Any thoughts on how to improve it are welcome.
FOR(j, 0, n-1)
{
    i = j;
    do {
        temp = a[i];
        next = a[temp];
        a[temp] = i;
        i = next;
    } while (i > j);
}
print_array(a, i, n);
It would be easier for me to understand your answer if it contains pseudocode with some explanation.
EDIT: I came to know it is a cyclic permutation, so I changed the question title.
Below is what I came up with (Java code).
Starting from each unprocessed index, it follows the cycle: it takes the value x = a[i], remembers what is currently stored at a[x], writes the source index into a[x], and then continues with the remembered value, repeating until the cycle returns to the starting index.
I use negative values as a flag to indicate that the value's already been processed.
Running time:
Since it only processes each value once, the running time is O(n).
Code:
int[] a = {1, 2, 4, 5, 3, 0};
for (int i = 0; i < a.length; i++)
{
    if (a[i] < 0)
        continue;
    int j = a[i];
    int last = i;
    do
    {
        int temp = a[j];
        a[j] = -last - 1;
        last = j;
        j = temp;
    } while (i != j);
    a[j] = -last - 1;
}
for (int i = 0; i < a.length; i++)
    a[i] = -a[i] - 1;
System.out.println(Arrays.toString(a));
Here's my suggestion, O(n) time, O(1) space:
void OrderArray(int[] A)
{
    int X = A.Max() + 1;
    for (int i = 0; i < A.Length; i++)
        A[i] *= X;
    for (int i = 0; i < A.Length; i++)
        A[A[i] / X] += i;
    for (int i = 0; i < A.Length; i++)
        A[i] = A[i] % X;
}
A short explanation:
We use X as a basic unit for the values in the original array: we multiply every value by X, which is larger than any number in A (for a permutation of 0..n-1 this is simply the length of A). At any point we can therefore retrieve the number that was originally in a cell by computing A[i] / X, as long as we haven't added more than X to that cell.
This gives us two layers of values, where A[i] % X represents the value of the cell after the ordering. These two layers never interfere with each other during the process.
When we are finished, we remove the original values (multiplied by X) by performing A[i] = A[i] % X.
Hope that's clear enough.
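For what it's worth, here is the same idea written out in C (my own transcription of the approach above; it assumes the array is a permutation of 0..n-1 and that n*n fits in an int):

#include <stdio.h>

/* In-place inversion using the two-layer encoding described above. */
void order_array(int a[], int n)
{
    int X = n;                       /* larger than any value in a */
    for (int i = 0; i < n; i++)
        a[i] *= X;                   /* old value moves to the "high" layer */
    for (int i = 0; i < n; i++)
        a[a[i] / X] += i;            /* new value goes into the "low" layer */
    for (int i = 0; i < n; i++)
        a[i] %= X;                   /* keep only the new values */
}

int main(void)
{
    int a[] = {1, 2, 4, 5, 3, 0};
    int n = sizeof a / sizeof a[0];
    order_array(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);         /* prints: 5 0 1 4 2 3 */
    printf("\n");
    return 0;
}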
Perhaps it is possible by using the images of the input permutation as indices:
void inverse( unsigned int* input, unsigned int* output, unsigned int n )
{
for ( unsigned int i = 0; i < n; i++ )
output[ input[ i ] ] = i;
}

finding the time complexity

I am trying to understand the subtle difference in the complexity of
each of the examples below.
Example A
int sum = 0;
for (int i = 1; i < N; i *= 2)
for (int j = 0; j < N; j++)
sum++;
My Analysis:
The first for loop goes for lg n times.
The inner loop is independent of outer loop and executes N times every time outer loop executes.
So the complexity must be:
n+n+n... lg n times
Therefore the complexity is n lg n.
Is this correct?
Example B
int sum = 0;
for (int i = 1; i < N; i *= 2)
for(int j = 0; j < i; j++)
sum++;
My Analysis:
The first for loop goes for lg n times.
The inner loop's execution depends on the outer loop.
So how do I calculate the complexity when the number of times the inner loop executes depends on the outer loop?
Example C
int sum = 0;
for (int n = N; n > 0; n /= 2)
for (int i = 0; i < n; i++)
sum++;
I think example C and example B must have the same complexity, because the number of times the inner loop executes depends on the outer loop.
Is this correct?
In examples B and C, the inner loop executes 1 + 2 + ... + n/2 + n times in total. There happen to be lg n terms in this sequence, and that does mean that the inner loop's initialization executes lg n times, but the sum of the series, i.e. the work done by the inner loop's body, is at most 2n. So we get O(n + lg n) = O(n).
(a) Your analysis is correct
(b) The outer loop runs log(N) times. Over those iterations the inner loop runs 1, 2, 4, 8, ... times; this is a geometric series, and its sum is approximately twice its largest term, i.e. O(2^log(N)).
E.g. : 1 + 2 + 4 = (approx)2*4, 1 + 2 + 4 + 8 = (approx)2*8.
Hence the total complexity is O(2^log(N)) = O(N)
(c) This is the same as (b), just with the terms in reverse order.
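A quick numerical check that both totals stay below 2N (my own sketch; it counts only the executions of sum++):

#include <stdio.h>

int main(void)
{
    for (long N = 8; N <= 1024; N *= 4) {
        long b = 0, c = 0;

        /* example B: inner bound grows 1, 2, 4, ... */
        for (long i = 1; i < N; i *= 2)
            for (long j = 0; j < i; j++)
                b++;

        /* example C: inner bound shrinks N, N/2, N/4, ... */
        for (long n = N; n > 0; n /= 2)
            for (long i = 0; i < n; i++)
                c++;

        printf("N=%ld  B=%ld  C=%ld  2N=%ld\n", N, b, c, 2 * N);
    }
    return 0;
}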
Find the time complexity
i = 1;
k = 1;
while (k < n)
{
    stmt;
    k = k + i;
    i++;
}
