Runtime of an easy while loop - loops

I have a short question about the runtime of a while loop.
I have the given code:
Calculate(int n)
    i = n
    while (i > 0)
        i = i / 2
If n is a power of two, how often will the while loop be executed? I am doing revision on something we did at the beginning of the semester and I know it's not hard, but I just don't know how to express the answer. For example, if n = 1 the loop would be executed one time, if n = 2 the loop would be executed 2 times, if n = 4 the loop would be executed 3 times, and so on, but I am not sure how to formulate that mathematically.

A mathematical formula for this will use the binary logarithm:
log2(n) + 1
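
If it helps to check this, here is a minimal C sketch (the helper name count_halvings is mine, not from the question) that counts the loop iterations and compares them with log2(n) + 1 for powers of two:

#include <stdio.h>
#include <math.h>

/* Count how many times the while-loop body runs for a given n. */
static int count_halvings(int n)
{
    int count = 0;
    for (int i = n; i > 0; i = i / 2)
        count++;
    return count;
}

int main(void)
{
    for (int n = 1; n <= 64; n *= 2) {
        int formula = (int)log2(n) + 1;   /* log2(n) + 1, exact for powers of two */
        printf("n = %2d: loop runs %d times, log2(n) + 1 = %d\n",
               n, count_halvings(n), formula);
    }
    return 0;
}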

Related

time complexity of nested loops - always just a multiplication of each of them separated?

When looking at this code, for example:
for (int i = 1; i < n; i *= 2)
    for (int j = 0; j < i; j += 2)
    {
        // some constant time operations
    }
Is it as simple as saying that because the outer loop is log n and the inner loop is n, the combined result is O(n log n)?
Here is the analysis of the example in the question. For simplicity I will neglect the increment of 2 in the inner loop and will consider it as 1, because in terms of complexity it does not matter - the inner loop is linear in i and the constant factor of 2 does not matter.
So we can notice that the outer loop produces values of i which are powers of 2, capped by n, that is:
1, 2, 4, 8, ... , 2^(log2 n)
these numbers are also the number of times the "constant time operation" in the inner loop runs for each such i.
So all we have to do is sum up the above series. It is easy to see that this is a geometric series:
2^0 + 2^1 + 2^2 + ... + 2^(log2 n)
and it has a well-known closed-form solution (the geometric series sum formula from Wikipedia):
a * (1 - r^(m+1)) / (1 - r)
We have a = 1, r = 2, and the upper index of the sum is m = log2 n (the formula's own index variable usually shares the name n, which collides with our n, so it is a bit of a problem; call it m here).
Now let's substitute and get that the sum equals
(1 - 2^((log2 n) + 1)) / (1 - 2) = (1 - 2n) / (1 - 2) = 2n - 1
Which is a linear O(n) complexity.
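
As a quick check of the 2n - 1 estimate, here is a small C sketch of my own (not from the answer) that counts how many times the inner-loop body actually runs; the step of 2 in the inner loop only halves the constant, so the count still grows linearly in n:

#include <stdio.h>

int main(void)
{
    /* Count executions of the "constant time operations" for a few n. */
    for (long n = 1 << 4; n <= 1 << 20; n <<= 4) {
        long count = 0;
        for (long i = 1; i < n; i *= 2)
            for (long j = 0; j < i; j += 2)
                count++;                    /* stands in for the constant-time work */
        printf("n = %8ld: body ran %8ld times (count / n = %.2f)\n",
               n, count, (double)count / n);
    }
    return 0;
}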
Generally, we take the O time complexity to be the number of times the innermost loop is executed (and here we assume the innermost loop consists of statements of O(1) time complexity).
Consider your example. The first loop executes O(log N) times, and the second innermost loop executes O(N) times. If something O(N) is being executed O(log N) times, then yes, the final time complexity is just them multiplied: O(N log N).
Generally, this holds true with most nested loops: you can assume their big-O time complexity to be the time complexity of each loop, multiplied.
However, there are exceptions to this rule, for example when a break statement is involved. If the loop has the possibility of breaking out early, the time complexity will be different.
Take a look at this example I just came up with:
for (int i = 1; i <= n; ++i) {
    int x = i;
    while (true) {
        x = x / 2;
        if (x == 0) break;
    }
}
Well, the innermost loop is O(infinity), so can we say that the total time complexity is O(N) * O(infinity) = O(infinity)? No. In this case we know the innermost loop will always break in O(log N), giving a total O(N log N) time complexity.
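
To see the O(N log N) bound concretely, here is a small counting sketch of my own around the same code, adding up how many times the while-loop body runs in total and comparing it with n * log2(n):

#include <stdio.h>
#include <math.h>

int main(void)
{
    long n = 1L << 16;
    long count = 0;
    for (long i = 1; i <= n; ++i) {
        long x = i;
        while (1) {                 /* always breaks after about log2(i) + 1 steps */
            x = x / 2;
            count++;
            if (x == 0) break;
        }
    }
    printf("n = %ld: total iterations = %ld, n * log2(n) = %.0f\n",
           n, count, n * log2((double)n));
    return 0;
}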

A probability theory problem in a skiplist's C implementation

These days I am looking at the skiplist code in Algorithms in C, Parts 1-4, and inserting a new value into the skiplist is more complex than I thought. During insert, the code should ensure that the new value is inserted into level i with probability 1/2^i, and this is implemented by the code below:
static int Rand()
{
    int i, j = 0;
    uint32_t t = rand();
    for (i = 1, j = 2; i < lg_n_max; i++, j += j)
        if (t > RANDMAX / j)
            break;
    if (i > lg_n)
        lg_n = i;
    return i;
}
I don't know how the Rand function ensures this; can you explain it for me? Thank you.
Presumably RANDMAX is intended to be RAND_MAX.
Neglecting rounding issues, half the return values of rand are above RAND_MAX / 2, and therefore half the time, the loop exits with i = 1.
If the loop continues, it updates i to 2 and j to 4. Then half the remaining return values (¾ of the total) are above RAND_MAX / 4, so, one-quarter of the time, the loop exits with i = 2.
Further iterations continue in the same manner, each iteration exiting with a portion of return values that is half the previous, until the lg_n_max limit is reached.
Thus, neglecting rounding issues and the final limit, the routine returns 1 half the time, 2 one-quarter of the time, 3 one-eighth the time, and so on.
lg_n is not defined in the routine. It appears to be a record of the greatest value returned by the routine so far.
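
If you want to see the distribution empirically, here is a rough C simulation of the same idea (RAND_MAX spelled correctly, a made-up LG_N_MAX constant standing in for lg_n_max, and the lg_n bookkeeping dropped); the printed frequencies should come out near 1/2, 1/4, 1/8, ...:

#include <stdio.h>
#include <stdlib.h>

#define LG_N_MAX 16   /* stand-in for the book's lg_n_max; an assumption for this sketch */

/* Same idea as the book's Rand(): keep halving the acceptance range
   until the random draw falls outside it. */
static int rand_level(void)
{
    int i, j;
    int t = rand();                       /* 0 .. RAND_MAX */
    for (i = 1, j = 2; i < LG_N_MAX; i++, j += j)
        if (t > RAND_MAX / j)
            break;
    return i;
}

int main(void)
{
    long counts[LG_N_MAX + 1] = {0};
    long trials = 1000000;
    for (long k = 0; k < trials; k++)
        counts[rand_level()]++;
    /* Expect roughly 1/2, 1/4, 1/8, ... of the trials at levels 1, 2, 3, ... */
    for (int level = 1; level <= 8; level++)
        printf("level %d: %.4f\n", level, (double)counts[level] / trials);
    return 0;
}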
Thanks very much to Eric Postpischil for his answer; I now understand how the probability is ensured. Here is the explanation that made it click for me:
t is a random value between 0 and RANDMAX. Assume the loop runs 2 times. In the first iteration, t is smaller than RANDMAX/2^1, meaning t falls into the range from 0 to RANDMAX/2; the probability of this is 1/2. In the second iteration, remembering that t is already known to lie in (0, RANDMAX/2^1), t is smaller than RANDMAX/2^2, meaning t falls into the range from 0 to RANDMAX/2^2; the probability of this is again 1/2, because the range (0, RANDMAX/2^2) is only half of the range established in the first iteration. Notice that the probability in the second iteration is a conditional probability: it is conditioned on the first iteration, so the overall probability after two iterations is 1/2 * 1/2 = 1/4.
In short, every additional iteration multiplies the previous iteration's probability by 1/2.

What is the time complexity of the following dependent loops?

I have a question that needs an answer before an exam I'm supposed to have this week.
i = 1;
while (i <= n)
{
    for (j = 1; j < i; j++)
        printf("*");
    j *= 2;
    i *= 3;
}
I have these dependent loops. I calculated the outer loop's big O to be O(log n).
The inner loop goes from 1 to i - 1 for every iteration of the outer loop.
The problem I'm having is that I do not know how to calculate the inner loop's time complexity, and then the overall complexity (I'm used to just multiplying both complexities, but I'm not sure about this one).
Thanks a lot!
P.S: I know that the j *= 2 doesn't affect the for loop.
As you recognized, computing the complexity of a loop nest where the bounds of an inner loop vary for different iterations of the outer loop is not as easy as a simple multiplication of two iteration counts. You need to look more deeply to get the tightest possible bound.
The question can be taken to be asking how many times the body of the inner loop is executed, as a function of n. On the first outer-loop iteration, i is 1, so j is never less than i, so there are no inner-loop iterations. Next, i is 3, so there are two inner-loop iterations, then eight the next time, then 26 ... in short, 3^(i-1) - 1 inner-loop iterations on the i-th iteration of the outer loop. You need to add those all up to compute the overall complexity.
Well, that sum is Σ_{i=1}^{⌊log n⌋} (3^(i-1) - 1), where the logarithm is base 3 because i triples on each outer iteration, so you could say that the complexity of the loop nest is
O(Σ_{i=1}^{⌊log n⌋} (3^(i-1) - 1)),
but such an answer is unlikely to get you full marks.
We can simplify that by observing that our sum is bounded by a related one:
= O(Σ_{i=1}^{⌊log n⌋} 3^(i-1)).
At this point (if not sooner) it would be useful to recognize the sum-of-powers pattern therein. It is often useful to know that 2^0 + 2^1 + ... + 2^(k-1) = 2^k - 1. This is closely related to base-2 numeric representation, and a similar formula can be written for any other natural-number base. For example, for base 3, it is 2*3^0 + 2*3^1 + ... + 2*3^(k-1) = 3^k - 1. This might be enough for you to intuit the answer: the total number of inner-loop iterations is bounded by a constant multiple of the number of inner-loop iterations on the last iteration of the outer loop, which in turn is bounded by n.
But if you want to prove it, then you can observe that the sum in the previous bound expression is itself bounded by a related definite integral:
= O(∫_{0}^{log n} 3^i di)
... and that has a closed-form solution:
= O((3^(log n) - 3^0) / ln 3),
which clearly has a simpler bound itself,
= O(3^(log n)).
Exponentials of logarithms reduce to linear functions of the logarithm argument: since the logarithm here is base 3, 3^(log n) is just n. Since we need only an asymptotic bound, we don't care about the details, and thus we can go straight to
= O(n)
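
Here is a short counting sketch of my own (not part of the answer) that tallies how many times printf("*") would run, dropping the j *= 2 line, which the question notes has no effect on the for loop; it shows the total stays within a small constant factor of n:

#include <stdio.h>

int main(void)
{
    /* Tally the inner-loop body executions instead of printing stars. */
    for (long n = 10; n <= 1000000; n *= 10) {
        long stars = 0;
        long i = 1;
        while (i <= n) {
            for (long j = 1; j < i; j++)
                stars++;                  /* stands in for printf("*") */
            i *= 3;
        }
        printf("n = %8ld: %8ld stars (stars / n = %.2f)\n",
               n, stars, (double)stars / n);
    }
    return 0;
}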

Finding the Complexity of Nested Loops

I'm given the loop pseudocode:
where "to" is equivalent to "<="
sum = 0;
for i = 1 to n
    for j = 1 to i^3
        for k = 1 to j
            sum++
I know the outermost loop runs n times.
Do the two inner loops also run n times though? (Making the entire complexity O(n^3).)
Where, for instance, n = 5. Then:
i: 1 <= 5,        2 <= 5, ...
j: 1 <= 1^3 = 1,  2 <= 2^3 = 8, ...
k: 1 <= 1,        2 <= 2, ...
And this would continue n times for each loop, making it n^3?
This seems like a tricky problem; those inner loops are more complex than just n.
The outer loop is n.
The next loop goes to i^3. At the end of the outer loop i will be equal to n. This means that this loop at the end will be at n^3. Technically it would be (n^3)/2, but we ignore that since this is Big O.
The third loop goes to j, but at the end of the previous loop j will be equal to i^3. And we already determined that i^3 was equal to n^3.
So it looks like:
1st loop: n
2nd loop: n^3
3rd loop: n^3
Which looks like it comes to n^7. I'd want someone else to verify this though. Gotta love Big O.
You can use Sigma notation to explicitly unroll the number of basic operations in the loop (let sum++ be a basic operation):
Σ_{i=1}^{n} Σ_{j=1}^{i^3} Σ_{k=1}^{j} 1 = Σ_{i=1}^{n} Σ_{j=1}^{i^3} j = Σ_{i=1}^{n} i^3 (i^3 + 1) / 2    (i)
= (1/14) n^7 + lower-order terms    (ii)
Where
(i): partial sum formula Σ_{j=1}^{m} j = m(m + 1)/2 (from Wolfram Alpha).
(ii): expanding the expression (from Wolfram Alpha); the leading term is n^7 / 14.
Hence, the complexity, using Big-O notation, is O(n^7).
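
If you want to sanity-check the O(n^7) result, here is a brute-force C sketch of my own that counts the sum++ executions for small n and compares them with n^7 / 14, the leading term of the closed form above:

#include <stdio.h>

int main(void)
{
    /* Brute-force count of sum++ executions, compared against n^7 / 14. */
    for (long n = 4; n <= 16; n *= 2) {
        long long sum = 0;
        for (long long i = 1; i <= n; i++)
            for (long long j = 1; j <= i * i * i; j++)
                for (long long k = 1; k <= j; k++)
                    sum++;
        double n7 = 1.0;
        for (int p = 0; p < 7; p++)
            n7 *= (double)n;
        printf("n = %2ld: sum = %12lld, n^7 / 14 = %12.0f\n", n, sum, n7 / 14.0);
    }
    return 0;
}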

What is the complexity of the following code?

Find the time complexity of the following code.
The answer given is O(log(n) * n^(1/2)), but I am not getting it.
I want someone to explain this.
i = n;
while (i > 0)
{
    k = 1;
    for (j = 1; j <= n; j += k)
        k++;
    i = i / 2;
}
Take this code segment:
k = 1;
for (j = 1; j <= n; j += k)
    k++;
The values of j over various iterations will be 1, 3, 6, 10, 15, 21, 28, ....
Note that these numbers have closed form (m+1)(m+2)/2, where m is the number of iterations that have gone by. If we want to know how many iterations this loop will run for, we need to solve (m+1)(m+2)/2 = n, which has solution m = (sqrt(8n + 1) - 3)/2 = O(sqrt(n)). So this loop will run O(sqrt(n)) times.
The outer loop will run O(log(n)) times (this is rather easy to see). So overall, we have O(log(n)sqrt(n)).
edit: Or perhaps easier than solving (m+1)(m+2)/2 = n directly would simply be to note that (m+1)(m+2)/2 = O(m^2), and so O(m^2) = n implies m = O(sqrt(n)).
The complexity would be:
(log n + 1) * (-1 + sqrt(1 + 8n)) / 2 = O(sqrt(n) * log n)
log n is in base 2.
Suppose n is 36.
The outer loop will iterate log n + 1 times, because the value is halved every time: 36, 18, 9, 4, 2, 1.
The inner loop has j values 1, 3, 6, 10, 15, 21, 28, 36. Every j value is a sum of terms of the AP 1 + 2 + 3 + 4 + 5 + ... + w = w(w+1)/2. So w(w+1)/2 = n. Solving this quadratic equation we get w = (-1 + sqrt(1 + 8n))/2, i.e. the number of iterations of the inner loop.
For n = 36, w = 8.
Total complexity thus comes out to be: log n * sqrt(n).
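
A quick empirical check (a sketch of my own, not from either answer): count the total number of inner-loop steps and compare against sqrt(n) * (log2(n) + 1); the two should agree up to a constant factor of roughly sqrt(2):

#include <stdio.h>
#include <math.h>

int main(void)
{
    for (long n = 1000; n <= 1000000; n *= 10) {
        long steps = 0;
        long i = n;
        while (i > 0) {
            long k = 1;
            for (long j = 1; j <= n; j += k) {
                k++;
                steps++;                  /* one step of the inner loop */
            }
            i = i / 2;
        }
        double estimate = sqrt((double)n) * (log2((double)n) + 1);
        printf("n = %8ld: steps = %8ld, sqrt(n) * (log2(n) + 1) = %8.0f\n",
               n, steps, estimate);
    }
    return 0;
}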
