complexity for a nested loop with varying internal loop - c

These are very similar complexity examples, and I am trying to understand how they differ. Exam coming up tomorrow :( Are there any shortcuts for finding the complexities here?
CASE 1:
void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j += 1) {}
        N = N / 2;
    }
}
CASE 2:
void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j *= 4) {}
        N = N / 2;
    }
}
CASE 3:
void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j *= 2) {}
        N = N / 2;
    }
}
Thank you so much!

void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j += 1) {}
        N = N / 2;
    }
}
To find the O() of this, notice that N is halved on each pass of the while loop. So (not to insult your intelligence, but for completeness) on the final non-zero pass we have N = 1, on the pass before that N ≈ 2, before that N ≈ 4, and so on; writing the value on each pass as a*2^k, with the constant 0 < a ≤ 1 absorbing the rounding from integer division, the while loop executes about log(N) passes in total, and on the first pass N = a*2^(floor(log(N))).
Why do we care about that? Well, it's a geometric series which has a nice closed form:
Sum = sum_{k=0}^{log(N)} a*2^k = a * (2^(log(N)+1) - 1) / (2 - 1) = 2aN - a = O(N).
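If it helps to sanity-check that sum, here is a small counting sketch (my own addition, not part of the original answer): it tallies how many times the inner-loop body of CASE 1 runs and compares the total with the 2N estimate from the geometric series.

#include <stdio.h>

/* Counts the inner-loop iterations of CASE 1 so the O(N) bound
   can be checked against the 2N estimate from the geometric series. */
long long count_case1(long long N) {
    long long total = 0;
    while (N) {
        for (long long j = 0; j < N; j += 1)
            total++;
        N = N / 2;
    }
    return total;
}

int main(void) {
    for (long long N = 1024; N <= 1048576; N *= 32)
        printf("N = %lld, iterations = %lld, 2N = %lld\n",
               N, count_case1(N), 2 * N);
    return 0;
}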

You already have the answer to number 1, O(N), as given by #NickO; here is an alternative explanation.
Denote the total number of inner-loop iterations by T(N), and let the number of outer-loop iterations be h. Note that h = log_2(N).
T(N) = N + N/2 + ... + N/(2^i) + ... + 2 + 1
     < 2N (sum of a geometric series)
which is in O(N).
Number 3 is O((log N)^2).
As before, let T(N) be the total number of inner-loop iterations, with h = log_2(N) outer iterations.
T(N) = log(N) + log(N/2) + log(N/4) + ... + log(1)
     = log(N * (N/2) * (N/4) * ... * 1)          (because log(a) + log(b) = log(a*b))
     = log(N^h * (1 * 1/2 * 1/4 * ... * 1/N))
     = log(N^h) + log(1 * 1/2 * 1/4 * ... * 1/N) (because log(a*b) = log(a) + log(b))
     < log(N^h) + log(1)                         (the product of the fractions is at most 1)
     = log(N^h)                                  (log(1) = 0)
     = h * log(N)                                (log(a^b) = b*log(a))
     = (log(N))^2                                (because h = log_2(N))
Number 2 is almost identical to number 3: the inner loop runs about log_4(N) times instead of log_2(N), which only changes the constant factor.
(In 2 and 3 this assumes j starts from 1, not from 0; if j really starts at 0, then, as #WhozCraig points out, the inner loop never terminates, because 0 multiplied by anything stays 0.)
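A quick empirical check of cases 2 and 3 (my own sketch, with j starting at 1 as assumed above) shows the iteration counts growing roughly like (log N)^2, up to constant factors:

#include <stdio.h>

/* Counts inner-loop iterations for cases 2 and 3, with j starting at 1
   (starting at 0 would never terminate, since 0 * factor stays 0). */
long long count_case(long long N, int factor) {
    long long total = 0;
    while (N) {
        for (long long j = 1; j < N; j *= factor)
            total++;
        N = N / 2;
    }
    return total;
}

int main(void) {
    for (long long N = 1024; N <= 1048576; N *= 32)
        printf("N = %lld: case 2 = %lld, case 3 = %lld\n",
               N, count_case(N, 4), count_case(N, 2));
    return 0;
}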


The time complexity answer to the question confuses me - n^(2/3)

I'm trying to figure out why the time complexity of this code is n^(2/3). The space complexity is log n, but I don't know how to continue the time complexity calculation (or if it's right).
int g2 (int n, int m)
{
    if (m >= n)
    {
        for (int i = 0; i < n; ++i)
            printf("#");
        return 1;
    }
    return 1 + g2 (n / 2, 4 * m);
}
int main (int n)
{
    return g2 (n, 1);
}
As long as m < n, you perform an O(1) operation: making a recursive call. You halve n and quadruple m, so after k steps, you get
n(k) = n(0) * 0.5^k
m(k) = m(0) * 4^k
You can set them equal to each other to find that
n(0) / m(0) = 8^k
Taking the log
log(n(0)) - log(m(0)) = k log(8)
or
k = log_8(n(0)) - log_8(m(0))
On the kth recursion you perform n(k) loop iterations.
You can plug k back into n(k) = n(0) * 0.5^k to estimate the number of iterations. Let's ignore m(0) for now:
n(k) = n(0) * 0.5^log_8(n(0))
Taking again the log of both sides,
log_8(n(k)) = log_8(n(0)) + log_8(0.5) * log_8(n(0))
Since log_8(0.5) = -1/3, you get
log_8(n(k)) = log_8(n(0)) * (2/3)
Taking the exponent again:
n(k) = n(0)^(2/3)
Since any positive exponent will overwhelm the O(log(n)) recursion, your final complexity is indeed O(n^(2/3)).
Let's look for a moment at what happens if m(0) > 1.
n(k) = n(0) * 0.5^(log_8(n(0)) - log_8(m(0)))
Again taking the log:
log_8(n(k)) = log_8(n(0)) - 1/3 * (log_8(n(0)) - log_8(m(0)))
log_8(n(k)) = log_8(n(0)^(2/3)) + log_8(m(0)^(1/3))
So you get
n(k) = n(0)^(2/3) * m(0)^(1/3)
Or
n(k) = (m n^2)^(1/3)
Quick note on corner cases in the starting conditions:
For m > 0:
If n <= 0, then m >= n is immediately true, so the recursion terminates and the loop body never executes (since n <= 0).
For m < 0:
If m >= n, the recursion terminates immediately and the loop body never executes. If n > m, n will converge to zero while m diverges, so m >= n never becomes true and the algorithm runs forever.
The only interesting case is m == 0 with n > 0 (for n <= 0, m >= n holds immediately). Then n is halved until integer truncation brings it to zero, so the recursion depth depends on when n reaches 1:
n(0) * 0.5^k = 1
log_2(n(0)) - k = 0
So in this case, the runtime of the recursion is still O(log(n)). The loop does not run.
m starts at 1, and at each step n -> n/2 and m -> m*4 until m >= n. After k steps, n_final = n/2^k and m_final = 4^k. So the final value of k is where n/2^k = 4^k, i.e. k = log8(n).
When this is reached, the inner loop performs n_final (approximately equal to m_final) steps, leading to a complexity of O(4^k) = O(4^log8(n)) = O(4^(log4(n)/log4(8))) = O(n^(1/log4(8))) = O(n^(2/3)).
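To see that growth concretely, here is a rough check (my own sketch, not from either answer): it counts how many characters g2 would print and compares the count with n^(2/3); the two grow at the same rate, up to a constant factor.

#include <stdio.h>
#include <math.h>

/* Same recursion as g2, but it counts the printf("#") calls instead of printing. */
long long work(long long n, long long m) {
    if (m >= n)
        return n > 0 ? n : 0;   /* the for loop runs n times (0 if n <= 0) */
    return work(n / 2, 4 * m);
}

int main(void) {
    for (long long n = 1000; n <= 10000000; n *= 100)
        printf("n = %lld, loop iterations = %lld, n^(2/3) = %.0f\n",
               n, work(n, 1), pow((double)n, 2.0 / 3.0));
    return 0;
}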

Finding out complexity of a program when we use while loop

What will be the time complexity for the following code?
int fun1(int n) {
    int i = 1;
    int count = 0;
    while (i < n) {
        count++;
        i = i * 2;
    }
    printf("Loop ran %d times\n", count);
    return 0;
}
All statements are O(1), and the loop does log(n) (base 2) iterations since i doubles itself (i = i * 2) every iteration, so it's O(log(n)) (base 2).
You can find more information here: What is time complexity of while loops?
The time complexity of the above code is O(log(n)).
int fun1(int n) {
    int i = 1;
    int count = 0;
    // Here i runs from 1 to n
    // but i doubles every time
    // i = 1 2 4 8 16 .... n
    // Hence O(log(n))
    while (i < n) {
        count++;
        i = i * 2;
    }
    printf("Loop ran %d times\n", count);
    return 0;
}
Suppose n = 16 = 2^4. In that case the loop will run only 4 times, with i = 1, 2, 4, 8, i.e. log2(16) iterations.
Look at this part of your code:
while (i < n) {
    count++;
    i = i * 2;
}
i is multiplied by 2 in every iteration.
Initially, i is 1.
Iteration I:
i = 1 * 2; => i = 2
Iteration II:
i = 2 * 2; => i = 4
Iteration III:
i = 4 * 2; => i = 8
Iteration IV:
i = 8 * 2; => i = 16
.....
.....
and so on..
Assume n is equal to 2^k, which means the loop will execute k times. At the kth step:
2^k = n
Taking logarithms (base 2) on both sides:
log(2^k) = log(n)
k log(2) = log(n)
k = log(n) [as log(2) = 1 in base 2]
Hence, time complexity is O(log(n)).
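As a small sanity check (my own addition, not part of the answers above), the loop count matches ceil(log2(n)) for n > 1:

#include <stdio.h>

/* Counts the iterations of the while loop from fun1. */
int loop_count(int n) {
    int i = 1, count = 0;
    while (i < n) {
        count++;
        i = i * 2;
    }
    return count;
}

int main(void) {
    printf("n = 16   -> %d iterations\n", loop_count(16));   /* 4 = log2(16)          */
    printf("n = 1000 -> %d iterations\n", loop_count(1000)); /* 10 = ceil(log2(1000)) */
    return 0;
}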

Work out the overall progress (%) of three nested for loops

I have three nested for loops, each of which obviously has a limit. To calculate the progress of any one of the three for loops, all I need to do is divide the current iteration by the total number of iterations that the loop will make. However, given that there are three different for loops, how can I work out the overall percentage complete?
int iLimit = 10, jLimit = 24, kLimit = 37;
for (int i = 0; i < iLimit; i++) {
    for (int j = 0; j < jLimit; j++) {
        for (int k = 0; k < kLimit; k++) {
            printf("Percentage Complete = %d", percentage);
        }
    }
}
I tried the following code, but it reset after the completion of each loop, reaching a percentage greater than 100.
float percentage = ((i + 1) / (float)iLimit) * ((j + 1) / (float)jLimit) * ((k + 1) / (float)kLimit) * 100;
You can easily calculate the "change in percentage per inner cycle":
const double percentChange = 1.0 / iLimit / jLimit / kLimit;
Note, this is mathematically equivalent to 1/(iLimit*jLimit*kLimit); however, if iLimit*jLimit*kLimit is sufficiently large, you'll have an overflow and unexpected behavior. It's still possible to have an underflow with the 1.0/... approach, but it's far less likely.
int iLimit = 10, jLimit = 24, kLimit = 37;
const double percentChange = 1.0 / iLimit / jLimit / kLimit;
double percentage = 0;
for (int i = 0; i < iLimit; i++) {
    for (int j = 0; j < jLimit; j++) {
        for (int k = 0; k < kLimit; k++) {
            percentage += percentChange;
            printf("Percentage Complete = %d\n", (int)(percentage * 100));
        }
    }
}
If I understand your question right, the counter variables at each level (i.e. i, j, k) should have different weights in the percentage formula. Let me explain what I mean: each increment of j corresponds to kLimit iterations of the innermost loop. So, if you had only one level of nesting (say the outermost loop using i were not present), the total number of loop iterations would be kLimit*jLimit and the percentage:
percentage = (100.0 * (j*kLimit + k + 1)) / (float)(kLimit*jLimit)
You get the idea? It's easy to generalize this concept to the required level of nesting. Anyway, here is the final formula:
percentage = 100.0 * (kLimit * (i * jLimit + j) + k + 1) / (iLimit * jLimit * kLimit)
The total number of iterations is iLimit * jLimit * kLimit, so if you increment a counter in the inner loop, you can just print
100 * counter / (iLimit * jLimit * kLimit)
Since you are using %d to print the percentage, you can limit everything to integer calculations. (And it avoids seeing meaningless 'exact' values such as 0.011261 for the first step.)
If you want to see properly rounded values, you can also use this:
printf("Percentage Complete = %d%%\r", (counter*200+iLimit * jLimit * kLimit) /
(2 * iLimit * jLimit * kLimit));
The \r at the end is a small refinement so each line will overprint the previous one.
Try this one:
int iLimit = 10, jLimit = 24, kLimit = 37;
float percentage;
for (int i = 0; i < iLimit; i++) {
    for (int j = 0; j < jLimit; j++) {
        for (int k = 0; k < kLimit; k++) {
            percentage = ((k+1) + j * kLimit + i*jLimit*kLimit) / (float)(iLimit*jLimit*kLimit) * 100;
            printf("Percentage Complete = %f\n", percentage);
        }
    }
}
This solution is very similar to the counter incrementation solution posted here.
The advantage of this solution is that I supplied a formula for the counter which depends on i, j, k and the limits iLimit, jLimit, kLimit:
counter = (k+1) + j * kLimit + i*jLimit*kLimit
This way, you can find out the percentage when you know i, j, k without iterating through the loops.
Thus you can possibly reduce an O(iLimit * jLimit * kLimit) problem to an O(1) problem.
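As a small illustration of that O(1) point (my own sketch), the formula can be wrapped in a function that reports progress for any (i, j, k) without running the loops:

#include <stdio.h>

/* Progress in percent for position (i, j, k) of the three nested loops,
   computed in O(1) from the loop limits. */
float progress(int i, int j, int k, int iLimit, int jLimit, int kLimit) {
    int counter = (k + 1) + j * kLimit + i * jLimit * kLimit;
    return counter / (float)(iLimit * jLimit * kLimit) * 100.0f;
}

int main(void) {
    /* Just after finishing the first inner iteration of i = 5: a bit over 50%. */
    printf("%.2f%%\n", progress(5, 0, 0, 10, 24, 37));
    return 0;
}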
Remember that percentage is parts of 100.
To get 100 you need to do e.g. (iLimit * jLimit * kLimit) / (iLimit * jLimit * kLimit) * 100.
Each iteration of the loop takes 1 / (iLimit * jLimit * kLimit) parts of the whole.
To get the percentage, "simply" do e.g.
float percentage = ++counter / (float) (iLimit * jLimit * kLimit) * 100.0;
Remember to declare and initialize the counter variable before the loops.
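Putting that together, here is a sketch of the counter-based version described in this answer (my own assembly of the pieces, not code from the original post):

#include <stdio.h>

int main(void) {
    int iLimit = 10, jLimit = 24, kLimit = 37;
    int counter = 0;                       /* declared and initialized before the loops */
    int total = iLimit * jLimit * kLimit;  /* total number of inner iterations */

    for (int i = 0; i < iLimit; i++) {
        for (int j = 0; j < jLimit; j++) {
            for (int k = 0; k < kLimit; k++) {
                float percentage = ++counter / (float)total * 100.0f;
                printf("Percentage Complete = %.2f%%\r", percentage);
            }
        }
    }
    printf("\n");
    return 0;
}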

How is T(n) of the code O(nlog(n))? [duplicate]

I don't get the part where T(n) of the second for loop is log(n). Both loops are connected by i, and that is confusing. How is T(n) of this code O(n log(n)) using the fundamental product rule?
for(i = 1; i <= n; i++)
{
    for(j = 1; j <= n; j = j + i)
    {
        printf("Hi");
    }
}
For i=1 the inner loop executes n times, for i=2 it executes about n/2 times, and so on. The running time is T(n) = n + n/2 + n/3 + ... + n/n. This is equal to n(1 + 1/2 + 1/3 + ... + 1/n) = nH(n), where H(n) is the nth harmonic number. H(n) = Θ(log n), hence the running time is O(n log n).
for(i = 1; i <= n; i++)   // Executes n times
{
    for(j = 1; j <= n; j = j + i)
    {
        // This inner loop executes about n/i times, since j increases by i each step
        printf("Hi");
    }
}
The inner loop executes about n/i times for each value of i, so the total running time is Sum(n/i) over i in [1, n] = n * Sum(1/i) = O(n log n).
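For a concrete check (a small sketch of my own, not from the answers above), the iteration count grows like n * ln(n), matching the harmonic-series argument:

#include <stdio.h>
#include <math.h>

/* Counts how many times printf("Hi") would run in the nested loops. */
long long count_hi(long long n) {
    long long total = 0;
    for (long long i = 1; i <= n; i++)
        for (long long j = 1; j <= n; j = j + i)
            total++;
    return total;
}

int main(void) {
    for (long long n = 1000; n <= 1000000; n *= 10)
        printf("n = %lld, count = %lld, n*ln(n) = %.0f\n",
               n, count_hi(n), n * log((double)n));
    return 0;
}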

Finding pow(a^b)modN for a range of a's

For given b and N and a range of a, say (0...n),
I need to find ans(0...n-1)
where
ans[i] = the number of a's for which pow(a, b) mod N == i
What I am looking for here is possible repetition in pow(a, b) mod N over the range of a, to reduce computation time.
Example:-
if b = 2 N = 3 and n = 5
for a in (0...4):
ans[pow(a, b) mod N]++;
so that would be
pow(0,2) mod 3 = 0
pow(1,2) mod 3 = 1
pow(2,2) mod 3 = 1
pow(3,2) mod 3 = 0
pow(4,2) mod 3 = 1
so the final results would be:
ans[0] = 2 // number of times we have found 0 as the answer
ans[1] = 3
...
Your algorithm has a complexity of O(n), meaning it takes a lot of time when n gets bigger. You could get the same result with an O(N) algorithm. Since N << n, this reduces your computation time.
First, two math facts:
pow(a, b) mod N == pow(a mod N, b) mod N
and, for 0 <= i < N, the number of values a in [0, n) with a mod N == i is
if (i < n mod N)
    count(i) = (n div N) + 1
else
    count(i) = (n div N)
So a solution to your problem is to fill your result array with the following loop (here modpow(i, b, N) stands for integer modular exponentiation, i.e. (i^b) mod N; C's floating-point pow cannot be combined with %, see the sketch below):
int nModN = n % N;
int nDivN = n / N;
for (int i = 0; i < N; i++)
{
    if (i < nModN)
        ans[modpow(i, b, N)] += nDivN + 1;
    else
        ans[modpow(i, b, N)] += nDivN;
}
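A minimal sketch of such a modpow helper (my own illustration, not code from the original answer), using binary square-and-multiply exponentiation:

/* Returns (a^b) mod N using square-and-multiply; avoids the precision
   and overflow problems of calling the floating-point pow for large b. */
long long modpow(long long a, long long b, long long N) {
    long long result = 1 % N;
    a %= N;
    while (b > 0) {
        if (b & 1)
            result = result * a % N;
        a = a * a % N;
        b >>= 1;
    }
    return result;
}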
You could calculate pow for primes only, and use pow(a*b, n) == pow(a, n) * pow(b, n). So if pow(2,2) mod 3 == 1 and pow(3,2) mod 3 == 0, then pow(6,2) mod 3 == (1 * 0) mod 3 == 0.
