Time Complexity of an Algorithm (Nested Loops) - c

I'm trying to figure out the time complexity of this algorithm, given in pseudocode:
sum = 0;
for (i = 1; i <= n; i++)
    for (j = 1; j <= n / 6; j++)
        sum = sum + 1;
I know that the first loop runs n times, but I'm not sure about the second one.

Using Sigma notation, we can find the asymptotic bounds of your algorithm as follows:
Σ_{i=1}^{n} Σ_{j=1}^{n/6} 1 = Σ_{i=1}^{n} n/6 = n * (n/6) = n²/6 ∈ Θ(n²)

Here you have a simple double loop:
for (i = 1; i <= n; i++)
    for (j = 1; j <= n/6; j++)
so if you count how many times the body of the loop is executed (i.e. how many times the line sum = sum + 1; runs), you will see that it is:
n*n/6 = n²/6
which in terms of big-O notation is:
O(n²)
because we do not really care about the constant factor: as n grows, the constant makes no (big) difference whether it's there or not!
When, and only when, you fully grasp the above, you can go deeper with this nice question: Big O, how do you calculate/approximate it?
However, please note that such questions are more appropriate for Theoretical Computer Science Stack Exchange than for SO.

You perform n*n/6 operations; thus the time complexity is O(n^2/6) = O(n^2).
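If you want to verify the count empirically, here is a minimal C sketch (the value of n is arbitrary) that executes the same double loop and compares the body count against n*(n/6), using integer division as in the pseudocode:

#include <stdio.h>

int main(void) {
    int n = 60;                     /* example size; any n will do */
    long long count = 0;            /* how many times the body runs */

    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n / 6; j++)
            count++;                /* stands in for sum = sum + 1; */

    /* with integer division, the loop body runs exactly n * (n/6) times */
    printf("body executed %lld times; n*(n/6) = %d\n", count, n * (n / 6));
    return 0;
}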

Related

Big O - Why is this algorithm O(AxB)?

I am unsure why this code evaluates to O(A*B):
void printUnorderedPairs(int[] arrayA, int[] arrayB) {
    for (int i = 0; i < arrayA.length; i++) {
        for (int j = 0; j < arrayB.length; j++) {
            for (int k = 0; k < 100000; k++) {
                System.out.println(arrayA[i] + "," + arrayB[j]);
            }
        }
    }
}
Sure, more precisely it's O(100000*A*B), and we would drop the 100000, making it O(A*B). But what if arrayA had a length of 2? Wouldn't the 100000 iterations be more significant? Is it just because we know the final loop is constant (and its value is shown) that we don't count it? What if we knew all of the arrays' sizes?
Can anyone explain why we would not say it's O(ABC)? What would be the runtime if I made the code this:
int[] arrayA = new int[20];
int[] arrayB = new int[500];
int[] arrayC = new int[100000];

void printUnorderedPairs(int[] arrayA, int[] arrayB) {
    for (int i = 0; i < arrayA.length; i++) {
        for (int j = 0; j < arrayB.length; j++) {
            for (int k = 0; k < arrayC.length; k++) {
                System.out.println(arrayA[i] + "," + arrayB[j]);
            }
        }
    }
}
If the running time (or number of execution steps, or number of times println gets called, or whatever you are assessing with your Big O notation) is O(AB), it means that the running time approaches being linearly proportional to AB as AB grows without bound (approaches infinity). It is literally a limit to infinity, in terms of calculus.
Big O is not concerned with what happens for any finite number of iterations. It's about the limiting behaviour of the function as its free variables approach infinity. Sure, for small values of A there could very well be a constant term that dominates execution time. But as A approaches infinity, all those other factors become insignificant.
Consider a polynomial like Ax^3 + Bx^2 + Cx + D. It will be proportional to x^3 as x grows to infinity, regardless of the magnitude of A, B, C, or D. B can be Graham's number for all Big O cares; infinity is still way bigger than any big finite number you pick, and therefore the x^3 term dominates.
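As a quick numeric illustration (coefficients chosen arbitrarily for this sketch, with B deliberately huge), dividing such a polynomial by x^3 approaches A as x grows:

#include <stdio.h>

int main(void) {
    /* arbitrary example coefficients; B is deliberately enormous */
    double A = 2.0, B = 1e6, C = 3.0, D = 7.0;

    for (double x = 1e2; x <= 1e10; x *= 100) {
        double p = A*x*x*x + B*x*x + C*x + D;
        /* p / x^3 approaches A as x grows, despite the huge B */
        printf("x = %.0e    p/x^3 = %f\n", x, p / (x*x*x));
    }
    return 0;
}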
So first, considering what happens if A were 2 is not really in the spirit of AB approaching infinity. Any number you can fit on a whiteboard basically rounds down to zero.
And second, remember that proportional to AB means equal to AB times some constant; and it doesn't matter what that constant is. It is fine if the constant happens to be 10000. Saying something is proportional to 2N is the same as saying it is proportional to N, or any other number times N. So O(2N) is the same as O(N). By convention we always simplify when using Big-O notation to drop any constant factors. So we would always write O(N), and never O(2N). And for that same reason, we would write O(AB) and not O(10000AB).
And finally, we don't say O(ABC) only because "C" (the number of iterations of your inner loop) happens to be a constant, which here equals 100000. That's why we say it's O(AB) and not O(ABC): C is not a free variable; it's hard-coded to 100000. If the size of B were not expected to change (were constant for whatever reason), then you could say the whole thing is simply O(A). But if you allow B to grow without bound, then the limit is O(AB), and if you also allow C to grow without bound, then the limit is O(ABC). You get to decide which numbers are constants and which are free variables, depending on the context of your analysis.
You can read more about Big O notation at Wikipedia.
Appreciate that the for loops in i and j are independent of each other, so their combined running time is O(A*B). The inner loop in k runs a fixed number of iterations, 100000, and is also independent of the two outer loops, giving O(100000*A*B). But since the k loop contributes only a constant (non-variable) factor, we are still left with O(A*B) for the overall complexity.
If you were to write the inner loop in k from 0 to C, then you could write O(A*B*C) for the complexity, and that would be valid as well.
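To make the distinction concrete, here is a small C sketch (function names invented for illustration): the first version hard-codes the inner bound, so it is a constant factor and the count is O(a*b); the second takes the bound as a free variable c, so the count is O(a*b*c):

#include <stdio.h>

#define INNER 100000    /* hard-coded: a constant factor, not a free variable */

/* counts loop-body executions with the inner bound fixed at compile time */
long long pairs_fixed(int a, int b) {
    long long steps = 0;
    for (int i = 0; i < a; i++)
        for (int j = 0; j < b; j++)
            for (int k = 0; k < INNER; k++)
                steps++;            /* O(a*b) once the constant is dropped */
    return steps;
}

/* counts loop-body executions with the inner bound as a free variable c */
long long pairs_free(int a, int b, int c) {
    long long steps = 0;
    for (int i = 0; i < a; i++)
        for (int j = 0; j < b; j++)
            for (int k = 0; k < c; k++)
                steps++;            /* O(a*b*c): c may grow without bound */
    return steps;
}

int main(void) {
    printf("%lld %lld\n", pairs_fixed(2, 3), pairs_free(2, 3, INNER));
    return 0;
}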
Generally the exact split between A and B doesn't matter: you can treat the number of pairs A*B as a single input size N, and the runtime is O(N) in that sense.
If it were known that A and B were always roughly the same length, then one could argue that it's really O(N^2), with N being that common length.
Any sort of constant doesn't really matter in order notation, because for really, really large values of A and B the constant becomes negligible.
void printUnorderedPairs(int[] arrayA, int[] arrayB) {
    for (int i = 0; i < arrayA.length; i++) {
        for (int j = 0; j < arrayB.length; j++) {
            for (int k = 0; k < 100000; k++) {
                System.out.println(arrayA[i] + "," + arrayB[j]);
            }
        }
    }
}
This code evaluates to O(AB), because the innermost loop runs a constant number of times (100000). Of course, its actual run time is proportional to A*B*100000. We never care about constant factors, because once the variables get very large (say 10^10000), the constants can be safely ignored.
In the second code, we say it's O(1), because all the arrays have constant length, so the run time can be computed without any free variable.

How is this loop's time complexity O(n^2)?

for (int i = n; i > 0; i -= c)
{
    for (int j = i+1; j <= n; j += c)
    {
        // some O(1) expressions
    }
}
Can anyone explain?
Assumptions
n > 0
c > 0
First loop
The first loop starts with i = n and at each step it subtracts c from i. On one hand, if c is big, the first loop iterates only a few times (try n = 50, c = 20 and you will see). On the other hand, if c is small (say c = 1), it iterates n times. In general, it runs about n/c times.
Second loop
The same reasoning applies to the second loop: if c is big, it iterates only a few times; if c is small, many times, and in the worst case about n times. In general, it runs about (n - i)/c times.
Combined / Big O
Big O notation gives you an upper bound on the time complexity of an algorithm. In your case, combining the bounds of the first and second loop gives O((n/c) * (n/c)) = O(n^2/c^2), and since c is a constant this is O(n^2).
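A small C sketch (n and c are arbitrary example values) that counts the body executions and compares them against the rough estimate n*n/(2*c*c); the 1/(2c^2) is exactly the kind of constant factor that disappears in the O(n^2) bound:

#include <stdio.h>

int main(void) {
    int n = 1000, c = 3;    /* example values satisfying n > 0, c > 0 */
    long long count = 0;

    for (int i = n; i > 0; i -= c)
        for (int j = i + 1; j <= n; j += c)
            count++;        /* one execution of the O(1) body */

    /* about n/c outer iterations, each with about (n-i)/c inner ones,
       summing to roughly n*n/(2*c*c) */
    printf("count = %lld, n*n/(2*c*c) = %lld\n",
           count, (long long)n * n / (2LL * c * c));
    return 0;
}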

Confused with Big O Notation

So I get that the first for loop runs O(n) times, then inside that it runs 3 times, then 3 times again. How do I express this in big-O notation, though? And do the 2 print statements matter? How do I add them to my big-O expression? Thanks, I'm really confused and appreciate any help.
for (int x = 0; x < n; x++) {
    for (int j = 0; j < 3; j++) {
        for (int k = 0; k < 3; k++) {
            printf("%d", arr[x]);
        }
        printf("\n");
    }
}
O(n) is linear time, so any k * O(n) where k is a constant (like in your example) is also linear time and is just expressed as O(n). Your example has O(n) time complexity.
Big O notation is always defined as a function of the input size n. Big O gives an upper limit on the total time taken to run that module. Because your inner for loops always run 3*3 = 9 times irrespective of the input size n, they are considered constant time in Big O calculations.
Time Complexity = O(n * (9 + constantTimeToPrint)) = O(n)
The two inner loops are constant, so it's still O(n). Constant factors don't matter; the runtime varies only with the input size.
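To see the constant factor concretely, here is a C sketch (a hypothetical counting harness, not the original program) that tallies how many printf calls the snippet would make for several values of n; the total is always exactly 12n, which is linear:

#include <stdio.h>

int main(void) {
    /* per x, the j loop runs 3 times and each pass makes 3 digit
       prints plus 1 newline: 12 calls per x, 12n in total */
    for (int n = 10; n <= 100000; n *= 10) {
        long long calls = 0;
        for (int x = 0; x < n; x++)
            for (int j = 0; j < 3; j++) {
                for (int k = 0; k < 3; k++)
                    calls++;    /* printf("%d", arr[x]) */
                calls++;        /* printf("\n") */
            }
        printf("n = %6d -> %lld calls (12n = %lld)\n",
               n, calls, 12LL * n);
    }
    return 0;
}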

Given an Array A[1..n] as input, fill in a second Array B[1..n] such that B[i] = A[1] + ... + A[i]

The running time should be O(n). This is what I came up with; is it correct? Thanks!
B[1] = A[1]
for i = 2 to n
    B[i] = B[i-1] + A[i]
Yes, this is correct. Your solution is the most trivial case of dynamic programming, a technique for greatly speeding up algorithms whose solutions can be constructed from solutions to smaller sub-problems.
Your problem at hand has precisely this property: a solution to B[n] can be constructed in O(1) if you are given a solution to B[n-1], and that is what your algorithm does, for a running time of O(n).
Solutions like that are very helpful in practice: I once used an algorithm like yours to speed up a piece of start-up code in my program from several minutes to several seconds (it was adding vectors, so it went from O(n^3) to O(n^2)).
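For reference, here is a runnable C version of the same prefix-sum idea (using 0-based indexing rather than the 1-based pseudocode above; the array values are just an example):

#include <stdio.h>

/* Fill B so that B[i] = A[0] + ... + A[i]; each B[i] is built from
   B[i-1] in O(1), so the whole pass is O(n). */
void prefix_sums(const int A[], int B[], int n) {
    if (n <= 0) return;
    B[0] = A[0];
    for (int i = 1; i < n; i++)
        B[i] = B[i - 1] + A[i];
}

int main(void) {
    int A[] = {3, 1, 4, 1, 5};
    int B[5];
    prefix_sums(A, B, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", B[i]);    /* prints: 3 4 8 9 14 */
    printf("\n");
    return 0;
}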

Finding Big-O with multiple nested loops?

int num = n/4;
for (int i = 1; i <= num; i++) {
    for (int j = 1; j <= n; j++) {
        for (int k = 1; k <= n; k++) {
            int count = 1;
        }
    }
}
According to the books I have read, this code should be O((n^3)/4). But apparently it's not. To find the Big-O of nested loops, are you supposed to multiply the bounds? So this one should be num * n * n, or n/4 * n * n.
O((n^3)/4) is redundant in terms of big-O notation, since Big-O measures the growth rate of the complexity up to a constant factor. Dividing by 4 changes the constant, not the nature of the growth.
All of these are equivalent:
O(n^3)
O(n^3/4)
O(n^3*1e6)
Other terms only make sense when they include an n term, such as:
O(n^3 / log(n))
O(n^3 * 10^n)
As Anthony Kanago rightly points out, it's convention to:
only keep the term with the highest growth rate for sums: O(n^2+n) = O(n^2).
get rid of constants for products: O(n^2/4) = O(n^2).
As an aside, I don't always agree with that first rule in all cases. It's a good rule for deciding the maximal growth rate of a function but, for things like algorithm comparison(a) where you can intelligently put a limit on the input parameter, something like O(n^4+n^3+n^2+n) is markedly worse than just O(n^4).
In that case, any term that depends on the input parameter should be included. In fact, even constant terms may be useful there. Compare for example O(n+1e100) against O(n^2): the latter will outperform the former for quite a while, until n becomes large enough for the n^2 term to overtake the constant.
(a) There are, of course, those who would say it shouldn't be used in such a way but pragmatism often overcomes dogmatism in the real world :-)
From http://en.wikipedia.org/wiki/Big_O_notation you can see that constants like the 1/4 play no role in determining the Big-O class. The only interesting fact is that the count grows like n^3, thus it is O(n^3).
Formally, the time complexity can be deduced as follows:
Σ_{i=1}^{n/4} Σ_{j=1}^{n} Σ_{k=1}^{n} 1 = (n/4) * n * n = n³/4 ∈ Θ(n³)
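The same count can be checked mechanically with a short C sketch (n chosen here as an arbitrary multiple of 4, so that n/4 is exact):

#include <stdio.h>

int main(void) {
    int n = 40;             /* arbitrary multiple of 4 */
    int num = n / 4;
    long long count = 0;

    for (int i = 1; i <= num; i++)
        for (int j = 1; j <= n; j++)
            for (int k = 1; k <= n; k++)
                count++;    /* the innermost O(1) statement */

    /* (n/4) * n * n = n^3/4: the 1/4 is just a constant factor */
    printf("count = %lld, (n/4)*n*n = %lld\n",
           count, (long long)(n / 4) * n * n);
    return 0;
}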
A small technicality: Big O notation is intended to describe complexity in terms of the size of the input, not its numeric value. If your input is a number, then the size of the input is the number of digits of that number. Alas, measured that way your algorithm is exponential: with N digits, n is roughly 10^N, so n^3 is roughly 10^(3N) steps.
More on this topic

Resources