Time complexity of an algorithm that contains a function in disguise - loops

def function1(arr):
    a = 0
    for i in range(len(arr)):
        if function2(arr, arr[i]):
            a += arr[i]
    return a

def function2(arr, x):
    for i in range(len(arr)):
        if arr[i] == x:
            return True
    return False
Is the time complexity of this algorithm n^2? The for loop in function1 executes n times, and in each iteration function2 is called, whose own for loop runs n times in the worst case. Hence n * n = n^2.

Your outer loop iterates |arr| times (meaning the length of arr), and the inner loop also iterates over arr, so up to |arr| times per call. The time complexity is therefore O(|arr| * |arr|), i.e. O(n^2) where n is the length of the array.
In general, if the two loops ran over different arrays arr and A, you would get O(|arr| * |A|). If those arrays have the same length n, then yes, it would be O(n^2). If their lengths depend on each other in some other manner, you can also simplify and express the complexity in terms of one variable n. If you have no such information, then the best you can do is express the time complexity using both variables.
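To see the quadratic growth concretely, here is a small illustrative sketch (the counter variable and the test array are my own additions, not part of the original question) that counts the comparisons performed by function2 across one full run of function1:

```python
comparisons = 0  # illustrative global counter, not in the original code

def function2(arr, x):
    """Linear scan for x in arr; O(n) per call in the worst case."""
    global comparisons
    for v in arr:
        comparisons += 1
        if v == x:
            return True
    return False

def function1(arr):
    """Sums every element that function2 finds in arr."""
    a = 0
    for x in arr:
        if function2(arr, x):
            a += x
    return a

arr = list(range(1, 11))  # 10 distinct elements
total = function1(arr)
# The lookup for the element at index k scans k+1 slots, so the total
# is 1 + 2 + ... + n = n(n+1)/2, which grows as O(n^2).
print(comparisons)  # 55 for n = 10
```

Note that because each lookup stops at the first match, the count is n(n+1)/2 rather than a full n*n; the 1/2 coefficient is dropped, and the complexity is still O(n^2).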

Related

What would be the time complexity of this algorithm?

I was wondering what the time complexity of this piece of code would be.
last = 0
ans = 0
array = [1, 2, 3, 3, 3, 4, 5, 6]
for number in array:
    if number != last:
        ans += 1
    last = number
return ans
I'm thinking O(n^2), as we look at all the array elements twice: once while executing the for loop, and a second time when comparing the two subsequent values. But I am not sure if my guess is correct.
While processing each array element, you make just one comparison, based on which you update ans and last. The complexity of the algorithm is therefore O(n), not O(n^2).
The answer is actually O(1) for this case, and I will explain why after explaining why a similar algorithm would be O(n) and not O(n^2).
Take a look at the following example:
def do_something(array):
    last = 0   # initialized here so the example is self-contained
    ans = 0
    for number in array:
        if number != last:
            ans += 1
        last = number
    return ans
We go through each item in the array once, and do two operations with it.
The rule for time complexity is you take the largest component and remove a factor.
If we actually wanted to calculate the exact number of operations, you might try something like:
for number in array:
    if number != last:   # n times
        ans += 1
    last = number        # n times
return ans               # 1 time
# total number of instructions = 2 * n + 1
Now, Python is a high level language so some of these operations are actually multiple operations put together, so that instruction count is not accurate. Instead, when discussing complexity we just take the largest contributing term (2 * n) and remove the coefficient to get (n). big-O is used when discussing worst case, so we call this O(n).
I think you're confused because the algorithm you provided looks at two numbers at a time. The distinction you need to understand is that your code only looks at 2 numbers at a time, once for each item in the array. It does not look at every possible pair of numbers in the array. Even if some code looked at only half of the possible pairs, it would still be O(n^2), because the 1/2 coefficient would be dropped.
Consider code that does look at every pair; here is an example of an O(n^2) algorithm.
for n1 in array:
    for n2 in array:
        print(n1 + n2)
In this example, we are looking at each pair of numbers. How many pairs are there? There are n^2 pairs of numbers. Contrast this with your question: we look at each number individually and compare it with last. How many (number, last) pairs are there? One per element, so n of them, which we call O(n).
I hope this clears up why this would be O(n) and not O(n^2). However, as I said at the beginning of my answer, this is actually O(1). That is because the length of the array is specifically 8, not some arbitrary length n. Every time you execute this code it will take the same amount of time; it doesn't vary with anything, so there is no n. In my example, n was the length of the array, but no such length term is present in your example.
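A small counting experiment (my own illustration, not from the original answers; the function names are hypothetical) makes the contrast concrete: the single pass touches each element once, while the all-pairs version performs n * n operations:

```python
def single_pass_ops(array):
    """Count operations for the one-comparison-per-element scan."""
    ops = 0
    last = 0
    for number in array:
        ops += 1        # one comparison of number against last
        last = number
    return ops

def all_pairs_ops(array):
    """Count operations when every pair (n1, n2) is examined."""
    ops = 0
    for n1 in array:
        for n2 in array:
            ops += 1    # one operation per pair
    return ops

data = list(range(8))
print(single_pass_ops(data))  # 8  -> grows like n,   O(n)
print(all_pairs_ops(data))    # 64 -> grows like n^2, O(n^2)
```

Doubling the array length doubles the first count but quadruples the second, which is exactly the O(n) versus O(n^2) distinction.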

C loop function computing time complexity

I am learning to compute the time complexity of algorithms.
I can compute simple loops and nested loops, but how do I compute the complexity when there are assignments inside the loop that change the loop variable?
For example :
void f(int n){
    int count = 0;
    for(int i = 2; i <= n; i++){
        if(i % 2 == 0){
            count++;
        }
        else{
            i = (i - 1) * i;
        }
    }
}
i = (i-1)*i affects how many times the loop will run. How can I compute the time complexity of this function?
Since i * (i - 1) is always even ((i * (i - 1)) % 2 == 0), once the else branch runs, the following i++ makes i odd. As a result, after the first odd i in the loop, the condition always goes into the else branch.
The first iteration has i = 2 (even), and i++ then makes i = 3, which is odd and goes into the else branch. From then on, i is updated to i * (i - 1) + 1 in each iteration, i.e. i roughly squares each time. Hence, if we denote the number of loop iterations for bound n by T(n), we can write asymptotically T(n) = T(sqrt(n)) + 1. So, if n = 2^(2^k), then T(n) = k = log(log(n)).
There is no general rule to calculate the time complexity for such algorithms. You have to use your knowledge of mathematics to get the complexity.
For this particular algorithm, I would approach it like this.
Since initially i=2 and it is even, let's ignore that first iteration.
So I am only considering from i = 3. From there on, i will always be odd.
Your expression i = (i-1)*i along with the i++ in the for loop finally evaluates to i = (i-1)*i+1
If you consider i=3 as 1st iteration and i(j) is the value of i in the jth iteration, then i(1)=3.
Also
i(j) = [i(j-1)]^2 - i(j-1) + 1
The above equation is called a recurrence relation and there are standard mathematical ways to solve it and get the value of i as a function of j. Sometimes it is possible to get and sometimes it might be very difficult or impossible. Frankly, I don't know how to solve this one.
But generally, we don't get situations where you need to go that far. In practical situations, I would just assume that the complexity is logarithmic because the value of i is increasing exponentially.
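To sanity-check the log(log(n)) claim empirically, here is a sketch (the helper name and the test values are my own) that simulates the loop and counts its iterations:

```python
def iterations(n):
    """Simulate f(n) from the question and count loop iterations."""
    count = 0
    i = 2
    while i <= n:
        count += 1
        if i % 2 != 0:
            i = (i - 1) * i   # the else branch of the original loop
        i += 1                # the i++ of the for statement
    return count

# i takes the values 2, 3, 7, 43, 1807, ... (roughly squaring each step),
# so the count grows by one each time n is squared:
print(iterations(16))     # 3
print(iterations(256))    # 4
print(iterations(65536))  # 5
```

Squaring n (16 -> 256 -> 65536) adds only one iteration each time, which is the signature of log(log(n)) growth.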

Time Complexity of the C program

What will be the time complexity of the following function?
Is it O(n^3) or O(n^4)?
I am getting O(n^3):
In the first for loop, the code runs n times.
In the second for loop, for each i it runs i^2 (up to n^2) times, so the total complexity up to here is O(n^3).
Now, the if statement holds true for only n out of the n^2 values of j, and for each of those the k for loop runs up to n^2 times, and hence the complexity is O(n^3).
I have taken a few values of n:
for n = 3, c = 25
for n = 10, c = 1705
for n = 50, c = 834275
for(i = 1; i <= n; ++i)
    for(j = 1; j <= (i * i); ++j)
        if((j % i) == 0)
            for(k = 1; k <= j; ++k)
                c = c + 1;
The time complexity is actually O(n^4), not O(n^3). For a fixed i, the if condition holds for j = i, 2i, ..., i*i, that is, for i values of j, and for j = m*i the k loop runs m*i times. So the inner work for one i is i * (1 + 2 + ... + i) = i * i(i+1)/2, which is about i^3/2, and summing over i = 1..n gives about n^4/8. Your own measurements confirm this: c = (1/2)(sum of i^3 + sum of i^2), which gives 25 for n = 3, 1705 for n = 10, and 834275 for n = 50, growing like n^4/8.
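A quick empirical check (translated to Python for brevity; the function name is my own) reproduces the measured counts and shows the roughly n^4/8 growth:

```python
def count_ops(n):
    """Direct translation of the triple loop, counting the c = c + 1 steps."""
    c = 0
    for i in range(1, n + 1):
        for j in range(1, i * i + 1):
            if j % i == 0:
                for k in range(1, j + 1):
                    c += 1
    return c

print(count_ops(3))   # 25
print(count_ops(10))  # 1705
print(count_ops(50))  # 834275  (about 50^4 / 8 = 781250)
```

The counts match the values quoted in the question, and the ratio count_ops(50) / 50^4 is close to 1/8, consistent with O(n^4).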

Algorithm for finding if there's a "common number"

Let arr be an array of size n. We need to write an algorithm that checks whether there is a number which appears at least n/loglogn times.
I've understood that there's a way to do it in O(n*logloglogn), which goes something like this:
Find the median using the select algorithm and count how many times it appears. If it appears at least n/loglogn times, return true. This takes O(n).
Partition the array around the median. This takes O(n).
Apply the algorithm recursively to both sides of the partition (two n/2 arrays).
If we reach a subarray of size less than n/loglogn, stop and return false.
Questions:
Is this algorithm correct?
The recurrence is T(n) = 2T(n/2) + O(n), and the base case is T(n/loglogn) = O(1). Now, the depth of the recursion tree is O(logloglogn), and since every level costs O(n) in total, the time complexity is O(n*logloglogn). Is that correct?
The suggested solution works, and the complexity is indeed O(n*logloglog(n)).
Let's say a "pass i" is the running of all recursive calls of depth i. Note that each pass requires O(n) time, since while each call is much less than O(n), there are several calls - and overall, each element is processed once in each "pass".
Now, we need to find the number of passes. Each call divides its array in half until we reach the predefined size n/log(log(n)), so we solve:
n / 2^x = n / log(log(n))
<->
2^x = log(log(n))
<->
x = log(log(log(n)))
as you can verify in Wolfram Alpha, and thus the complexity is indeed O(n*log(log(log(n)))).
As for correctness: all copies of a repeated element end up on the same side of each partition (or are the median itself), so if an element repeats at least the required number of times, some subarray at every level contains all of its occurrences. Since the subarrays shrink by half each pass, at some point you reach a subarray whose size is less than twice the number of repeats; at that point the repeated element accounts for more than half of the subarray, so it must be the median, and you will find out it is indeed a "frequent item".
Another approach, in O(n) time but with large constants, is suggested by Karp, Papadimitriou and Shenker, and is based on filling a table of "candidates" while processing the array.
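The recursive idea above can be sketched as follows (my own illustrative code, not the poster's: it picks the median by sorting, whereas the described algorithm would use a linear-time select, and the names are hypothetical):

```python
def has_frequent(arr, threshold):
    """Return True if some value appears at least `threshold` times in arr."""
    def rec(sub):
        if len(sub) < threshold:
            return False                         # base case: subarray too small
        median = sorted(sub)[len(sub) // 2]      # a true select() would be O(n)
        if sub.count(median) >= threshold:
            return True
        # all copies of any value land on one side (or are the median itself)
        left = [x for x in sub if x < median]
        right = [x for x in sub if x > median]
        return rec(left) or rec(right)
    return rec(arr)

print(has_frequent([7] * 5 + list(range(10)), 5))  # True: 7 appears often enough
print(has_frequent(list(range(15)), 2))            # False: every value is unique
```

In the problem as stated, threshold would be n/loglogn; the sketch takes it as a parameter so the recursion is easy to trace.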

How do you calculate big O of an algorithm

I have a problem where I have to find the missing numbers within an array and add them to a set.
The question goes like so:
Given an array of size (n-m) with numbers from 1..n, with m of them missing.
Find all of the missing numbers in O(log). The array is sorted.
Example:
n = 8
arr = [1,2,4,5,6,8]
m=2
Result has to be a set {3, 7}.
This is my solution so far, and I wanted to know how I can calculate the big O of a solution. Also, most solutions I have seen use the divide and conquer approach. How do I calculate the big O of my algorithm below?
P.S. If I don't meet the requirement, is there any way I can do this without doing it recursively? I am really not a fan of recursion; I simply can't get my head around it! :(
var arr = [1, 2, 4, 5, 6, 8];
var mySet = [];
findMissingNumbers(arr);

function findMissingNumbers(arr){
    var temp = 0;
    for (number in arr){ // O(n)
        temp = parseInt(number) + 1;
        if(arr[temp] - arr[number] > 1){
            addToSet(arr[number], arr[temp]);
        }
    }
}

function addToSet(min, max){
    while (min != max - 1){
        mySet.push(++min);
    }
}
There are two things you want to look at. One you have pointed out: how many times is the loop "for (number in arr)" iterated? If your array contains n-m elements, this loop is iterated n-m times.
Then look at each operation you do inside the loop and try to figure out a worst-case (or typical) cost for each. The temp = ... line has constant cost (say 1 unit per iteration), the conditional has constant cost (say 1 unit per iteration), and then there is addToSet. addToSet is harder to analyze because it isn't called every time, and its cost varies between calls. So think of it this way: for each of the m missing elements, addToSet performs 1 operation, for a total of m operations (you don't know when they occur, but all m must occur at some point). Then add up all of your costs.
That gives n-m loop iterations with 2 operations each, for 2(n-m); adding the m operations done by addToSet gives a total of roughly 2n - m ~ 2n (assuming m is small compared to n). This can be written O(n-m) or O(n). (If it is O(n-m) it is also O(n), since n-m <= n.) Hope this helps.
Your code has a time complexity of O(n), because you check all n indices of the array. A faster way is something like this:
Go to the middle of the array.
Check whether this number is at the right place (if it is, the ones before it are too, because the array is sorted).
If it is the expected number: go to the middle of the second half.
If not: add this number to the set and go to the middle of the first half.
Stop when the number you are looking at is at index size - 1.
Note that you can add some optimizations; for example, you can directly check whether the array has the correct size and return an empty set. It depends on your problem.
My algorithm is also O(n), because big O always takes the worst set of data. In my case that would be a single missing number at the end of the array. Technically it would be O(n-1), but constants are negligible compared to n (assumed to be very large). That's why you have to keep the average complexity in mind too.
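For completeness, here is a hedged divide-and-conquer sketch in Python (the function name is mine, and it assumes a non-empty sorted array). The key observation: whenever a stretch arr[lo..hi] satisfies arr[hi] - arr[lo] == hi - lo, it contains no gaps and can be skipped whole, which is what makes this cheaper than a full scan when few numbers are missing:

```python
def find_missing(arr, n):
    """Find all values in 1..n absent from the sorted array arr."""
    result = list(range(1, arr[0]))           # missing before the first element

    def rec(lo, hi):
        if arr[hi] - arr[lo] == hi - lo:
            return                             # no gaps anywhere in arr[lo..hi]
        if hi - lo == 1:
            result.extend(range(arr[lo] + 1, arr[hi]))  # the gap itself
            return
        mid = (lo + hi) // 2
        rec(lo, mid)
        rec(mid, hi)

    rec(0, len(arr) - 1)
    result.extend(range(arr[-1] + 1, n + 1))  # missing after the last element
    return result

print(find_missing([1, 2, 4, 5, 6, 8], 8))  # [3, 7]
```

When m is small, only the few branches containing gaps are explored, giving roughly O(m log n) checks plus O(m) output; in the worst case (everything missing in scattered places) it degrades toward O(n).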
For what it's worth here is a more succinct implementation of the algorithm (javascript):
var N = 10;
var arr = [2, 9];
var mySet = [];
var index = 0;
for(var i = 1; i <= N; i++){
    if(i != arr[index]){
        mySet.push(i);
    }else{
        index++;
    }
}
Here the big O is trivial, as there is only a single loop which runs exactly N times with constant-cost operations in each iteration.
Big O describes the complexity of the algorithm: it is a function of the number of steps it takes your program to come up with a solution.
This gives a pretty good explanation of how it works:
Big O, how do you calculate/approximate it?
