What is the time complexity of the following code? I am confused - arrays

array = [1, 2, 4, 5]
sum = 3
def arrfilter2(arr):
    for i in arr:
        if (sum - i) in arr:
            return True
    return False
print(arrfilter2(array))

If the array is not sorted, (sum - i) in arr takes O(n), where n is the length of the array. In the worst case the loop scans the whole array, so the time complexity is O(n^2).
By the way, in the best case it can be resolved in O(1): the very first membership test succeeds.
If the array is sorted, you can do better by using binary search instead of the "in" operator. In that case, the worst case is O(n log n).

The time complexity is O(n^2) in the worst case.
In Python, if you search for an element using "in", the array is scanned linearly to find the element.
As a naive rule of thumb, an algorithm's complexity is O(n^<number of nested loops>). Here there are effectively two nested loops (the explicit one, plus the linear scan hidden inside "in"), so it is O(n^2).
You can make it more efficient by sorting the array first and then using binary search instead of a linear "in" lookup.
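As a concrete illustration of the suggestion in both answers, here is a minimal sketch of the sorted-plus-binary-search variant, using the standard bisect module (the function name and the explicit target parameter are my additions, not from the question):

```python
import bisect

def arrfilter2_sorted(arr, target):
    """Same check as the question's arrfilter2, but on a sorted array.

    Each membership test is a binary search, O(log n), so the loop
    costs O(n log n) in the worst case instead of O(n^2).
    """
    for i in arr:
        j = bisect.bisect_left(arr, target - i)   # leftmost slot for target - i
        if j < len(arr) and arr[j] == target - i:  # found an exact match
            return True
    return False

print(arrfilter2_sorted(sorted([1, 2, 4, 5]), 3))  # True, since 1 + 2 == 3
```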

Related

Use the divide-and-conquer approach to write an algorithm that finds the largest item

Use the divide-and-conquer approach to write an algorithm that finds the largest item in a list of n items. Analyze your algorithm, and show the results in order notation.
A divide-and-conquer algorithm for finding the maximum element in an array would split the array into two halves, solve each subproblem separately and return the maximum of the solutions to the two subproblems. The base case of the recursion would be when the subproblem has size 1, in which case the maximum is the element in the array. The recurrence for the running time is $T(n)=2T(n/2)+c$, which has solution $T(n)=\Theta(n)$ by the Master theorem.
This is the same asymptotic running time as (but a constant factor larger than) linear search. Here's the pseudocode:
Function FindMax(A, p, r):
    # input: an array A[p..r]; returns the maximum value
    if p = r then return A[p]  # base case of the recursion
    else:
        q = (p + r) // 2  # midpoint
        return max{FindMax(A, p, q), FindMax(A, q+1, r)}
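The pseudocode above translates directly into runnable Python (lowercase names are mine; the indices p and r are inclusive, as in the pseudocode):

```python
def find_max(a, p, r):
    """Divide-and-conquer maximum of a[p..r] (inclusive bounds).

    Recurrence T(n) = 2T(n/2) + c, which solves to Theta(n).
    """
    if p == r:                    # base case: a single element
        return a[p]
    q = (p + r) // 2              # midpoint
    return max(find_max(a, p, q), find_max(a, q + 1, r))

print(find_max([3, 1, 4, 1, 5, 9, 2, 6], 0, 7))  # 9
```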

Complexity of sorting n/logn sequences of an array size n

Given an array of size N (containing whole numbers), I wish to sort the array, but only in stretches of length log(n), so that by the end the array consists of n/log n sequences (each of size log n) that are individually sorted.
My idea was to use MergeSort, whose worst-case time complexity is O(n log n).
But since I am only sorting stretches of length log n, each sort should take O(log(n)*log(log(n))), because I am not in fact going through the entire length N.
So MergeSort will be performed n/log n times in that case.
Is it safe to assume that the overall time complexity of this would be (n/log n)*O(log(n)log(log(n))) => O(n log(log(n)))?
Your calculation is correct: sorting n / log n chunks of the array of size log n can be done in O(n log(log n)).
However, if your entire array is not that big in the first place (say a few thousand elements at most), the log n chunks will be quite small, in which case it is actually more efficient to use insertion sort rather than an algorithm like merge sort or quicksort.

Time and Space Complexity of top k frequent elements in an array

There is a small confusion regarding the time and space complexity for the given problem:
Given an array of size N, return a list of the top K frequent elements.
Based on the most popular solution:
Use a HashMap with the count of each element as the value.
Build a MaxHeap of size K by traversing the HashMap generated above.
Pop the elements of the MaxHeap into a list and return the list.
(K being the number of unique elements in the input.)
The space and time complexity is O(K) and O(K*log(K)) respectively.
Now the confusion starts here. We are dealing with worst-case complexity in the above analysis, and the worst value K can take is N, when all the elements in the array are unique.
Hence K <= N. Can O(K) thereby be represented as O(N)?
Shouldn't the space and time complexity therefore be O(N) and O(N*log(N)) for the above problem?
I know this is a technicality, but it's been bothering me for a while. Please advise.
Yes, you are right: since K <= N, the time complexity of the hashmap part can be stated as O(N).
But the heap only holds K elements, and building and draining it costs O(K*log(K)), which, considered asymptotically, dominates the linear part, hence the final time complexity is stated as O(K*log(K)).
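A compact sketch of the count-then-heap recipe using the standard library (Counter and heapq are my choices for the HashMap and MaxHeap; heapq.nlargest internally keeps a heap of size k while scanning the unique keys):

```python
import heapq
from collections import Counter

def top_k_frequent(nums, k):
    """Counting is O(N) time and O(U) space for U unique elements;
    selecting the k most frequent via a heap costs O(U log k)."""
    counts = Counter(nums)                        # element -> frequency
    return heapq.nlargest(k, counts, key=counts.get)

print(top_k_frequent([1, 1, 1, 2, 2, 3], 2))  # [1, 2]
```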

What is the lowest bound for the algorithm?

Consider an algorithm that gets an unsorted array of size n and a number k <= n. The algorithm prints the k smallest numbers in ascending order. What is the lower bound for the algorithm (for every k)?
1. Omega(n)
2. Omega(k*logn)
3. Omega(n*logk)
4. Omega(n*logn)
5. Both #1 and #2 are correct.
Now, from my understanding, if we want to find a lower bound for an algorithm we need to look at the worst case. If that's so, then obviously the worst case is when k = n. We know that sorting an array is bounded by Omega(n log n), so the right answer should be #4.
Unfortunately, I am wrong and the right answer is #5.
Why?
It can be done in O(n + klogk):
Run a selection algorithm to find the k-th smallest element - O(n)
Iterate and collect the elements lower than or equal to it - O(n)
Another iteration might be needed in case the array allows duplicates, but it is still done in O(n)
Lastly, sort these k elements in O(klogk)
It is easy to see this solution is optimal: you cannot beat the O(klogk) factor, because otherwise, by assigning k = n, you could sort any array faster than Omega(nlogn); and at least one linear scan is a must in order to find the elements to be printed.
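The steps above can be sketched in Python. The selection step uses a randomized Hoare-style partition loop (my implementation, standing in for whichever O(n) selection algorithm is meant); collecting and sorting the k smallest then follows:

```python
import random

def partial_select(arr, k):
    """Randomized selection: rearranges arr in place so that arr[:k]
    holds the k smallest elements (unordered). Expected O(n)."""
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        pivot = arr[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                     # Hoare partition around pivot
            while arr[i] < pivot:
                i += 1
            while arr[j] > pivot:
                j -= 1
            if i <= j:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
                j -= 1
        # Now arr[lo..j] <= pivot <= arr[i..hi]; keep the side holding index k-1.
        if k - 1 <= j:
            hi = j
        elif k - 1 >= i:
            lo = i
        else:
            break   # index k-1 falls in the middle run of pivot-equal values

def k_smallest(arr, k):
    """The answer's recipe: O(n) selection, then O(k log k) sort."""
    a = list(arr)
    partial_select(a, k)
    return sorted(a[:k])

print(k_smallest([7, 2, 9, 4, 1, 8, 3], 3))  # [1, 2, 3]
```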
Let's try for linear time:
To find the k-th smallest element, use "Randomized-Select", which has an average running time of O(n), and use that element as the pivot for the partition step.
Use the quicksort partition method to split the array into array[i] <= K and array[i] > K. This takes O(n) time.
Take the unsorted left part array[i] <= K (which has k elements) and counting-sort it, which will take O(k+K).
Finally, the print operation takes O(k).
Total time = O(n) + O(k+K) + O(k) = O(n+k+K)
Here, k is the number of elements that are smaller than or equal to the pivot value K.

Sorting an array of integers using algorithm with complexity O(n)

I have already read that the best comparison-based sorting algorithms have complexity O(nlog(n)). But I'm asked to sort an array of integers (in C) using a sorting algorithm of complexity O(n), given that all the elements in the array are non-negative and less than a constant K. But I have no idea how to use this information in the sorting algorithm. Do you have any ideas?
That's a simple one (known as "counting sort" or "histogram sort", a degenerate case of "bucket sort"):
Allocate an array with one slot for each non-negative integer less than K, and zero it. O(K)
Iterate over all elements of the input and count them in our array. O(n)
Iterate over our array and write the elements out in order. O(n+K)
Thus, O(n+K).
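A minimal sketch of the three steps above (the question asks for C, but the logic translates directly; this Python version is for illustration):

```python
def counting_sort(arr, k):
    """Counting sort for non-negative integers < k: O(n + k) time, O(k) space."""
    counts = [0] * k                       # one slot per possible value: O(k)
    for x in arr:                          # histogram the input: O(n)
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):     # emit values in order: O(n + k)
        out.extend([value] * c)
    return out

print(counting_sort([3, 0, 2, 3, 1], 4))  # [0, 1, 2, 3, 3]
```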
Radix sort gives you O(n log K), not O(n log n), complexity. Since K is a fixed number independent of n, the resulting complexity is O(n * const), i.e. it is linear.
Create a new array of size K and just insert each element into the array at its own position.
Let's say K = 100: create an array of 100 integers and clear it.
If you have the set {55, 2, 7, 34}, you just need to do the following:
array[55] = 1;
array[2] = 1;
array[7] = 1;
array[34] = 1;
Then go over the array from start to end and print the index of the cells that are == 1. (This flag variant assumes the elements are distinct; with duplicates, count occurrences instead of setting a flag.)
It depends on the kind of complexity. Average case O(n+K): bucket sort.
Radix sort would be O(m * n), though (m being the length of the key used to sort).
