Let there be an array of size n. We need to write an algorithm which checks whether there is a number that appears at least n/loglogn times.
I've understood that there's a way of doing it in O(n*logloglogn) which goes something like this:
Find the median using a selection algorithm and count how many times it appears. If it appears at least n/loglogn times, return true. This takes O(n).
Partition the array around the median. This takes O(n).
Apply the algorithm on both sides of the partition (two n/2 arrays).
If we reach a subarray of size less than n/loglogn, stop and return false.
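A rough Python sketch of these steps (the names appears_often and quickselect are mine; a randomized quickselect stands in for the deterministic linear-time select, so the O(n) cost per level holds only in expectation):

import random

def quickselect(items, idx):
    # Expected O(n) selection of the idx-th smallest value (0-based); a stand-in
    # for the deterministic median-of-medians select assumed above.
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    if idx < len(less):
        return quickselect(less, idx)
    if idx < len(less) + len(equal):
        return pivot
    return quickselect(greater, idx - len(less) - len(equal))

def appears_often(arr, threshold):
    # True if some value appears at least `threshold` times in arr.
    if len(arr) < threshold:                   # step 4: subarray too small, stop
        return False
    median = quickselect(arr, len(arr) // 2)   # step 1: find the median...
    if arr.count(median) >= threshold:         # ...and count its occurrences
        return True
    left = [x for x in arr if x < median]      # step 2: partition around the median
    right = [x for x in arr if x > median]
    return appears_often(left, threshold) or appears_often(right, threshold)  # step 3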
Questions:
Is this algorithm correct?
The recurrence is: T(n) = 2T(n/2) + O(n) and the base case is T(n/loglogn) = O(1). Now, the depth of the recursion tree is O(logloglogn), and since every level costs O(n), the time complexity is O(n*logloglogn). Is that correct?
The suggested solution works, and the complexity is indeed O(n*logloglog(n)).
Let's say a "pass i" is the running of all recursive calls of depth i. Note that each pass requires O(n) time in total: each individual call costs much less than O(n), but there are several calls per pass, and overall each element is processed once in every pass.
Now, we need to find the number of passes. The idea is that each call divides the array in half until you get down to the predefined size of n/log(log(n)), so we solve:
n / 2^x = n/log(log(n))
<->
2^x = log(log(n))
<->
x = log(log(log(n)))
So the number of passes is log(log(log(n))) (you can check this in Wolfram Alpha), and thus the complexity is indeed O(n*log(log(log(n)))).
As for correctness - that's because if an element repeats at least the required number of times, it must lie in some subarray of size greater than or equal to that number of repeats, and by constantly reducing the size of the array you will at some point arrive at a subarray where #repeats <= size(array) <= 2*#repeats - at this point, you are going to find this element as the median, and find out it's indeed a "frequent item".
Another approach, in O(n*log(log(n))) time but with large constants, is suggested by Karp-Papadimitriou-Shenker, and is based on filling a table of "candidates" while processing the array.
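A sketch of that candidate-table idea (frequent_candidates is a made-up name; here k would be loglogn, and the guarantee is that any value occurring more than n/k times survives as a candidate):

def frequent_candidates(arr, k):
    # One pass that keeps at most k - 1 counters; every value occurring more than
    # len(arr) / k times is guaranteed to remain in the table at the end.
    counters = {}
    for x in arr:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for key in list(counters):   # decrement all counters, dropping zeros
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return list(counters)

# A second pass counting each candidate's true frequency confirms which, if any,
# really appears at least n/loglogn times.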
I was wondering what would be the time complexity of this piece of code?
last = 0
ans = 0
array = [1, 2, 3, 3, 3, 4, 5, 6]
for number in array:
    if number != last:
        ans += 1
    last = number
return ans
I'm thinking O(n^2), as we look at all the array elements twice: once in executing the for loop, and then another time when comparing the two subsequent values. But I am not sure if my guess is correct.
While processing each array element, you just make one comparison, based on which you update ans and last. The complexity of the algorithm stands at O(n), and not O(n^2).
The answer is actually O(1) for this case, and I will explain why after explaining why a similar algorithm would be O(n) and not O(n^2).
Take a look at the following example:
def do_something(array):
    last = 0
    ans = 0
    for number in array:
        if number != last:
            ans += 1
        last = number
    return ans
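For example, calling it on the array from the question:

print(do_something([1, 2, 3, 3, 3, 4, 5, 6]))  # prints 6: ans increments on every change of value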
We go through each item in the array once, and do two operations with it.
The rule of thumb for time complexity is that you take the largest term and drop the constant factors.
If you actually wanted to calculate the exact number of operations, you might try something like:
for number in array:
    if number != last:   # n times
        ans += 1
    last = number        # n times
return ans               # 1 time

# total number of instructions = 2 * n + 1
Now, Python is a high-level language, so some of these operations are actually multiple operations put together; that instruction count is not accurate. Instead, when discussing complexity we just take the largest contributing term (2 * n) and drop the coefficient to get n. Big-O is used when discussing the worst case, so we call this O(n).
I think you're confused because the algorithm you provided looks at two numbers at a time. The distinction you need to understand is that your code only looks at 2 numbers at a time, once for each item in the array. It does not look at every possible pair of numbers in the array. (Even if some code looked at only half of the possible pairs, it would still be O(n^2), because the 1/2 factor is dropped.)
Contrast that with code that does look at every pair; here is an example of an O(n^2) algorithm:
for n1 in array:
    for n2 in array:
        print(n1 + n2)
In this example, we are looking at every pair of numbers. How many pairs are there? There are n^2 pairs. Contrast this with your question: we look at each number individually and compare it with last. How many (number, last) pairs do we examine? Just n, one per item, so the work is about 2 * n operations, which we call O(n).
I hope this clears up why this would be O(n) and not O(n^2). However, as I said at the beginning of my answer, this is actually O(1). This is because the length of the array is specifically 8, and not some arbitrary length n. Every time you execute this code it will take the same amount of time; it doesn't vary with anything, so there is no n. In my example n was the length of the array, but no such length term is provided in your example.
The question:
4-2 Parameter-passing costs
Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself.
This problem examines the implications of three parameter-passing strategies:
1. An array is passed by pointer. Time Theta(1).
2. An array is passed by copying. Time Theta(N), where N is the size of the array.
3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time Theta(q-p+1) if the subarray A[p..q] is passed.
a. Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5 ). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let N be the size of the original problem and n be the size of a subproblem.
b. Redo part (a) for the MERGE-SORT algorithm from Section 2.3.1.
I have trouble understanding how to solve the recurrence of case 2 where arrays are passed by copying for both algorithms.
Take the binary search algorithm of case 2 for example, the recurrence most answers give is T(n)=T(n / 2)+Theta(N). I have no trouble about that.
Here is an answer I find online that looks correct:
T(n)
= T(n/2) + cN
= 2cN + T(n/4)
= 3cN + T(n/8)
= ...
= Sigma(i = 0 to lgn - 1) cN
= cN * lgn
= Theta(nlgn)
I have trouble understanding how the second-to-last line can result in the last line's answer. Why not write it as Theta(Nlgn)? And how can N become n in the Theta notation?
N and n feel somewhat connected. I have trouble understanding their connection and how that is applied in the solution.
It seems that N represents the full array length and n is the current chunk size.
But the formulas really only use the initial value: the recurrence starts from the full length, n = N - for example, T(n/4) really stands for T(N/4) - so n === N everywhere here.
In this case you can replace n with N.
My answer will not be very theoretical, but maybe this "more empirical" approach will help you figure it out. Also check the Master Theorem (analysis of algorithms) for easier analysis of the complexity of recursive algorithms.
Let's solve the binary search first:
By pointer
O(logN) - acts like an iterative binary search, there will be logN calls each having O(1) complexity
Copying the whole array
O(NlogN) - since for each of the logN calls of the function we copy N elements
Copying only the accessed subarray
O(N) - this one is not that obvious, but it can easily be seen that the copied subarrays will be of lengths N, N/2, N/4, N/8, ..., and summing all these terms gives at most 2*N
Now for the merge sort algorithm:
By pointer
O(NlogN) - the same summation method as for a3; the iterated subarrays will be of lengths N, (N/2, N/2), (N/4, N/4, N/4, N/4), ..., i.e. N per level over logN levels
Copying the whole array
O(N^2) - we make about 2*N calls of the sorting function, and each has a complexity of O(N) for copying the whole array
Copying only the accessed subarray
O(NlogN) - we copy only the subarrays that we iterate over, so the complexity is the same as in b1
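For reference, one way to write the underlying recurrences (my notation; N is the original size, n the subproblem size, and n = N at the top level):

Binary search:
1. T(n) = T(n/2) + Theta(1)              =>  Theta(lgN)
2. T(n) = T(n/2) + Theta(N)              =>  Theta(NlgN)
3. T(n) = T(n/2) + Theta(n)              =>  Theta(N)

Merge sort:
1. T(n) = 2T(n/2) + Theta(n)             =>  Theta(NlgN)
2. T(n) = 2T(n/2) + Theta(N) + Theta(n)  =>  Theta(N^2)
3. T(n) = 2T(n/2) + Theta(n)             =>  Theta(NlgN)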
Consider an algorithm which gets an unsorted array of size n and a number k <= n. The algorithm prints the k smallest numbers in ascending order. What is the lower bound for such an algorithm (for every k)?
1. Omega(n)
2. Omega(k*logn)
3. Omega(n*logk)
4. Omega(n*logn)
5. #1 and #2 are both correct.
Now, from my understanding, if we want to find a lower bound for an algorithm we need to look at the worst case. If that's the case, then obviously the worst case is when k = n. We know that sorting an array is bounded by Omega(nlogn), so the right answer should be #4.
Unfortunately, I am wrong and the right answer is #5.
Why?
It can be done in O(n + klogk).
Run a selection algorithm to find the k-th smallest element - O(n).
Iterate and collect the elements lower than or equal to it - O(n). (Another iteration might be needed in case the array allows duplicates, but it is still done in O(n).)
Lastly, sort these k elements in O(klogk).
It is easy to see this solution is optimal - you cannot beat the O(klogk) factor, because otherwise, by setting k = n, you could sort any array in better than Omega(nlogn); and at least a linear scan is a must in order to find the required elements to be printed.
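A small Python sketch of those steps (the names are mine, and a randomized quickselect stands in for the worst-case-linear selection, so step 1 is O(n) only in expectation):

import random

def quickselect(items, idx):
    # expected O(n) selection of the idx-th smallest value (0-based)
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    if idx < len(less):
        return quickselect(less, idx)
    if idx < len(less) + len(equal):
        return pivot
    return quickselect(greater, idx - len(less) - len(equal))

def k_smallest_sorted(arr, k):
    kth = quickselect(list(arr), k - 1)                            # selection: O(n)
    smallest = [x for x in arr if x < kth]                         # collect: O(n)
    smallest += [x for x in arr if x == kth][:k - len(smallest)]   # duplicates pass: O(n)
    return sorted(smallest)                                        # sort: O(klogk)

print(k_smallest_sorted([9, 1, 8, 2, 7, 3, 7, 7], 4))  # [1, 2, 3, 7]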
Let's try for linear time:
In order to find the k-th smallest element, use "Randomized-Select", which has an average running time of O(n), and use that element as the pivot.
Use the quicksort partition step to split the array into array[i] <= K and array[i] > K, where K is that pivot value. This takes O(n) time.
Take the unsorted left part array[i] <= K (which has k elements) and do a counting sort, which will obviously take O(k+K).
Finally, the print operation takes O(k).
Total time = O(n) + O(k+K) + O(k) = O(n+k+K)
Here, k is the number of elements which are smaller than or equal to K, and K is the value of the k-th smallest element.
I've been teaching myself data structures in Python and don't know if I'm overthinking (or underthinking!) the following question:
My goal is to come up with an efficient algorithm that determines whether an integer i exists such that A[i] = i in an array of increasing integers.
I then want to find the running time in big-O notation as a function of n, the length of A.
So wouldn't this just be a slightly modified binary search, running in O(log n), applied to the function f(i) = A[i] - i? Am I reading this problem wrong? Any help would be greatly appreciated!
Note 1: because you say the integers are increasing, you have ruled out duplicates in the array (otherwise you would have said monotonically increasing). So a quick check that rules out a solution: if the first element is larger than 1, there is none. In other words, for there to be any chance of a solution, the first element has to be <= 1.
Note 2: similar to Note 1, if the last element is less than the length of the array, then there is no solution.
In general, I think the best you can do is binary search. You trap the answer between low and high indices, and then check the middle index between them. If array[middle] equals middle, return yes. If it is less than middle, then set low to middle + 1. Otherwise, set high to middle - 1. If low becomes greater than high, return no.
Running time is O( log n ).
Edit: algorithm does NOT work if you allow monotonically increasing. Exercise: explain why. :-)
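A minimal sketch of that binary search in Python (0-indexed here, and assuming strictly increasing integers as above; has_fixed_point is just a name I made up):

def has_fixed_point(a):
    low, high = 0, len(a) - 1
    while low <= high:
        middle = (low + high) // 2
        if a[middle] == middle:
            return True
        elif a[middle] < middle:
            low = middle + 1    # everything to the left also satisfies a[i] < i
        else:
            high = middle - 1   # everything to the right also satisfies a[i] > i
    return False

print(has_fixed_point([-10, -5, 2, 5, 12, 20, 100]))  # True, since a[2] == 2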
You're correct. Finding such an element i in an array of size A is indeed O(log A).
However, you can do much better: O(Log A) -> O(1) if you trade memory complexity for time complexity, which is what "optimizers" tend to do.
What I mean is: if you insert the array elements into an "efficient" hash table, you can achieve the find operation in constant time, O(1).
This depends a lot on the elements you're inserting:
Are they unique? Think of collisions
How often do you insert?
This is an interesting problem :-)
You can use bisection to locate the place where a[i] == i:
i:    0    1    2    3    4    5    6
a[i]: -10  -5   2    5    12   20   100
When i = 3, i < a[i], so bisect down.
When i = 1, i > a[i], so bisect up.
When i = 2, i == a[i]: you found the match.
The running time is O(log n).
I know that finding the smallest subset of the array whose sum exceeds S can be done by sorting the array and taking the largest numbers until the required condition is met. That would take at least nlog(n) sorting time.
Is there any improvement over nlog(n)?
We can assume all numbers are positive.
Here is an algorithm that is O(n + size(smallest subset) * log(n)). If the smallest subset is much smaller than the array, this will be O(n).
Read http://en.wikipedia.org/wiki/Heap_%28data_structure%29 if my description of the algorithm is unclear (it is light on details, but the details are all there).
Turn the array into a heap arranged such that the biggest element is available in time O(n).
Repeatedly extract the biggest element from the heap until their sum is large enough. This takes O(size(smallest subset) * log(n)).
This is almost certainly the answer they were hoping for, though not getting it shouldn't be a deal breaker.
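A small Python sketch of this (heapq is a min-heap, so I negate values to pop the largest first; the function name and the None return for "impossible" are my own choices):

import heapq

def smallest_subset_size(arr, s):
    heap = [-x for x in arr]             # negate so the min-heap pops the largest first
    heapq.heapify(heap)                  # O(n)
    total, count = 0, 0
    while heap and total <= s:           # O(k log n), k = size of the answer
        total += -heapq.heappop(heap)
        count += 1
    return count if total > s else None  # None: even the whole array does not exceed s

print(smallest_subset_size([4, 3, 10, 2, 8], 15))  # 2, since 10 + 8 > 15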
Edit: Here is another variant that is often faster, but can be slower.
Walk through elements, until the sum of the first few exceeds S. Store current_sum.
Copy those elements into an array.
Heapify that array such that the minimum is easy to find, remember the minimum.
For each remaining element in the main array:
    if min(in our heap) < element:
        insert element into heap
        increase current_sum by element
        while S + min(in our heap) < current_sum:
            current_sum -= min(in our heap)
            remove min from heap
If we get to reject most of the array without manipulating our heap, this can be up to twice as fast as the previous solution. But it can also be slower, such as when the last element in the array happens to be bigger than S.
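A condensed Python sketch of this variant (it folds the initial walk and the later insertions into one loop, and also trims the starting prefix, but the idea is the same; it assumes positive numbers as stated in the question):

import heapq

def smallest_subset_size_streaming(arr, s):
    heap, current_sum = [], 0
    for x in arr:
        # take elements until the running sum exceeds s; after that, only take
        # elements bigger than the smallest element currently kept
        if current_sum <= s or heap[0] < x:
            heapq.heappush(heap, x)
            current_sum += x
            # evict small elements while the rest still exceeds s
            while current_sum - heap[0] > s:
                current_sum -= heapq.heappop(heap)
    return len(heap) if current_sum > s else None

print(smallest_subset_size_streaming([4, 3, 10, 2, 8], 15))  # 2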
Assuming the numbers are integers, you can improve upon the usual n lg(n) complexity of sorting because in this case we have the extra information that the values are between 0 and S (for our purposes, integers larger than S are the same as S).
Because the range of values is finite, you can use a non-comparative sorting algorithm such as Pigeonhole Sort or Radix Sort to go below n lg(n).
Note that these methods are dependent on some function of S, so if S gets large enough (and n stays small enough) you may be better off reverting to a comparative sort.
Here is an O(n) expected time solution to the problem. It's somewhat like Moron's idea but we don't throw out the work that our selection algorithm did in each step, and we start trying from an item potentially in the middle rather than using the repeated doubling approach.
Alternatively: it's really just quickselect with a little additional bookkeeping for the remaining sum.
First, it's clear that if you had the elements in sorted order, you could just pick the largest items first until you exceed the desired sum. Our solution is going to be like that, except we'll try as hard as we can to not to discover ordering information, because sorting is slow.
You want to be able to determine whether a given value is the cut-off: if including that value and everything greater than it meets or exceeds S, but removing it puts us below S, then we are golden.
Here is the pseudocode; I didn't test it for edge cases, but it gets the idea across.
import random

def Solve(arr, s):
    # We could get rid of the worst-case O(n^2) behavior (which basically never happens)
    # by selecting the median here deterministically, but in practice the constant
    # factor on the algorithm would be much worse.
    p = random.choice(arr)
    left_arr = [x for x in arr if x < p]
    right_arr = [x for x in arr if x > p]
    # p is in neither left_arr nor right_arr (duplicates of p are ignored here)
    right_sum = sum(right_arr)
    if right_sum + p >= s:
        if right_sum < s:
            # solved it, p forms the cut-off
            return len(right_arr) + 1
        # took too much, at least we eliminated left_arr and p
        return Solve(right_arr, s)
    else:
        # didn't take enough yet; take all of right_arr and p, and find the rest in left_arr
        return len(right_arr) + 1 + Solve(left_arr, s - right_sum - p)
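For example, with made-up numbers:

print(Solve([4, 3, 10, 2, 8], 15))  # 2: the two largest elements, 10 and 8, already meet/exceed 15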
One asymptotic improvement over Theta(nlogn) you can make is an O(n log K) time algorithm, where K is the required minimum number of elements.
Thus if K is constant, or say log n, this is better (asymptotically) than sorting. Of course if K is n^epsilon, then this is not better than Theta(n logn).
The way to do this is to use selection algorithms, which can tell you the ith largest element in O(n) time.
Now do a binary search for K, starting with i=1 (the largest) and doubling i etc at each turn.
You find the ith largest, and find the sum of the i largest elements and check if it is greater than S or not.
This way, you would run O(log K) runs of the selection algorithm (which is O(n)) for a total running time of O(n log K).
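A Python sketch of the idea (the function names are mine; sum_of_largest uses heapq.nlargest as a stand-in for a true linear-time selection, so each probe here costs O(n log i) rather than the O(n) assumed above):

import heapq

def sum_of_largest(arr, i):
    # stand-in for: select the i-th largest in O(n) and sum everything from it up
    return sum(heapq.nlargest(i, arr))

def min_count_exceeding(arr, s):
    if sum(arr) <= s:
        return None                     # even the whole array is not enough
    i = 1
    while sum_of_largest(arr, i) <= s:  # doubling phase: O(log K) probes
        i *= 2
    lo, hi = i // 2 + 1, min(i, len(arr))
    while lo < hi:                      # binary search phase: O(log K) probes
        mid = (lo + hi) // 2
        if sum_of_largest(arr, mid) > s:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_count_exceeding([4, 3, 10, 2, 8], 15))  # 2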
Scan the numbers once: if any single number already exceeds S, you are done with one element; the remaining numbers are all <= S.
Pigeonhole-sort those numbers.
Sum elements from highest to lowest in the sorted order until you exceed S.
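A rough Python sketch of that approach, assuming non-negative integers (smallest_subset_pigeonhole is a made-up name):

def smallest_subset_pigeonhole(arr, s):
    # O(n + S) time and space
    if any(x > s for x in arr):
        return 1                        # a single element already exceeds s
    holes = [0] * (s + 1)               # pigeonhole sort: every remaining value fits in 0..s
    for x in arr:
        holes[x] += 1
    total, count = 0, 0
    for v in range(s, -1, -1):          # walk values from largest down to smallest
        for _ in range(holes[v]):
            total += v
            count += 1
            if total > s:
                return count
    return None                         # the whole array does not exceed s

print(smallest_subset_pigeonhole([4, 3, 10, 2, 8], 15))  # 2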