Minimum value of maximum values in sub-segments ... in O(n) complexity - arrays

I interviewed with Amazon a few days ago. I could not answer one of the questions they asked me to their satisfaction. I have tried to work out the answer since the interview, but I have not been successful so far. Here is the question:
You have an array of integers of size n. You are given a parameter k, where k < n. For each segment of consecutive elements of size k in the array you need to calculate the maximum value. You only need to return the minimum value of these maximum values.
For instance, given 1 2 3 1 1 2 1 1 1 and k = 3, the answer is 1.
The segments would be 1 2 3, 2 3 1, 3 1 1, 1 1 2, 1 2 1, 2 1 1, 1 1 1.
The maximum values in each segment are 3, 3, 3, 2, 2, 2, 1.
The minimum of these values is 1, thus the answer is 1.
The best answer I came up with is of complexity O(n log k). What I do is create a binary search tree with the first k elements, get the maximum value in the tree, and save it in the variable minOfMax. Then I loop over the remaining elements of the array one at a time: remove the first element of the previous segment from the tree, insert the last element of the new segment into the tree, get the maximum element in the tree, and compare it with minOfMax, keeping the smaller of the two in minOfMax.
The ideal answer needs to be of complexity O(n).
Thank you.

There is a very clever way to do this that's related to this earlier question. The idea is that it's possible to build a queue data structure that supports enqueue, dequeue, and find-max in amortized O(1) time (there are many ways to do this; two are explained in the original question). Once you have this data structure, begin by adding the first k elements from the array into the queue in O(k) time. Since the queue supports O(1) find-max, you can find the maximum of these k elements in O(1) time. Then, continuously dequeue an element from the queue and enqueue (in O(1) time) the next array element. You can then query in O(1) what the maximum of each of these k-element subarrays is. If you track the minimum of these values that you see over the course of the array, then you have an O(n)-time, O(k)-space algorithm for finding the minimum maximum of the k-element subarrays.
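Here is a minimal Python sketch of that algorithm (my own illustration, not from the original answer); a deque of indices serves as the max-queue, with the front always holding the current window's maximum:

from collections import deque

def min_of_window_maxes(arr, k):
    dq = deque()  # indices of candidate maxima; their values decrease from front to back
    best = None
    for i, x in enumerate(arr):
        while dq and arr[dq[-1]] <= x:  # drop candidates dominated by the new element
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:  # the front index has slid out of the window
            dq.popleft()
        if i >= k - 1:  # window [i-k+1, i] is complete
            window_max = arr[dq[0]]
            best = window_max if best is None else min(best, window_max)
    return best

print(min_of_window_maxes([1, 2, 3, 1, 1, 2, 1, 1, 1], 3))  # prints 1

Each index is pushed and popped at most once, which is where the amortized O(1) per element comes from.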
Hope this helps!

@templatetypedef's answer works, but I think I have a more direct approach.
Start by computing the max for the following (closed) intervals:
[k-1, k-1]
[k-2, k-1]
[k-3, k-1]
...
[0, k-1]
Note that each of these can be computed in constant time from the preceding one.
Next, compute the max for these intervals:
[k, k]
[k, k+1]
[k, k+2]
...
[k, 2k-1]
Now these intervals:
[2k-1, 2k-1]
[2k-2, 2k-1]
[2k-3, 2k-1]
...
[k+1, 2k-1]
Next you do the intervals from 2k to 3k-1 ("forwards intervals"), then from 3k-1 down to 2k+1 ("backwards intervals"). And so on until you reach the end of the array.
Put all of these into a big table. Note that each entry in this table took constant time to compute. Observe that there are at most 2*n intervals in the table (because each element appears once on the right side of a "forwards interval" and once on the left side of a "backwards interval").
Now, if [a,b] is any interval of width k, it must contain exactly one of 0, k, 2k, ...
Say it contains m*k.
Observe that the intervals [a, m*k - 1] and [m*k, b] are both somewhere in our table (the first may be empty if a = m*k). So we can simply look up the max of each, and the max of those two values is the max of the interval [a, b].
So for any interval of width k, we can use our table to get its maximum in constant time. We can generate the table in O(n) time. Result follows.
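Here is a compact Python sketch of the table idea (my own illustration): pre holds the maxima of the "forwards" intervals within each k-block and suf those of the "backwards" intervals, so any window of width k is covered by one entry of each:

def min_of_window_maxes(arr, k):
    n = len(arr)
    pre = arr[:]  # pre[i]: max of arr from the start of i's k-block through i
    for i in range(1, n):
        if i % k != 0:
            pre[i] = max(pre[i], pre[i - 1])
    suf = arr[:]  # suf[i]: max of arr from i through the end of i's k-block
    for i in range(n - 2, -1, -1):
        if (i + 1) % k != 0:
            suf[i] = max(suf[i], suf[i + 1])
    # the window [a, a+k-1] is split by a block boundary,
    # so its max is max(suf[a], pre[a+k-1])
    return min(max(suf[a], pre[a + k - 1]) for a in range(n - k + 1))

print(min_of_window_maxes([1, 2, 3, 1, 1, 2, 1, 1, 1], 3))  # prints 1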

I implemented (and commented) templatetypedef's answer in C#.
n is array length, k is window size.
public static void printKMax(int[] arr, int n, int k)
{
    // Deque<int> is not in the .NET base library; it is assumed to come from a
    // third-party library providing PushBack/PopBack/PopFront/PeekFront/PeekBack.
    Deque<int> qi = new Deque<int>();
    int i;
    for (i = 0; i < k; i++) // the first window of the array
    {
        // drop indices whose values are dominated by the new element
        while ((qi.Count > 0) && (arr[i] >= arr[qi.PeekBack()]))
        {
            qi.PopBack();
        }
        qi.PushBack(i);
    }
    for (i = k; i < n; ++i)
    {
        // the front of the deque holds the index of the previous window's maximum
        Console.WriteLine(arr[qi.PeekFront()]);
        while (qi.Count > 0 && qi.PeekFront() <= i - k)
        {
            qi.PopFront(); // it has slid out of the current window
        }
        while (qi.Count > 0 && arr[i] >= arr[qi.PeekBack()])
        {
            qi.PopBack();
        }
        qi.PushBack(i);
    }
    Console.WriteLine(arr[qi.PeekFront()]); // maximum of the last window
}

Related

How to find the rightmost number in the array which is greater or equal to current one in O(N) time?

Given an array nums of integers with length n, for each index i I am trying to find the rightmost index j such that i < j and nums[j] >= nums[i]. Is there an O(N) solution for this problem? I am aware that a monotonic stack can be used for this kind of problem, but I have been unable to derive an algorithm.
For example, given an array A:
A = [9,8,1,0,1,9,4,0,4,1], the solution should output
[5,5,9,9,9,-1,8,9,-1,-1]. Here -1 indicates no indices satisfy the constraint.
This link asked the same question, and the accepted answer is only O(N log N). I'd like to know whether an O(N) solution is possible.
Thank you.
Update
Based on @Aivean's answer, here is an O(N log N) solution in Python.
import bisect

def rightmostGreaterOrEqual(nums):
    n = len(nums)
    indx = [-1] * n
    # stack: indices whose values form a strictly increasing sequence
    # (built right to left); stackv: the corresponding values, sorted ascending
    stack, stackv = [], []
    for i in range(n - 1, -1, -1):
        if not stack or nums[stack[-1]] < nums[i]:
            stack.append(i)
            stackv.append(nums[i])
        else:
            idx = bisect.bisect_left(stackv, nums[i])
            indx[i] = stack[idx]
    return indx

B = [9, 8, 1, 0, 1, 9, 4, 0, 4, 1]
rightGreat = rightmostGreaterOrEqual(B)
print(B)
# [9, 8, 1, 0, 1, 9, 4, 0, 4, 1]
print(rightGreat)
# [5, 5, 9, 9, 9, -1, 8, 9, -1, -1]
There is not going to be an O(N) algorithm for the problem as written. Given a function that solves this problem, you could use it to partition N/2 arbitrary numbers among N/2 arbitrary adjacent ranges.
For example, [2532,1463,3264,200,4000,3000,2000,1000] produces [5,6,4,7,-1,-1,-1,-1], identifying the ranges of the first N/2 numbers.
If you can only relate the numbers by comparison, then this takes N/2 * log(N/2) comparisons, so O(N log N) time.
Without a limit on the size of the numbers, which would let you cheat as in a radix sort, there isn't going to be a way that is asymptotically faster than all comparison-based methods.
The two problems of finding the leftmost and the rightmost j for a given i are not symmetrical, because of the added constraint i < j. When I talk about these two tasks I assume that the constraint i < j is not flipped. This constraint means that we always look to the right of i when searching for j, whether we're looking for the rightmost or the leftmost j. Without this constraint the two tasks would be symmetrical.
1. Finding rightmost j, such that i < j and nums[i] ≤ nums[j]
One way to solve this task is to traverse nums from right to left and maintain a strictly increasing subsequence of already-visited elements (with their indices). The current element is added to the sequence only if it is larger than the largest element already present in the sequence. Adding a new element to the sequence is O(1).
For each element of nums you have to perform a binary search in the subsequence of visited elements to find the smallest value that is larger than or equal to the current element. Binary search is O(log n).
The total time is O(n log n), the auxiliary space needed is O(n).
In a picture of the problem, yellow dots would represent the elements that form the strictly increasing sequence (their answer will be -1); every other element (in blue) picks one of the ranges formed by the yellow elements.
2. Finding leftmost j, such that i < j and nums[i] ≤ nums[j]
This problem, as opposed to the previous one, can be solved in O(n) time and O(n) space using a monotonic stack. Similar to the previous problem, you traverse nums from right to left and maintain a stack, but, importantly, when a new element is added to the stack, all elements that are smaller are first removed from it. Instead of using binary search to find a larger element, the answer is right at the new top of the stack (after all smaller elements have been removed). This makes updating the stack and finding the answer for each element amortized O(1).
Again picturing the array, the yellow elements are those with answer -1: when they were added to the stack, they emptied it completely, as they were larger than every other element in it.
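Here is a short Python sketch of this monotonic-stack solution (my own illustration, not from the original answer):

def leftmostGreaterOrEqual(nums):
    n = len(nums)
    indx = [-1] * n
    stack = []  # indices; their values are non-increasing from bottom to top
    for i in range(n - 1, -1, -1):
        # elements smaller than nums[i] can never be the answer
        # for anything to the left of i, so discard them
        while stack and nums[stack[-1]] < nums[i]:
            stack.pop()
        if stack:
            indx[i] = stack[-1]  # nearest j > i with nums[j] >= nums[i]
        stack.append(i)
    return indx

print(leftmostGreaterOrEqual([9, 8, 1, 0, 1, 9, 4, 0, 4, 1]))
# [5, 5, 4, 4, 5, -1, 8, 8, -1, -1]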

Find the element occurring once in an array where all other elements occur twice (without using XOR)

I have tried solving this for so long but I can't seem to be able to.
The question is as follows:
Given an array of n numbers where all of the numbers occur twice except for one, which occurs only once, find the number that occurs only once.
Now, I have found many solutions online for this, but none of them satisfy the additional constraints of the question.
The solution should:
Run in linear time (aka O(n)).
Not use hash tables.
Assume the computer supports only comparisons and arithmetic (addition, subtraction, multiplication, division).
The number of bits in each number in the array is about O(log(n)).
Therefore, trying something like this https://stackoverflow.com/a/4772568/7774315 using the XOR operator isn't possible, since we don't have the XOR operator. Since the number of bits in each number is about O(log(n)), implementing the XOR operator using ordinary arithmetic (bit by bit) would take about O(log(n)) operations, giving an overall solution of O(n log(n)).
The closest I have come to solving it is this: if I had a way to get the sum of all unique values in the array in linear time, I could subtract twice that sum from the overall sum to get (the negative of) the element that occurs only once, because if the numbers that appear twice are {a1, a2, ..., ak} and the number that appears once is x, then the overall sum is
sum = 2(a1 + ... + ak) + x
As far as I know, sets are implemented using hash tables, so using them to find the sum of all unique values is no good.
Let's imagine we had a way to find the exact median in linear time and partition the array so that all greater elements are on one side and all smaller elements on the other. Since pairs contribute an even count to each side, the side with an odd number of elements is the one containing the target, so we perform this routine recursively on that section. Since the section is halved in size each time, the total number of elements traversed cannot exceed n + n/2 + n/4 + ... < 2n, so this is O(n).
The key element in the question seems to be this one:
The number of bits in each number in the array is about O(log(n)).
The issue is that this clue is a little bit vague.
A first approach is to consider that the maximum value is O(n). Then a counting sort can be performed in O(n) operations and O(n) memory.
It consists in finding the maximum value MAX, setting up an integer array C[MAX + 1], and performing a classical counting sort directly with it:
C[a[i]]++;
Looking for an odd value in the array C[] then provides the solution.
A second approach, I guess more efficient, would be to set up an array of size n, each element consisting of an array of unknown size. Then a kind of almost-counting sort would consist of:
C[a[i] % n].append(a[i]);
To find the unique element, we then have to find the sub-array of odd size, and then examine the elements in this sub-array.
The maximum size k of each sub-array will be about 2*(MAX/n). According to the clue, this value should be very low. Dealing with this sub-array has a complexity O(k), for example by performing a counting sort on the b[j]/n, all the elements being equal modulo n.
We can note that, practically, this is equivalent to performing a kind of ad-hoc hashing.
Global complexity is O(n + MAX/n).
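Here is a Python sketch of this second approach (my own illustration; it assumes non-negative integers):

def find_unique(a):
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[x % n].append(x)  # equal values always share a bucket
    for b in buckets:
        if len(b) % 2 == 1:  # only the bucket holding the unique value has odd size
            # within the bucket all values are congruent mod n, so they are
            # distinguished by x // n; counting-sort on that small key
            keys = [x // n for x in b]
            count = [0] * (max(keys) + 1)
            for q in keys:
                count[q] += 1
            for x, q in zip(b, keys):
                if count[q] % 2 == 1:
                    return x

print(find_unique([3, 7, 1, 7, 3]))  # prints 1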
This should do the trick as long as you are dealing with integers of size O(log n). It is a Python implementation of the algorithm sketched in @גלעד ברקן's answer (including @OneLyner's comments), where the median is replaced by a mean or mid-value.
def mean(items):
    result = 0
    for i, item in enumerate(items, 1):
        result = (result * (i - 1) + item) / i
    return result

def midval(items):
    min_val = max_val = items[0]
    for item in items:
        if item < min_val:
            min_val = item
        elif item > max_val:
            max_val = item
    return (min_val + max_val) / 2  # midpoint of the value range

def find_singleton(items, pivoting=mean):
    n = len(items)
    if n == 1:
        return items[0]
    else:
        # find pivot - O(n)
        pivot = pivoting(items)
        # partition the items - O(n)
        j = 0
        for i, item in enumerate(items):
            if item > pivot:
                items[j], items[i] = items[i], items[j]
                j += 1
        # recursion on the partition with an odd number of elements
        if j % 2:
            return find_singleton(items[:j])
        else:
            return find_singleton(items[j:])
The following code is just for some sanity-checking on random inputs:
import random

def gen_input(n, randomize=True):
    """Generate an odd-sized input: unique pairs plus one singleton."""
    items = sorted(set(random.randint(-n, n) for _ in range(n)))[:n]
    singleton = items[-1]
    items = items + items[:-1]
    if randomize:
        random.shuffle(items)
    return items, singleton

items, singleton = gen_input(100)
print(singleton, len(items), items.index(singleton), items)
print(find_singleton(items, mean))
print(find_singleton(items, midval))
For a symmetric distribution the median and the mean or mid-value coincide.
With the log(n) requirement on the number of bits of the entries, one can show that no arbitrary sub-sampling can be skewed enough to require more than log(n) recursions.
For example, considering the case of k = log(n) bits with k = 4 and only positive numbers, the worst case is: [0, 1, 1, 2, 2, 4, 4, 8, 8, 16, 16]. Here pivoting by the mean reduces the input by two elements at a time, resulting in k + 1 recursive calls; adding any other pair to the input will not increase the number of recursive calls, while it will increase the input size.
(EDITED to provide a better explanation.)
Here is an (unoptimized) implementation of the idea sketched by @גלעד ברקן.
I'm using median-of-medians to get a value close enough to the median to ensure linear time in the worst case.
NB: this in fact uses only comparisons, and it is O(n) whatever the size of the integers, as long as comparisons and copies are counted as O(1).
def median_small(L):
    return sorted(L)[len(L) // 2]

def median_of_medians(L):
    if len(L) < 20:
        return median_small(L)
    return median_of_medians([median_small(L[i:i + 5]) for i in range(0, len(L), 5)])

def find_single(L):
    if len(L) == 1:
        return L[0]
    pivot = median_of_medians(L)
    smaller = [i for i in L if i <= pivot]
    bigger = [i for i in L if i > pivot]
    if len(smaller) % 2:
        return find_single(smaller)
    else:
        return find_single(bigger)
This version needs O(n) additional space, but it could be implemented with O(1) extra space.

Find the longest subarray that contains a majority element

I am trying to solve this algorithmic problem:
https://dunjudge.me/analysis/problems/469/
For convenience, I have summarized the problem statement below.
Given an array of length (<= 2,000,000) containing integers in the range [0, 1,000,000], find the
longest subarray that contains a majority element.
A majority element is defined as an element that occurs > floor(n/2) times in a list of length n.
Time limit: 1.5s
For example:
If the given array is [1, 2, 1, 2, 3, 2],
The answer is 5 because the subarray [2, 1, 2, 3, 2] of length 5 from position 1 to 5 (0-indexed) has the number 2 which appears 3 > floor(5/2) times. Note that we cannot take the entire array because 3 = floor(6/2).
My attempt:
The first thing that comes to mind is an obvious brute force (but correct) solution which fixes the start and end indexes of a subarray and loop through it to check if it contains a majority element. Then we take the length of the longest subarray that contains a majority element. This works in O(n^2) with a small optimization. Clearly, this will not pass the time limit.
I was also thinking of dividing the elements into buckets that contain their indexes in sorted order.
Using the example above, these buckets would be:
1: 0, 2
2: 1, 3, 5
3: 4
Then for each bucket, I would make an attempt to merge the indexes together to find the longest subarray that contains k as the majority element where k is the integer label of that bucket.
We could then take the maximum length over all values of k. I didn't try out this solution as I didn't know how to perform the merging step.
Could someone please advise me on a better approach to solve this problem?
Edit:
I solved this problem thanks to the answers of PhamTrung and hk6279. Although I accepted the answer from PhamTrung because he first suggested the idea, I highly recommend looking at the answer by hk6279, because it elaborates PhamTrung's idea, is much more detailed, and also comes with a nice formal proof!
Note: attempt 1 is wrong, as @hk6279 has given a counter-example. Thanks for pointing it out.
Attempt 1:
The answer is quite complex, so I will discuss a brief idea.
Let's process each unique number one by one.
Processing each occurrence of number x from left to right, at index i, we add a segment (i, i) indicating the start and end of the current subarray. After that, we look to the left of this segment and try to merge its left neighbour into (i, i) (so, if the left neighbour is (st, ed), we try to make it become (st, i)) if the condition is satisfied, and continue to merge until we are not able to merge or there is no left neighbour.
We keep all those segments in a stack for faster look-up/add/remove.
Finally, for each segment, we try to enlarge it as much as possible, and keep the biggest result.
Time complexity should be O(n), as each element can only be merged once.
Attempt 2:
Let's process each unique number one by one.
For each unique number x, we maintain an array of counters. Going from 0 to the end of the array, if we encounter the value x we increase the count, and if we don't we decrease it. So for the array
[0,1,2,0,0,3,4,5,0,0] and the number 0, we have this counter array:
[1,0,-1,0,1,0,-1,-2,-1,0]
So, in order to make a valid subarray which ends at a specific index i, the value of counter[i] - counter[start - 1] must be greater than 0. (This is easily explained if you view the array as made of 1 and -1 entries, with 1 wherever there is an occurrence of x and -1 otherwise; the problem then converts into finding a subarray with positive sum.)
So, with the help of a binary search, the above algorithm still has a complexity of O(n^2 log n) (in case we have n/2 unique numbers, we need to do the above process n/2 times, each time taking O(n log n)).
To improve it, we make the observation that we don't actually need to store counter values for all positions, but just the values of the counter at occurrences of x; for the counter array above we can store:
[1,#,#,0,1,#,#,#,-1,0]
This leads to an O(n log n) solution, which only goes through each element once.
This elaborates and explains how attempt 2 in @PhamTrung's solution works.
To get the length of the longest subarray, we should:
Find the maximum number of occurrences of the majority element over all valid subarrays; denote it m. This is done by attempt 2 in @PhamTrung's solution.
Return min(2*m - 1, length of the given array).
Concept
The attempt stems from a method for solving the longest positive-sum subarray problem.
We maintain an array of counters for each unique number x. We do +1 when we encounter x; otherwise, we do -1.
Take the array [0,1,2,0,0,3,4,5,0,0,1,0] and the unique number 0: we have the counter array [1,0,-1,0,1,0,-1,-2,-1,0,-1,0]. If we blank out the entries that are not at occurrences of the target unique number, we get [1,#,#,0,1,#,#,#,-1,0,#,0].
We can get a valid subarray from the blanked counter array whenever there exist two counters such that the value of the right counter is greater than or equal to the left one. See the Proof part.
To further improve it, we can drop all the # entries, as they are useless, and get [1(0),0(3),1(4),-1(8),0(9),0(11)] in count(index) format.
We can improve this further by not recording a counter that is greater than its previous effective counter. Take the counters at indexes 8 and 9 as an example: if you can form a subarray with index 9, then you must also be able to form one with index 8. So we only need [1(0),0(3),-1(8)] for the computation.
You can find, for the current index, the longest valid subarray over all previous indexes using binary search on the counter array: look for the closest value that is less than or equal to the current counter value (if one exists).
Proof
When a right counter is greater than a left counter by r for a particular x, with k non-x elements in between (k, r >= 0), there must be k+r occurrences of x and k non-x elements after the left counter. Thus:
The two counters are at index positions i and r+2k+i.
The subarray formed between [i, r+2k+i] has exactly k+r+1 occurrences of x.
The subarray length is 2k+r+1.
The subarray is valid, as 2k+r+1 <= 2*(k+r+1) - 1.
Procedure
Let m = 1
Loop the array from left to right
For each index pi
If the number is first encounter,
Create a new counter array [1(pi)]
Create a new index record storing current index value (pi) and counter value (1)
Otherwise, reuse the counter array and index array of the number and perform
Calculate current counter value ci by cprev+2-(pi - pprev), where cprev,pprev are counter value and index value in index record
Perform binary search to find the longest subarray that can be formed with current index position and all previous index position. i.e. Find the closest c, cclosest, in counter array where c<=ci. If not found, jump to step 5
Calculate number of x in the subarray found in step 2
r = ci - cclosest
k = (pi-pclosest-r)/2
number of x = k+r+1
Update counter m by number of x if subarray has number of x > m
Update counter array by append current counter if counter value less than last recorded counter value
Update index record by current index (pi) and counter value (ci)
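Here is a Python sketch of this procedure (my own illustration, not from the original answer); it stores each value's decreasing counters negated so that the bisect module can search them:

import bisect
from collections import defaultdict

def longest_majority_subarray(a):
    neg = defaultdict(list)   # per value x: recorded counters, negated (ascending)
    pos = defaultdict(list)   # per value x: positions matching those counters
    prev = {}                 # per value x: (last counter value, last index)
    m = 0
    for i, x in enumerate(a):
        if x not in prev:
            c, best = 1, 1                       # first occurrence: counter is 1
            neg[x].append(-1)
            pos[x].append(i)
        else:
            cp, ip = prev[x]
            c = cp + 2 - (i - ip)                # step 1
            j = bisect.bisect_left(neg[x], -c)   # step 2: leftmost counter <= c
            if j < len(neg[x]):
                r = c + neg[x][j]                # step 3: r = c_i - c_closest
                k = (i - pos[x][j] - r) // 2
                best = k + r + 1
            else:
                best = 1
            if -c > neg[x][-1]:                  # step 5: record only decreasing counters
                neg[x].append(-c)
                pos[x].append(i)
        prev[x] = (c, i)                         # step 6
        m = max(m, best)                         # step 4
    return min(2 * m - 1, len(a)) if m else 0

print(longest_majority_subarray([1, 2, 1, 2, 3, 2]))  # prints 5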
For completeness, here's an outline of an O(n) theory. Consider the following, where * are characters different from c:
     * c * * c * * c c c
  i: 0 1 2 3 4 5 6 7 8 9
A plot for adding 1 for c and subtracting 1 for a character other than c could look like:
sum_sequence
  0    c               c
 -1  *   *   c       c
 -2        *   *   c
 -3              *
A plot for the minimum of the above sum sequence, seen for c, could look like:
min_sum
  0    c * *
 -1  *       c * *
 -2              c c c
Clearly, for each occurrence of c, we are looking for the leftmost occurrence of c with sum_sequence lower than or equal to the current sum_sequence. A non-negative difference would mean c is a majority, and leftmost guarantees the interval is the longest up to our position. (We can extrapolate a maximal length that is bounded by characters other than c from the inner bounds of c as the former can be flexible without affecting the majority.)
Observe that from one occurrence of c to the next, its sum_sequence can decrease by an arbitrary amount. However, it can only ever increase by 1 between two consecutive occurrences of c. Rather than each value of min_sum for c, we can record linear segments, marked by occurrences of c. A visual example:
[start_min
 \
  \
   \
    \
     end_min, start_min
      \
       \
        end_min]
We iterate over occurrences of c and maintain a pointer to the optimal segment of min_sum. Clearly we can derive the next sum_sequence value for c from the previous one since it is exactly diminished by the number of characters in between.
An increase in sum_sequence for c corresponds with a shift of 1 back or no change in the pointer to the optimal min_sum segment. If there is no change in the pointer, we hash the current sum_sequence value as a key to the current pointer value. There can be O(num_occurrences_of_c) such hash keys.
With an arbitrary decrease in c's sum_sequence value, either (1) sum_sequence is lower than the lowest min_sum segment recorded so we add a new, lower segment and update the pointer, or (2) we've seen this exact sum_sequence value before (since all increases are by 1 only) and can use our hash to retrieve the optimal min_sum segment in O(1).
As Matt Timmermans pointed out in the question comments, if we were just to continually update the pointer to the optimal min_sum by iterating over the list, we would still only perform O(1) amortized-time iterations per character occurrence. We see that for each increasing segment of sum_sequence, we can update the pointer in O(1). If we used binary search only for the descents, we would add at most log k iterations for every k occurrences (this assumes we jump down all the way), which keeps our overall time at O(n).
Algorithm:
Essentially, what Boyer-Moore does is look for a suffix suf of nums where suf[0] is the majority element in that suffix. To do this, we maintain a count, which is incremented whenever we see an instance of our current candidate for majority element and decremented whenever we see anything else. Whenever count equals 0, we effectively forget about everything in nums up to the current index and consider the current number as the candidate for majority element. It is not immediately obvious why we can get away with forgetting prefixes of nums - consider the following examples (pipes are inserted to separate runs of nonzero count).
[7, 7, 5, 7, 5, 1 | 5, 7 | 5, 5, 7, 7 | 7, 7, 7, 7]
Here, the 7 at index 0 is selected to be the first candidate for majority element. count will eventually reach 0 after index 5 is processed, so the 5 at index 6 will be the next candidate. In this case, 7 is the true majority element, so by disregarding this prefix, we are ignoring an equal number of majority and minority elements - therefore, 7 will still be the majority element in the suffix formed by throwing away the first prefix.
[7, 7, 5, 7, 5, 1 | 5, 7 | 5, 5, 7, 7 | 5, 5, 5, 5]
Now, the majority element is 5 (we changed the last run of the array from 7s to 5s), but our first candidate is still 7. In this case, our candidate is not the true majority element, but we still cannot discard more majority elements than minority elements (this would imply that count could reach -1 before we reassign candidate, which is obviously false).
Therefore, given that it is impossible (in both cases) to discard more majority elements than minority elements, we are safe in discarding the prefix and attempting to recursively solve the majority element problem for the suffix. Eventually, a suffix will be found for which count does not hit 0, and the majority element of that suffix will necessarily be the same as the majority element of the overall array.
Here's a Java solution:
Time complexity: O(n)
Space complexity: O(1)
public int majorityElement(int[] nums) {
    int count = 0;
    Integer candidate = null;
    for (int num : nums) {
        if (count == 0) {
            candidate = num;
        }
        count += (num == candidate) ? 1 : -1;
    }
    return candidate;
}

max. distance of a number greater than a given number in array

I was going through an interview question and came up with logic that requires the following:
Find an index j for an element a[j] larger than a[i] (with j < i), such that (i - j) is the largest. And I want to find this j for every index i in the array, in O(n) or O(n log n) time with O(n) extra space.
What I have done until now:
1) O(n^2) by using simple for loops.
2) Build a balanced B.S.T. as we scan the elements from left to right, and for the i-th element find the index of an element greater than it. But I realized that this can easily be O(n) for a single element, and therefore O(n^2) for the entire array.
I want to know if it is possible to do it in O(n) or O(n log n). If yes, please give some hints.
EDIT: I think I was unable to explain my question. Let me explain it clearly:
I want arr[j] to the left of arr[i] such that (i - j) is the largest possible, arr[j] > arr[i], and to find this for every index i, i.e. for i = 0 to n-1.
EDIT 2: example - {2,3,1,6,0}
for 2, ans = -1
for 3, ans = -1
for 1, ans = 2 (i-j) == (2-0)
for 6, ans = -1
for 0, ans = 4 (i-j) == (4-0)
Create an auxiliary array of maximums, call it maxs, which contains the max value of the array up to the current index.
Formally: maxs[i] = max { arr[0], arr[1], ..., arr[i] }
Note that this is a preprocessing step that can be done in O(n).
Now for each element i, you are looking for the first element in maxs that is larger than arr[i]. Since maxs is non-decreasing, this can be done using binary search, and is O(log n) per operation.
That gives you a total of O(n log n) time and O(n) extra space.
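Here is a Python sketch of this approach (my own illustration, not from the original answer): bisect_right on the non-decreasing maxs array yields the leftmost index whose prefix maximum, and hence whose element, exceeds arr[i]:

import bisect

def farthest_greater_to_left(arr):
    maxs = []
    m = float('-inf')
    for x in arr:
        m = max(m, x)
        maxs.append(m)  # non-decreasing prefix maxima
    res = [-1] * len(arr)
    for i, x in enumerate(arr):
        j = bisect.bisect_right(maxs, x)  # leftmost j with maxs[j] > x
        if j < i:
            res[i] = j  # the farthest valid index; the distance is i - j
    return res

print(farthest_greater_to_left([2, 3, 1, 6, 0]))  # prints [-1, -1, 0, -1, 0]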
You can do this in O(n) time using a stack data structure for array indexes for which you have yet to find a solution. It can be implemented as an array of at most n elements.
Iterate over the input array from right to left, starting with the last element:
Pop all indexes from the stack for which the array element is less than the current element. Mark the index of the current element as the solution for each index you pop.
Push the index of the current element on the stack.
Invariant: the array items corresponding to the indexes in the stack are always in ascending order, with the least item on top.
When you reach the beginning of the input, mark any items that still remain on the stack with -1; for them there is no answer.
Each array index is pushed into the stack exactly once and popped at most once, so this algorithm runs in O(n) time.
An example in Python:
def solution(arr):
    stack = []
    out = [-1] * len(arr)
    for i in range(len(arr) - 1, -1, -1):
        while len(stack) > 0 and arr[stack[-1]] < arr[i]:
            out[stack.pop()] = i
        stack.append(i)
    return out
Note that the correct answer for input [2, 4, 1, 5, 3] is [-1, -1, 1, -1, 3]: for a fixed i, the difference j-i is greatest when j is greatest, so you are looking for the leftmost index j, which minimizes the distance. (When j < i, the difference j-i is negative.)
The fastest solution I can think of is allocating a second array and scanning the input left to right. As you scan, append the index of the current element to your second array if arr[index] is greater than the element at the last stored index. This is O(1) per append, with a maximum of n appends, so O(n).
Finally, once your array is complete, take a second pass through the input. For each element, scan your second array using binary search (this is possible since it is implicitly sorted) and find the leftmost (earliest-inserted) index j such that arr[j] > arr[i].
To do this, you have to modify the binary search: if you find an index j such that arr[j] > arr[i], you still have to check whether there are any indices k further to the left with arr[k] > arr[i]. You must continue until you find the leftmost such index.
I think this is O(log n) per binary search, and you have to do the search for n elements, so the total time complexity would be close to O(n log n), but I am not sure of this. Any comments/suggestions on this answer would be much appreciated.
Here's my solution in C++
We maintain an increasing array. Compare the current element with the element at the back of the array.
If it is larger than or equal to the largest element so far, then append it to the array and return -1; there is no larger element on its left.
If not, we use binary search to find the index and return the difference.
(We still need to append vec.back() to the array, because we cannot change the indexing.)
#include <vector>
#include <algorithm>
using namespace std;

int findIdx(vector<int>& vec, int target){
    // first position whose prefix maximum exceeds target
    auto it = upper_bound(vec.begin(), vec.end(), target);
    int idx = int(it - vec.begin());
    return idx;
}

vector<int> farestBig(vector<int>& arr){
    vector<int> ans{-1};
    vector<int> vec{arr[0]};  // vec[i]: max of arr[0..i], non-decreasing
    int n = (int)arr.size();
    for(int i = 1; i < n; i++){
        if(arr[i] >= vec.back()){
            ans.push_back(-1);
            vec.push_back(arr[i]);
        }
        else{
            int idx = findIdx(vec, arr[i]);
            ans.push_back(i - idx);
            vec.push_back(vec.back());
        }
    }
    return ans;
}

How can I find a number which occurs an odd number of times in a SORTED array in O(n) time?

I have a question and I have tried to think it over again and again... but got nothing, so I am posting it here. Maybe I could get some viewpoints from others to try and make it work...
The question is: we are given a SORTED array, which consists of a collection of values occurring an EVEN number of times, except one, which occurs an ODD number of times. We need to find the solution in log n time.
It is easy to find the solution in O(n) time, but it looks pretty tricky to perform it in log n time.
Theorem: Every deterministic algorithm for this problem probes Ω(log² n) memory locations in the worst case.
Proof (completely rewritten in a more formal style):
Let k > 0 be an odd integer and let n = k². We describe an adversary that forces (log₂(k + 1))² = Ω(log² n) probes.
We call the maximal subsequences of identical elements groups. The adversary's possible inputs consist of k length-k segments x1 x2 … xk. For each segment xj, there exists an integer bj ∈ [0, k] such that xj consists of bj copies of j - 1 followed by k - bj copies of j. Each group overlaps at most two segments, and each segment overlaps at most two groups.
Group boundaries
|     |        |     |     |
 0  0  1  1  1  2  2  3  3
|        |        |        |
Segment boundaries
Wherever there is an increase of two, we assume a double boundary by convention.
Group boundaries
|        ||          |     |
 0  0  0  2  2  2  2  3  3
Claim: The location of the jth group boundary (1 ≤ j ≤ k) is uniquely determined by the segment xj.
Proof: It's just after the ((j - 1) k + bj)th memory location, and xj uniquely determines bj. //
We say that the algorithm has observed the jth group boundary in case the results of its probes of xj uniquely determine xj. By convention, the beginning and the end of the input are always observed. It is possible for the algorithm to uniquely determine the location of a group boundary without observing it.
Group boundaries
|     X     |     |        |
 0  0  ?  1  2  2  3  3  3
|        |        |        |
Segment boundaries
Given only 0 0 ?, the algorithm cannot tell for sure whether ? is a 0 or a 1. In context, however, ? must be a 1, as otherwise there would be three odd groups, and the group boundary at X can be inferred. These inferences could be problematic for the adversary, but it turns out that they can be made only after the group boundary in question is "irrelevant".
Claim: At any given point during the algorithm's execution, consider the set of group boundaries that it has observed. Exactly one consecutive pair is at odd distance, and the odd group lies between them.
Proof: Every other consecutive pair bounds only even groups. //
Define the odd-length subsequence bounded by the special consecutive pair to be the relevant subsequence.
Claim: No group boundary in the interior of the relevant subsequence is uniquely determined. If there is at least one such boundary, then the identity of the odd group is not uniquely determined.
Proof: Without loss of generality, assume that each memory location not in the relevant subsequence has been probed and that each segment contained in the relevant subsequence has exactly one location that has not been probed. Suppose that the jth group boundary (call it B) lies in the interior of the relevant subsequence. By hypothesis, the probes to xj determine B's location up to two consecutive possibilities. We call the one at odd distance from the left observed boundary odd-left and the other odd-right. For both possibilities, we work left to right and fix the location of every remaining interior group boundary so that the group to its left is even. (We can do this because they each have two consecutive possibilities as well.) If B is at odd-left, then the group to its left is the unique odd group. If B is at odd-right, then the last group in the relevant subsequence is the unique odd group. Both are valid inputs, so the algorithm has uniquely determined neither the location of B nor the odd group. //
Example:
Observed group boundaries; relevant subsequence marked by […]
[                    ]     |
 0  0  Y  1  1  Z  2  3  3
|        |        |        |
Segment boundaries
Possibility #1: Y=0, Z=2
Possibility #2: Y=1, Z=2
Possibility #3: Y=1, Z=1
As a consequence of this claim, the algorithm, regardless of how it works, must narrow the relevant subsequence to one group. By definition, it therefore must observe some group boundaries. The adversary now has the simple task of keeping open as many possibilities as it can.
At any given point during the algorithm's execution, the adversary is internally committed to one possibility for each memory location outside of the relevant subsequence. At the beginning, the relevant subsequence is the entire input, so there are no initial commitments. Whenever the algorithm probes an uncommitted location of xj, the adversary must commit to one of two values: j - 1, or j. If it can avoid letting the jth boundary be observed, it chooses a value that leaves at least half of the remaining possibilities (with respect to observation). Otherwise, it chooses so as to keep at least half of the groups in the relevant interval and commits values for the others.
In this way, the adversary forces the algorithm to observe at least log₂(k + 1) group boundaries, and in observing the jth group boundary, the algorithm is forced to make at least log₂(k + 1) probes.
Extensions:
This result extends straightforwardly to randomized algorithms by randomizing the input, replacing "at best halved" (from the algorithm's point of view) with "at best halved in expectation", and applying standard concentration inequalities.
It also extends to the case where no group can be larger than s copies; in this case the lower bound is Ω(log n log s).
A sorted array suggests a binary search. We have to redefine equality and comparison. Equality simply means an odd number of elements. We can do comparison by observing the index of the first or last element of the group. The first element will be at an even index (0-based) before the odd group, and at an odd index after the odd group. We can find the first and last elements of a group using binary search. The total cost is O((log N)²).
PROOF OF O((log N)²)
T(2) = 1                // to make the summation nice
T(N) = log(N) + T(N/2)  // log(N) is for finding the first/last elements

For some N = 2^k,
T(2^k) = (log 2^k) + T(2^(k-1))
       = (log 2^k) + (log 2^(k-1)) + T(2^(k-2))
       = (log 2^k) + (log 2^(k-1)) + (log 2^(k-2)) + ... + (log 2^2) + 1
       = k + (k-1) + (k-2) + ... + 1
       = k(k+1)/2
       = (k² + k)/2
       = (log(N)² + log(N)) / 2
       = O(log(N)²)
Look at the middle element of the array. With a couple of appropriate binary searches, you can find its first and its last appearance in the array. E.g., if the middle element is 'a', you need to find i and j as shown below:
[* * * * a a a a * * *]
         ^     ^
         |     |
         i     j
Is j - i an even number? You are done! Otherwise (and this is the key here), the question to ask is: is i an even or an odd number? Do you see what this piece of knowledge implies? Then the rest is easy.
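Here is a Python sketch of "the rest" (my own illustration, not from the original answer); it maintains a window arr[lo:hi] of odd length that contains the odd group:

from bisect import bisect_left, bisect_right

def find_odd_occurrence(arr):
    lo, hi = 0, len(arr)  # arr[lo:hi] always has odd length
    while True:
        v = arr[(lo + hi) // 2]
        first = bisect_left(arr, v, lo, hi)   # first index of v's group
        last = bisect_right(arr, v, lo, hi)   # one past the last index
        if (last - first) % 2 == 1:
            return v  # v itself occurs an odd number of times
        # the group is even, so the answer lies in the odd-length side
        if (first - lo) % 2 == 1:
            hi = first
        else:
            lo = last

print(find_odd_occurrence([1, 1, 2, 2, 2, 2, 3, 3, 5, 5, 5]))  # prints 5

Since the probed group straddles the middle, each step at least halves the window, giving O(log n) steps of O(log n) binary searches each, i.e. O((log n)²) overall.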
This answer is in support of the answer posted by "throwawayacct". He deserves the bounty. I spent some time on this question and I'm totally convinced that his proof is correct that you need Ω(log(n)^2) queries to find the number that occurs an odd number of times. I'm convinced because I ended up recreating the exact same argument after only skimming his solution.
In the solution, an adversary creates an input to make life hard for the algorithm, but also simple for a human analyzer. The input consists of k pages that each have k entries. The total number of entries is n = k^2, and it is important that O(log(k)) = O(log(n)) and Ω(log(k)) = Ω(log(n)). To make the input, the adversary makes a string of length k of the form 00...011...1, with the transition in an arbitrary position. Then each symbol in the string is expanded into a page of length k of the form aa...abb...b, where on the ith page, a=i and b=i+1. The transition on each page is also in an arbitrary position, except that the parity agrees with the symbol that the page was expanded from.
It is important to understand the "adversary method" of analyzing an algorithm's worst case. The adversary answers queries about the algorithm's input, without committing to future answers. The answers have to be consistent, and the game is over when the adversary has been pinned down enough for the algorithm to reach a conclusion.
With that background, here are some observations:
1) If you want to learn the parity of a transition in a page by making queries in that page, you have to learn the exact position of the transition and you need Ω(log(k)) queries. Any collection of queries restricts the transition point to an interval, and any interval of length more than 1 has both parities. The most efficient search for the transition in that page is a binary search.
2) The most subtle and most important point: There are two ways to determine the parity of a transition inside a specific page. You can either make enough queries in that page to find the transition, or you can infer the parity if you find the same parity in both an earlier and a later page. There is no escape from this either-or. Any set of queries restricts the transition point in each page to some interval. The only restriction on parities comes from intervals of length 1. Otherwise the transition points are free to wiggle to have any consistent parities.
3) In the adversary method, there are no lucky strikes. For instance, suppose that your first query in some page is toward one end instead of in the middle. Since the adversary hasn't committed to an answer, he's free to put the transition on the long side.
4) The end result is that you are forced to directly probe the parities in Ω(log(k)) pages, and the work for each of these subproblems is also Ω(log(k)).
5) Things are not much better with random choices than with adversarial choices. The math is more complicated, because now you can get partial statistical information, rather than a strict yes you know a parity or no you don't know it. But it makes little difference. For instance, you can give each page length k^2, so that with high probability, the first log(k) queries in each page tell you almost nothing about the parity in that page. The adversary can make random choices at the beginning and it still works.
Start at the middle of the array and walk backward until you get to a value that's different from the one at the center. Check whether the number above that boundary is at an odd or even index. If it's odd, then the number occurring an odd number of times is to the left, so repeat your search between the beginning and the boundary you found. If it's even, then the number occurring an odd number of times must be later in the array, so repeat the search in the right half.
As stated, this has both a logarithmic and a linear component. If you want to keep the whole thing logarithmic, instead of just walking backward through the array to a different value, you want to use a binary search instead. Unless you expect many repetitions of the same numbers, the binary search may not be worthwhile though.
I have an algorithm which works in log(N/C)*log(K) time, where K is the length of the maximum same-value range, and C is the length of the range being searched for.
The main difference between this algorithm and most of those posted before is that it takes advantage of the case where all same-value ranges are short. It finds boundaries not by binary-searching the entire array, but by first quickly finding a rough estimate by jumping back by 1, 2, 4, 8, ... (log(K) iterations) steps, and then binary-searching the resulting range (log(K) again).
The algorithm is as follows (written in C#):
// Finds the start of the range of equal numbers containing the index "index",
// which is assumed to be inside the array
//
// Complexity is O(log(K)) with K being the length of the range
static int findRangeStart(int[] arr, int index)
{
    int candidate = index;
    int value = arr[index];
    int step = 1;
    // find the boundary for binary search:
    while (candidate >= 0 && arr[candidate] == value)
    {
        candidate -= step;
        step *= 2;
    }
    // binary search over (candidate, candidate + step/2]; candidate may be -1,
    // which is fine as the exclusive lower bound
    int a = candidate;
    int b = candidate + step / 2;
    while (a + 1 != b)
    {
        int c = (a + b) / 2;
        if (arr[c] == value)
            b = c;
        else
            a = c;
    }
    return b;
}

// Finds the index after the only "odd" range of equal numbers in the array.
// The result should be in the range (start; end]
// The "end" is considered to always be the end of some equal number range.
static int search(int[] arr, int start, int end)
{
    if (arr[start] == arr[end - 1])
        return end;
    int middle = (start + end) / 2;
    int rangeStart = findRangeStart(arr, middle);
    if ((rangeStart & 1) == 0)
        return search(arr, middle, end);
    return search(arr, start, rangeStart);
}

// Finds the index after the only "odd" range of equal numbers in the array
static int search(int[] arr)
{
    return search(arr, 0, arr.Length);
}
Take the middle element e. Use binary search to find the first and last occurrence. O(log(n))
If it is odd return e.
Otherwise, recurse onto the side that has an odd number of elements [....]eeee[....]
Runtime will be log(n) + log(n/2) + log(n/4).... = O(log(n)^2).
AHhh. There is an answer.
Do a binary search and as you search, for each value, move backwards until you find the first entry with that same value. If its index is even, it is before the oddball, so move to the right.
If its array index is odd, it is after the oddball, so move to the left.
In pseudocode (this is the general idea, not tested...):
private static int FindOddBall(int[] ary)
{
    int l = 0,
        r = ary.Length - 1;
    int n = (l + r) / 2;
    while (r > l + 2)
    {
        n = (l + r) / 2;
        while (ary[n] == ary[n - 1])
            n = FindBreakIndex(ary, l, n);
        if (n % 2 == 0) // even index: we are on or to the left of the oddball
            l = n;
        else            // odd index: we are to the right of the oddball
            r = n - 1;
    }
    return ary[l];
}

private static int FindBreakIndex(int[] ary, int l, int n)
{
    var t = ary[n];
    var r = n;
    while (ary[n] != t || ary[n] == ary[n - 1])
        if (ary[n] == t)
        {
            r = n;
            n = (l + r) / 2;
        }
        else
        {
            l = n;
            n = (l + r) / 2;
        }
    return n;
}
You can use this algorithm:
int GetSpecialOne(int[] array, int length)
{
    int specialOne = array[0];
    for (int i = 1; i < length; i++)
    {
        specialOne ^= array[i];
    }
    return specialOne;
}
Solved with the help of a similar question which can be found here on http://www.technicalinterviewquestions.net
We don't have any information about the distribution of the lengths inside the array, or of the array as a whole, right?
So the array length might be 1, 11, 101, 1001 or something else; at least 1, with no upper bound. It must contain at least 1 distinct value ('number') and up to (length-1)/2 + 1 distinct values; for total sizes of 1, 11, 101 that is 1, 1 to 6, and 1 to 51 values, and so on.
Shall we assume every possible size to be of equal probability? This would lead to a mean sublist length of size/4, wouldn't it?
An array of size 5 could be divided into 1, 2 or 3 sublists.
What seems to be obvious is not that obvious, if we go into details.
An array of size 5 can be 'divided' into one sublist in just one way, with arguable right to call it 'dividing'. It's just a list of 5 elements (aaaaa). To avoid confusion let's assume the elements inside the list to be ordered characters, not numbers (a, b, c, ...).
Divided into two sublists, they might be (1, 4), (2, 3), (3, 2), (4, 1): (abbbb, aabbb, aaabb, aaaab).
Now let's look back at the claim made before: shall the 'division' (5) be assumed to have the same probability as those 4 divisions into 2 sublists? Or shall we mix them together and assume every partition to be equally probable, (1/5)?
Or can we calculate the solution without knowing the probability of the lengths of the sublists?
The clue is that you're looking for log(n). That's less than n.
Stepping through the entire array, one element at a time? That's n. That's not going to work.
We know that the first two indexes in the array (0 and 1) should hold the same number. Same with indexes 50 and 51, if the odd number in the array is after them.
So find the middle element in the array and compare it to the element right after it. If the change in numbers happens at the wrong index, we know the odd number in the array is before it; otherwise, it's after. With one set of comparisons, we figure out which half of the array the target is in.
Keep going from there.
Use a hash table.
For each element E in the input array:
    if E is already in the hash table, increment its count
    else, insert E with count 1
For each key K in the hash table:
    if the count of K is odd, return K
As this algorithm makes two passes over at most n elements, it belongs to O(n).
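A runnable Python equivalent of the pseudocode above (my own illustration):

def find_odd_occurring(arr):
    counts = {}
    for e in arr:
        counts[e] = counts.get(e, 0) + 1  # count every occurrence
    for k, c in counts.items():
        if c % 2 == 1:
            return k

print(find_odd_occurring([1, 1, 2, 2, 2]))  # prints 2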
Try this:
int getOddOccurrence(int ar[], int ar_size)
{
    int i;
    int res = 0;
    for (i = 0; i < ar_size; i++)
        res = res ^ ar[i];
    return res;
}
XOR cancels out every time you XOR with the same number, so 1^1 = 0 but 1^1^1 = 1; every pair cancels out, leaving the odd one out.
Assume indexing start at 0. Binary search for the smallest even i such that x[i] != x[i+1]; your answer is x[i].
edit: due to public demand, here is the code
int f(int *x, int min, int max) {
    int size = max;
    min /= 2;
    max /= 2;
    while (min < max) {
        int i = (min + max) / 2;
        if (i == 0 || x[2*i - 1] == x[2*i])
            min = i + 1;
        else
            max = i - 1;
    }
    if (2*max == size || x[2*max] != x[2*max + 1])
        return x[2*max];
    return x[2*min];
}
