Two arrays and a number -- best algorithm

This is a question I got in a job interview:
You are given two sorted arrays (sizes n and m) and a number x. What is the best algorithm to find the indexes of two numbers (one from each array) such that their sum equals the given number?
I couldn't find a better answer than the naive solution which is:
Start from the cell of the smaller array that contains the largest number smaller than x.
For each cell in the smaller array, do a binary search on the bigger one, looking for the number that makes the sum equal x.
Continue until the first cell of the smaller array, returning the appropriate indexes.
Return FALSE if no such numbers exist.
Can anyone think of a better solution in terms of runtime?

Use two indices i1, i2 - set i1 = 0, i2 = n-1
while i1 < m && i2 >= 0:
    if arr1[i1] + arr2[i2] == SUM:
        return i1, i2
    else if arr1[i1] + arr2[i2] > SUM:
        i2--
    else:
        i1++
return no pair found
The idea is to use the fact that the arrays are sorted: start from the two opposite ends of the arrays and, at each iteration, move whichever index brings the sum closer to the desired value.
Complexity is O(n+m) in the worst case, which beats the binary-search approach's O(min{m,n} * log(max{m,n})) whenever min{m,n} >= max{m,n} / log(max{m,n}).
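The pseudocode above can be written as a short runnable Python sketch (the function name and the None-on-failure convention are my choices, not from the original):

```python
def find_pair_with_sum(arr1, arr2, target):
    """Two-pointer sketch: arr1 and arr2 are sorted ascending; returns
    (i1, i2) with arr1[i1] + arr2[i2] == target, or None if no pair exists."""
    i1, i2 = 0, len(arr2) - 1
    while i1 < len(arr1) and i2 >= 0:
        s = arr1[i1] + arr2[i2]
        if s == target:
            return i1, i2
        elif s > target:
            i2 -= 1  # sum too big: move down in arr2
        else:
            i1 += 1  # sum too small: move up in arr1
    return None
```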
Proof of correctness (guidelines):
Assume the answer is true with indices k1, k2.
Then, for each i2 > k2: arr1[k1] + arr2[i2] > SUM, so once i1 reaches k1 it will NOT change again before i2 reaches k2. Similarly you can show that once i2 reaches k2, it will NOT change before i1 reaches k1.
Since we scan the arrays linearly, one of i1 or i2 will reach k1 or k2 at some point, and from then on the other iterator keeps moving until it reaches the correct location and finds the answer.
QED
Notes:
If you want to output ALL pairs that match the desired sum: when arr1[i1] + arr2[i2] == SUM, advance the index whose next element has the LOWER absolute difference from its current element. This makes sure you output all desired pairs.
Note that this solution might fail for duplicate elements. As is, the solution works if there is no pair (x,y) such that x AND y both have dupes.
To handle this case, you will need to 'go back up' once you have exhausted all possible pairs in one direction, and the pseudo code should be updated to:
dupeJump = -1
while i1 < m && i2 >= 0:
    if arr1[i1] + arr2[i2] == SUM:
        yield i1, i2
        if arr1[i1+1] == arr1[i1] AND arr2[i2-1] == arr2[i2]:
            // remember where we were in case of double dupes
            if dupeJump == -1:
                dupeJump = i2
            i2--
        else:
            if abs(arr1[i1+1] - arr1[i1]) < abs(arr2[i2-1] - arr2[i2]):
                i1++
            else:
                i2--
            // go back up, because there are more pairs to print due to dupes
            if dupeJump != -1:
                i2 = dupeJump
                dupeJump = -1
    else if arr1[i1] + arr2[i2] > SUM:
        i2--
    else:
        i1++
Note however that the time complexity might increase to O(n + m + size(output)), because there could be O(n*m) such pairs and you need to output all of them (note that every correct solution has this restriction).


Efficiently finding an element in an array where consecutive elements differ by +1/0/-1

I have this problem, that I feel I am vastly overcomplicating. I feel like this should be incredibly basic, but I am stumbling on a mental block.
The question reads as follows:
Given an array of integers A[1..n], such that A[1] ≤ A[n] and, for all
i, 1 ≤ i < n, we have |A[i] − A[i+1]| ≤ 1. Devise a semi-efficient
algorithm (better in the worst case than the naive approach of looking at
every cell in the array) to find any j such that A[j] = z for a given
value of z, A[1] ≤ z ≤ A[n].
My understanding of the given array is as follows: you have a 1-indexed array where the first element is smaller than or equal to the last element. Each element is within 1 of the previous one (so A[2] could be A[1]'s value -1, +0, or +1).
I have had several solutions to this question, all of which have had their issues; here is an example of one to show my thought process.
i = 2
while i <= n {
    if (A[i] == z) then
        break // this can be changed into a less messy form where
              // I don't use break, but this is a rough concept
    else if (abs(A[i] - z) <= 1) then
        i--
    else
        i += 2
}
This however fails when most of the values inside the array repeat.
With an array like [1 1 1 1 1 1 1 1 1 1 2], searching for 2 would run forever.
Most of my attempted algorithms follow a similar concept of incrementing by 2, as that seems like the most logical approach when dealing with an array that increases by a maximum of 1 per step. However, I am struggling to find any that would work in a case such as [1 1 1 1 1 1 1 1 1 1 2]; they all either fail or match the naive worst case of n.
I am unsure if I am struggling because I don't understand what the question is asking, or if I am simply struggling to put together an algorithm.
What would an algorithm look like that fits the requirements?
This can be solved via a form of modified binary search. The most important premises:
the input array always contains the searched value (guaranteed by A[1] ≤ z ≤ A[n] together with the next point, by a discrete intermediate-value argument)
the distance between adjacent elements is at most 1
there is always an increasingly ordered subarray containing the searched value
Taking it from there we can apply two strategies:
divide and conquer: we can reduce the range searched by half, since we always know which subarray will definitely contain the specified value as a part of an increasing sequence.
limiting the search-range: suppose the searched value is 3 and the limiting value on the right half of the range is 6, we can then shift the right limit to the left by 3 cells.
As code (pythonesque, but untested):
def search_semi_binary(arr, val):
    low, up = 0, len(arr) - 1
    while low != up:
        # reduce search space
        low += abs(val - arr[low])
        up -= abs(val - arr[up])
        # binary search
        mid = (low + up) // 2
        if arr[mid] == val:
            return mid
        elif val < arr[mid]:
            # value is definitely in the lower part of the array
            up = mid - 1
        else:
            # value is definitely in the upper part of the array
            low = mid + 1
    return low
The basic idea consists of two parts:
First we can reduce the search space. This uses the fact that adjacent cells of the array may only differ by one. I.e. if the lower bound of our search space has an absolute difference of 3 to val, we can shift the lower bound to the right by at least three without shifting the value out of the search window. Same applies to the upper bound.
The next step follows the basic principle of binary search using the following loop-invariant:
At the start of each iteration there exists an array-element in arr[low:up + 1] that is equal to val and arr[low] <= val <= arr[up]. This is also guaranteed after applying the search-space reduction. Depending on how mid is chosen, one of three cases can happen:
arr[mid] == val: in this case, the searched index is found
arr[mid] < val: In this case arr[mid] < val <= arr[up] must hold due to the assumption of an initial valid state
arr[mid] > val: analogous for arr[mid] > val >= arr[low]
For the latter two cases, we can pick low = mid + 1 (or up = mid - 1 respectively) and start the next iteration.
In the worst case, you'll have to look at all array elements.
Assume all elements are zero, except that a[k] = 1 for one single k, 1 ≤ k ≤ n. k isn't known, obviously. And you look for the value 1. Until you visit a[k], whatever you visit has a value of 0. Any element that you haven't visited could be equal to 1.
Let's say we are looking for the number 5. If the array starts with A[1] = 1, the best-case scenario is having the 5 in A[5], as the value needs to be incremented at least 4 times. If A[5] = 3, then check A[7], as that is the closest possible position. How do we decide it's A[7]? From the number we are looking for (call it R for result) we subtract what we currently have (call it C for current) and add the result to i, as in A[i + (R - C)].
Unfortunately the above jump applies to every scenario but the worst case (when we iterate through the whole array).

Maximize number of inversion count in array

We are given an unsorted array A of integers (duplicates allowed) with size N possibly large. We can count the number of pairs with indices i < j, for which A[i] < A[j], let's call this X.
We can change maximum one element from the array with a cost equal to the difference in absolute values (for instance, if we replace element on index k with the new number K, the cost Y is | A[k] - K |).
We can only replace this element with other elements found in the array.
We want to find the minimum possible value of X + Y.
Some examples:
[1,2,2] should return 1 (change the 1 to 2 such that the array becomes [2,2,2])
[2,2,3] should return 1 (change the 3 to 2)
[2,1,1] should return 0 (because no changes are necessary)
[1,2,3,4] should return 6 (this is already the minimum possible value)
[4,4,5,5] should return 3 (this can be accomplished by changing the first 4 into a 5 or the last 5 into a 4)
The number of pairs can be found with a naive O(n²) solution, here in Python:
def calc_x(arr):
    n = len(arr)
    cnt = 0
    for i in range(n):
        for j in range(i+1, n):
            if arr[j] > arr[i]:
                cnt += 1
    return cnt
A brute-force solution is easily written, for example:
def f(arr):
    best_val = calc_x(arr)
    used = set(arr)
    for i, v in enumerate(arr):
        for replacement in used:
            if replacement == v:
                continue
            arr2 = arr[:i] + [replacement] + arr[i+1:]
            y = abs(replacement - v)
            x = calc_x(arr2)
            best_val = min(best_val, x + y)
    return best_val
We can count for each element the number of items right of it larger than itself in O(n*log(n)) using for instance an AVL-tree or some variation on merge sort.
However, we still have to search which element to change and what improvement it can achieve.
This was given as an interview question and I would like some hints or insights as how to solve this problem efficiently (data structures or algorithm).
Definitely go for O(n log n) complexity when counting inversions.
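For reference, the O(n log n) count can be sketched with a merge-sort pass. This counts the pairs i < j with A[i] < A[j] (the X defined in the question, matching the naive calc_x), not the full X + Y optimization; the function name is my own:

```python
def count_pairs(arr):
    """Count pairs i < j with arr[i] < arr[j] in O(n log n) via merge sort."""
    def sort_count(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, c_left = sort_count(a[:mid])
        right, c_right = sort_count(a[mid:])
        merged, count = [], c_left + c_right
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] < right[j]:
                # left[i] is strictly smaller than every remaining right element
                count += len(right) - j
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, count
    return sort_count(list(arr))[1]
```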
We can see that when you change a value at index k, you can either:
1) increase it, possibly reducing the number of counted pairs with elements to the right of k, but increasing the number of counted pairs with elements to the left of k
2) decrease it (and the opposite happens)
Let's try not to count x every time you change a value. What do you need to know?
In case 1):
You have to know how many elements on the left are smaller than your new value v and how many elements on the right are bigger than it. You can check that pretty easily in O(n). So what is your x now? You can compute it with the following formula:
prev_val - your previous value
prev_x - x that you've counted at the beginning of your program
prev_l - number of elements on the left smaller than prev_val
prev_r - number of elements on the right bigger than prev_val
v - new value
l - number of elements on the left smaller than v
r - number of elements on the right bigger than v
new_x = prev_x + r + l - prev_l - prev_r
In the second case you pretty much do the opposite thing.
Right now you get something like O(n^3) instead of O(n^3 log n), which is probably still bad. Unfortunately that's all I have come up with for now. I'll definitely tell you if I come up with something better.
EDIT: What about the memory limit? Is there one? If not, you can, for each element in the array, build two sets with the elements before and after it. Then you can find the number of smaller/bigger elements in O(log n), making the time complexity O(n^2 log n).
EDIT 2: We can also check, for every possible value v, which element would be the best one to change to v. Maintain two sets and add/erase elements from them while scanning, making the time complexity O(n^2 log n) without using too much space. The algorithm would be:
1) determine every value v that any element could be changed to, and calculate x
2) for each possible value v:
    make two sets, push all elements into the second one
    for each element e in the array:
        add the previous element (if there is one) to the first set and erase e from the second set, then count the number of bigger/smaller elements in sets 1 and 2 and calculate the new x
EDIT 3: Instead of making two sets, you could use a prefix sum per value. That's O(n^2) already, but I think we can go even better than this.

Possibly simpler O(n) solution to find the Sub-array of length K (or more) with the maximum average

I saw this question on a coding competition site.
Suppose you are given an array of n integers and an integer k (n <= 10^5, 1 <= k <= n). How do you find the (contiguous) sub-array of length at least k with the maximum average?
There's an O(n) solution presented in a research paper (arxiv.org/abs/cs/0207026), linked from a duplicate SO question. I'm posting this as a separate question since I think I have a similar method with a simpler explanation. Do you think there's anything wrong with the logic in my solution below?
Here's the logic:
Start with the window range [i,j] = [0,k-1]. Then iterate over the remaining elements.
For every next element j, update the prefix sum**. Now we have a choice - we can use the full range [i,j], or discard the range [i:j-k] and keep [j-k+1:j] (i.e. keep the latest k elements). Choose the range with the higher average (the prefix sum makes this an O(1) check).
Keep track of the max average at every step.
Return the max average at the end.
** I calculate the prefix sum as I iterate over the array. The prefix sum at i is the cumulative sum of the first i elements in the array.
Code:
def findMaxAverage(nums, k):
    prefix = [0]
    for i in range(k):
        prefix.append(float(prefix[-1] + nums[i]))
    mavg = prefix[-1]/k
    lbound = -1
    for i in range(k, len(nums)):
        prefix.append(prefix[-1] + nums[i])
        cavg = (prefix[i+1] - prefix[lbound+1])/(i-lbound)
        altavg = (prefix[i+1] - prefix[i-k+1])/k
        if altavg > cavg:
            lbound = i-k
            cavg = altavg
        mavg = max(mavg, cavg)
    return mavg
Consider k = 3 and the sequence
3,0,0,2,0,1,3
The output of your program is 1.3333333333333333: it found the subsequence 0,1,3, but the best possible subsequence is 2,0,1,3, with average 1.5.
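A brute-force O(n^2) reference (a checker I'm adding for illustration; it is not part of the original answer) makes it easy to verify counterexamples like this one:

```python
def max_avg_bruteforce(nums, k):
    """O(n^2) reference: best average over all subarrays of length >= k."""
    prefix = [0]
    for v in nums:
        prefix.append(prefix[-1] + v)  # prefix[j] = sum of nums[:j]
    best = float('-inf')
    for i in range(len(nums)):
        for j in range(i + k, len(nums) + 1):  # subarray nums[i:j], length >= k
            best = max(best, (prefix[j] - prefix[i]) / (j - i))
    return best
```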

Smallest sum of subarray with sum greater than a given value

Input: Array of N positive numbers and a value X such that N is small compared to X
Output: Subarray with sum of all its numbers equal to Y > X, such that there is no other subarray with sum of its numbers bigger than X but smaller than Y.
Is there a polynomial solution to this question? If so, can you present it?
As the other answers indicate, this is an NP-complete problem, a variant of the "Knapsack Problem", so no polynomial-time solution is known. It does have a pseudo-polynomial time algorithm, though. This explains what pseudo polynomial is.
A visual explanation of the algorithm.
And some code.
If this is work related (I have met this problem a few times already, in various disguises), I suggest introducing additional restrictions to simplify it. If it is a general question, you may want to check other NP-complete problems as well. One such list.
Edit 1:
AliVar made a good point. The given problem searches for Y > X, while the knapsack problem searches for Y < X, so the answer to this problem needs a few more steps. When we are trying to find the minimum sum with Y > X, we are also looking for the maximum sum S < (Total - X). The second part is the original knapsack problem. So:
Find the total
Solve knapsack for S < (Total - X)
Subtract the list of items in the knapsack solution from the original input.
This should give you the minimum Y > X
Let A be our array. Here is an O(X*N) algorithm:
initialize set S = {0}
initialize map<int, int> parent
best_sum = inf
best_parent = -1
for a in A
    Sn = {}
    for s in S
        t = s + a
        if t > X and t < best_sum
            best_sum = t
            best_parent = s
        end if
        if t <= X
            Sn.add(t)
            parent[t] = s
        end if
    end for
    S = S unite with Sn
end for
To print the elements in the best sum print the numbers:
Subarray = {best_sum - best_parent}
t = best_parent
while t in parent.keys()
    Subarray.add(t - parent[t])
    t = parent[t]
end while
print Subarray
The idea is similar to dynamic programming. We calculate all reachable sums (those obtainable as a subset sum) that are at most X. For each element a in the array A you can either include it in the sum or not. At the update step S = S unite with Sn, S represents all sums in which a does not participate, while Sn represents all sums in which a does participate.
You could represent S as a boolean array, setting an item to true if it is in the set. Note that the length of this boolean array would be at most X.
Overall, the algorithm is O(X*N) with memory usage O(X).
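A runnable Python sketch of the pseudocode above (best_sum/best_parent follow the pseudocode; the last_add map and the function name are extra bookkeeping I added so the chosen elements can be reconstructed):

```python
def min_sum_above(A, X):
    """Track every reachable subset sum <= X; record the smallest total
    exceeding X the moment it appears. Returns (best_sum, subset) or
    (None, None) if no subset sum exceeds X."""
    reachable = {0}
    parent = {}    # sum -> the smaller sum it was extended from
    last_add = {}  # sum -> the element whose addition produced it
    best_sum, best_parent, best_elem = None, None, None
    for a in A:
        new_sums = set()
        for s in reachable:
            t = s + a
            if t > X:
                if best_sum is None or t < best_sum:
                    best_sum, best_parent, best_elem = t, s, a
            elif t not in reachable and t not in new_sums:
                new_sums.add(t)
                parent[t] = s
                last_add[t] = a
        reachable |= new_sums
    if best_sum is None:
        return None, None
    # walk parents back to the empty sum (0) to recover the chosen elements
    subset = [best_elem]
    t = best_parent
    while t != 0:
        subset.append(last_add[t])
        t = parent[t]
    return best_sum, subset
```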
I think this problem is NP-hard, and subset sum can be reduced to it. Here is my reduction:
For an instance of subset sum with set S = {x1,...,xn}, it is desired to find a subset with sum t. Suppose d is the minimum distance between two non-equal xi and xj. Build S' = {x1 + d/n, ..., xn + d/n} and feed it to your problem. Suppose your problem finds an answer, i.e. a subset D' of S' with sum Y > t which is the smallest sum with this property. Name the set of original members of D' as D. Three cases may happen:
1) Y = t + |D|*d/n which means D is the solution to the original subset sum problem.
2) Y > t + |D|*d/n which means no answer set can be found for the original problem.
3) Y < t + |D|*d/n. In this case assign t=Y and repeat the problem. Since the value for the new t is increased, this case will not repeat exponentially. Therefore, the procedure terminates in polynomial time.

Median of Lists

I was asked this question:
You are given two lists of integers, each of which is sorted in ascending order and each of which has length n. All integers in the two lists are different. You wish to find the n-th smallest element of the union of the two lists. (That is, if you concatenated the lists and sorted the resulting list in ascending order, the element which would be at the n-th position.)
My Solution:
Assume that lists are 0-indexed.
O(n) solution:
A straightforward solution is to observe that the arrays are already sorted, so we can merge them and stop after n steps. The first n-1 elements do not need to be copied
into a new array, so this solution takes O(n) time and O(1) memory.
O(log^2 n) solution:
The O(log^2 n) solution alternates binary search between the two lists. In short, it takes the middle element of the current search interval in the first list (l1[p1]) and searches for it in l2. Since the elements are unique, we will find at most 2 values closest to l1[p1]. Depending on their values relative to l1[p1-1] and l1[p1+1], and on their indices p21 and p22, we either return the n-th element or recurse: if any of the (at most) 3 indices in l1 can be combined with one of the (at most) 2 indices in l2 so that l1[p1'] and l2[p2'] would be right next to each other in the sorted union of the two lists and p1' + p2' = n or p1' + p2' = n + 1, we return one of the 5 elements. If p1 + p2 > n, we recurse into the left half of the search interval in l1; otherwise we recurse into the right half. This way, for each of the O(log n) possible midpoints in l1 we do an O(log n) binary search in l2, so the running time is O(log^2 n).
O(log n) solution:
Assuming the lists l1 and l2 have constant access time to any of their elements, we
can use a modified version of binary search to get an O(log n) solution. The easiest approach is to search for an index p1 in just one of the lists and calculate the corresponding index p2 in the other list so that p1 + p2 = n at all times. (This assumes the lists are indexed from 1.)
First we check for the special case when all elements of one list are smaller than any element in the other list:
If l1[n] < l2[0] return l1[n].
If l2[n] < l1[0] return l2[n].
If we do not find the n-th smallest element after this step, call findNth(1,n) with the approximate pseudocode:
findNth(start, end):
    p1 = (start + end)/2
    p2 = n - p1
    if l1[p1] < l2[p2]:
        if l1[p1 + 1] > l2[p2]:
            return l2[p2]
        else:
            return findNth(p1+1, end)
    else:
        if l2[p2 + 1] > l1[p1]:
            return l1[p1]
        else:
            return findNth(start, p1-1)
Element l2[p2] is returned when l2[p2] is greater than exactly p1 + p2 - 1 = n - 1 elements (and therefore is the n-th smallest). l1[p1] is returned under the same but symmetric conditions. If l1[p1] < l2[p2] and l1[p1+1] < l2[p2], the rank of l2[p2] is greater than n, so we need to take more elements from l1 and fewer from l2; therefore we search for p1 in the upper half of the previous search interval. On the other hand, if l2[p2] < l1[p1] and l2[p2+1] < l1[p1], the rank of l1[p1] is greater than n, so the real p1 lies in the bottom half of our current search interval. Since we halve the size of the problem at each call to findNth and need only constant work per call, the recurrence for this algorithm is T(n) = T(n/2) + O(1), which has an O(log n)-time solution.
The interviewer kept asking me for different approaches to the above problem. I proposed the three approaches above. Are they correct? Is there any other, better solution for this question? This question gets asked a lot, so please provide some good material about it.
Not sure if you took a look at this: http://www.leetcode.com/2011/01/find-k-th-smallest-element-in-union-of.html
That solves a more generalized version of the problem you are asking about. Logarithmic complexity is definitely possible...
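The logarithmic approach can be sketched as follows. This is a commonly used 0-indexed partition variant of the binary-search idea, not a literal transcription of the pseudocode above; the function name is my own, and it assumes all elements are distinct, as in the problem statement:

```python
def nth_smallest(l1, l2, n):
    """n-th smallest (1-based rank) of the union of two sorted lists,
    found by binary-searching how many elements to take from l1."""
    lo, hi = max(0, n - len(l2)), min(n, len(l1))
    while lo <= hi:
        p1 = (lo + hi) // 2  # take p1 elements from l1
        p2 = n - p1          # and p2 elements from l2
        left1 = l1[p1 - 1] if p1 > 0 else float('-inf')
        left2 = l2[p2 - 1] if p2 > 0 else float('-inf')
        right1 = l1[p1] if p1 < len(l1) else float('inf')
        right2 = l2[p2] if p2 < len(l2) else float('inf')
        if left1 <= right2 and left2 <= right1:
            # the split is consistent: everything taken is <= everything left
            return max(left1, left2)
        elif left1 > right2:
            hi = p1 - 1  # took too many from l1
        else:
            lo = p1 + 1  # took too few from l1
    raise ValueError("no valid split")
```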
I think this will be the best solution:
->1 2 3 4 5 6 7 8 9
->10 11 12 13 14 15 16 17 18
Take two pointers i and j, each pointing at the start of its array. Increment i if a[i] < b[j]; increment j if a[i] > b[j].
Do this n times.
This is a linear O(n) time, O(1) space solution.
