2D peak finding algorithm in O(n) worst case time? - arrays

I was doing this course on algorithms from MIT. In the very first lecture the professor presents the following problem:-
A peak in a 2D array is a value such that all its 4 neighbours are less than or equal to it, i.e. for
a[i][j] to be a local maximum,
a[i+1][j] <= a[i][j]
&& a[i-1][j] <= a[i][j]
&& a[i][j+1] <= a[i][j]
&& a[i][j-1] <= a[i][j]
Now given an NxN 2D array, find a peak in the array.
This question can be easily solved in O(N^2) time by iterating over all the elements and returning a peak.
However, it can be optimized to O(NlogN) time using a divide and conquer solution, as explained here.
But they have said that there exists an O(N) time algorithm that solves this problem. Please suggest how we can solve this problem in O(N) time.
PS (for those who know Python): The course staff have explained an approach here (Problem 1-5. Peak-Finding Proof) and have also provided some Python code in their problem sets. But the approach explained is totally non-obvious and very hard to decipher. The Python code is equally confusing. So I have copied the main part of the code below for those who know Python and can tell what algorithm is being used from the code.
def algorithm4(problem, bestSeen = None, rowSplit = True, trace = None):
    # if it's empty, we're done
    if problem.numRow <= 0 or problem.numCol <= 0:
        return None

    subproblems = []
    divider = []

    if rowSplit:
        # the recursive subproblem will involve half the number of rows
        mid = problem.numRow // 2

        # information about the two subproblems
        (subStartR1, subNumR1) = (0, mid)
        (subStartR2, subNumR2) = (mid + 1, problem.numRow - (mid + 1))
        (subStartC, subNumC) = (0, problem.numCol)

        subproblems.append((subStartR1, subStartC, subNumR1, subNumC))
        subproblems.append((subStartR2, subStartC, subNumR2, subNumC))

        # get a list of all locations in the dividing row
        divider = crossProduct([mid], range(problem.numCol))
    else:
        # the recursive subproblem will involve half the number of columns
        mid = problem.numCol // 2

        # information about the two subproblems
        (subStartR, subNumR) = (0, problem.numRow)
        (subStartC1, subNumC1) = (0, mid)
        (subStartC2, subNumC2) = (mid + 1, problem.numCol - (mid + 1))

        subproblems.append((subStartR, subStartC1, subNumR, subNumC1))
        subproblems.append((subStartR, subStartC2, subNumR, subNumC2))

        # get a list of all locations in the dividing column
        divider = crossProduct(range(problem.numRow), [mid])

    # find the maximum in the dividing row or column
    bestLoc = problem.getMaximum(divider, trace)
    neighbor = problem.getBetterNeighbor(bestLoc, trace)

    # update the best we've seen so far based on this new maximum
    if bestSeen is None or problem.get(neighbor) > problem.get(bestSeen):
        bestSeen = neighbor
        if trace is not None: trace.setBestSeen(bestSeen)

    # return when we know we've found a peak
    if neighbor == bestLoc and problem.get(bestLoc) >= problem.get(bestSeen):
        if trace is not None: trace.foundPeak(bestLoc)
        return bestLoc

    # figure out which subproblem contains the largest number we've seen so
    # far, and recurse, alternating between splitting on rows and splitting
    # on columns
    sub = problem.getSubproblemContaining(subproblems, bestSeen)
    newBest = sub.getLocationInSelf(problem, bestSeen)
    if trace is not None: trace.setProblemDimensions(sub)
    result = algorithm4(sub, newBest, not rowSplit, trace)
    return problem.getLocationInSelf(sub, result)

# Helper method
def crossProduct(list1, list2):
    """
    Returns all pairs with one item from the first list and one item from
    the second list. (Cartesian product of the two lists.)

    The code is equivalent to the following list comprehension:
        return [(a, b) for a in list1 for b in list2]
    but, for easier reading and analysis, we have included more explicit code.
    """
    answer = []
    for a in list1:
        for b in list2:
            answer.append((a, b))
    return answer

1. Let's assume that the width of the array is greater than its height; otherwise we will split in the other direction.
2. Split the array into three parts: the central column, the left side and the right side.
3. Go through the central column and its two neighbour columns and look for the maximum.
4. If it's in the central column - this is our peak.
5. If it's in the left side, run this algorithm on the subarray left_side + central_column.
6. If it's in the right side, run this algorithm on the subarray right_side + central_column.
Why this works:
For cases where the maximum element is in the central column - obvious. If it's not, we can step from that maximum to strictly increasing elements and will definitely not cross the central column, so a peak definitely exists in the corresponding half.
Why this is O(n):
Step #3 takes at most max_dimension iterations, and max_dimension at least halves on every two steps of the algorithm. This gives n + n/2 + n/4 + ..., which is O(n). Important detail: we always split along the larger dimension. For square arrays this means that the split directions alternate. This is a difference from the last attempt in the PDF you linked to.
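To spell the geometric series out (a quick sanity check of this bound, not part of the original answer): each round scans at most 3 lines of at most max_dimension elements each, and max_dimension halves every second round, so the total work is bounded by

    3n + 3n + 3(n/2) + 3(n/2) + 3(n/4) + ... = 6(n + n/2 + n/4 + ...) <= 12n = O(n).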
A note: I'm not sure if it exactly matches the algorithm in the code you gave, it may or may not be a different approach.

To see that it is Θ(n): the calculation step was shown in a picture in the original answer (the per-round work forms a geometric series that sums to Θ(n)).
To see the algorithm implementation:
1) Start with either 1a) or 1b):
1a) Set the left half, divider, and right half.
1b) Set the top half, divider, and bottom half.
2) Find the global maximum on the divider. [Θ(n)]
3) Find the values of its neighbours, and record the largest node visited so far as the bestSeen node. [Θ(1)]
    # update the best we've seen so far based on this new maximum
    if bestSeen is None or problem.get(neighbor) > problem.get(bestSeen):
        bestSeen = neighbor
        if trace is not None: trace.setBestSeen(bestSeen)
4) Check whether the global maximum is larger than the bestSeen node and its neighbour. [Θ(1)]
(Step 4 is the main key to why this algorithm works.)
    # return when we know we've found a peak
    if neighbor == bestLoc and problem.get(bestLoc) >= problem.get(bestSeen):
        if trace is not None: trace.foundPeak(bestLoc)
        return bestLoc
5) If 4) is true, return the global maximum as the 2-D peak.
Otherwise, if we did 1a) this time, choose the half containing bestSeen and go back to step 1b);
else choose the half containing bestSeen and go back to step 1a).
To see visually why this algorithm works: it is like grabbing the side with the greatest value and repeatedly shrinking the boundaries until we eventually reach the bestSeen value.
# Visualised simulation
(The original answer illustrated rounds 1-6 and the final state with a sequence of images showing the search narrowing in on the peak; the images are not reproduced here.)
For this 10*10 matrix we used only 6 steps to search for the 2-D peak; it's quite convincing that it is indeed Θ(n).
By Falcon

Here is working Java code that implements #maxim1000's algorithm. The following code finds a peak in a 2D array in linear time.
import java.util.*;

class Ideone{
    public static void main(String[] args) throws java.lang.Exception{
        new Ideone().run();
    }

    int N, M;

    void run(){
        N = 1000;
        M = 100;
        // arr is a random NxM array
        int[][] arr = randomArray();
        long start = System.currentTimeMillis();
//        for(int i=0; i<N; i++){ // To print the array.
//            System.out.println(Arrays.toString(arr[i]));
//        }
        System.out.println(findPeakLinearTime(arr));
        long end = System.currentTimeMillis();
        System.out.println("time taken : " + (end-start));
    }

    int findPeakLinearTime(int[][] arr){
        int rows = arr.length;
        int cols = arr[0].length;
        return kthLinearColumn(arr, 0, cols-1, 0, rows-1);
    }

    // helper function that splits on the middle column
    int kthLinearColumn(int[][] arr, int loCol, int hiCol, int loRow, int hiRow){
        if(loCol==hiCol){
            int max = arr[loRow][loCol];
            int foundRow = loRow;
            for(int row = loRow; row<=hiRow; row++){
                if(max < arr[row][loCol]){
                    max = arr[row][loCol];
                    foundRow = row;
                }
            }
            if(!correctPeak(arr, foundRow, loCol)){
                System.out.println("THIS PEAK IS WRONG");
            }
            return max;
        }
        int midCol = (loCol+hiCol)/2;
        int max = arr[loRow][midCol]; // fixed: must be initialised from the middle column itself
        for(int row=loRow; row<=hiRow; row++){
            max = Math.max(max, arr[row][midCol]);
        }
        boolean centralMax = true;
        boolean rightMax = false;
        boolean leftMax = false;
        if(midCol-1 >= 0){
            for(int row = loRow; row<=hiRow; row++){
                if(arr[row][midCol-1] > max){
                    max = arr[row][midCol-1];
                    centralMax = false;
                    leftMax = true;
                }
            }
        }
        if(midCol+1 < M){
            for(int row=loRow; row<=hiRow; row++){
                if(arr[row][midCol+1] > max){
                    max = arr[row][midCol+1];
                    centralMax = false;
                    leftMax = false;
                    rightMax = true;
                }
            }
        }
        if(centralMax) return max;
        if(rightMax) return kthLinearRow(arr, midCol+1, hiCol, loRow, hiRow);
        if(leftMax) return kthLinearRow(arr, loCol, midCol-1, loRow, hiRow);
        throw new RuntimeException("INCORRECT CODE");
    }

    // helper function that splits on the middle row
    int kthLinearRow(int[][] arr, int loCol, int hiCol, int loRow, int hiRow){
        if(loRow==hiRow){
            int ans = arr[loRow][loCol]; // fixed: the indices were swapped in the original
            int foundCol = loCol;
            for(int col=loCol; col<=hiCol; col++){
                if(arr[loRow][col] > ans){
                    ans = arr[loRow][col];
                    foundCol = col;
                }
            }
            if(!correctPeak(arr, loRow, foundCol)){
                System.out.println("THIS PEAK IS WRONG");
            }
            return ans;
        }
        boolean centralMax = true;
        boolean upperMax = false;
        boolean lowerMax = false;
        int midRow = (loRow+hiRow)/2;
        int max = arr[midRow][loCol];
        for(int col=loCol; col<=hiCol; col++){
            max = Math.max(max, arr[midRow][col]);
        }
        if(midRow-1>=0){
            for(int col=loCol; col<=hiCol; col++){
                if(arr[midRow-1][col] > max){
                    max = arr[midRow-1][col];
                    upperMax = true;
                    centralMax = false;
                }
            }
        }
        if(midRow+1<N){
            for(int col=loCol; col<=hiCol; col++){
                if(arr[midRow+1][col] > max){
                    max = arr[midRow+1][col];
                    lowerMax = true;
                    centralMax = false;
                    upperMax = false;
                }
            }
        }
        if(centralMax) return max;
        if(lowerMax) return kthLinearColumn(arr, loCol, hiCol, midRow+1, hiRow);
        if(upperMax) return kthLinearColumn(arr, loCol, hiCol, loRow, midRow-1);
        throw new RuntimeException("INCORRECT CODE");
    }

    int[][] randomArray(){
        int[][] arr = new int[N][M];
        for(int i=0; i<N; i++)
            for(int j=0; j<M; j++)
                arr[i][j] = (int)(Math.random()*1000000000);
        return arr;
    }

    // checks whether arr[row][col] is a peak
    boolean correctPeak(int[][] arr, int row, int col){
        if(row-1>=0 && arr[row-1][col]>arr[row][col]) return false;
        if(row+1<N && arr[row+1][col]>arr[row][col]) return false;
        if(col-1>=0 && arr[row][col-1]>arr[row][col]) return false;
        if(col+1<M && arr[row][col+1]>arr[row][col]) return false;
        return true;
    }
}

Related

Find all unsorted pairs in partially sorted array

I have to find (or at least count) all pairs of (not necessarily adjacent) unsorted elements in a partially sorted array.
If we assume the sorting to be ascending, the array [1 4 3 2 5] has the following unsorted pairs: (4, 3), (3, 2) and (4, 2).
I'm thinking of an algorithm that works along the lines of insertion sort, as insertion sort tends to compare every new element with all elements which are misplaced with respect to the new element.
Edit: While posting the question, I didn't realise that finding the pairs would have a higher time complexity than just counting them. Is there a better algorithm that merely counts how many such pairs exist?
It depends a little on what you mean exactly by "partially sorted" - one could argue that every array is partially sorted to some degree.
Since this algorithm has worst-case complexity O(n^2) anyway (consider an input sorted in descending order), you might as well go down the straightforward route:
ret = []
for i in range(len(array)):
    for j in range(i, len(array)):
        if array[i] > array[j]:
            ret.append((array[i], array[j]))
return ret
This works very well for random arrays.
However, I suppose what you have in mind is that there are larger stretches inside the array where the numbers are sorted, but that this does not hold for the array as a whole.
In that case, you can save a bit of time over the naive approach above by first identifying those stretches - this can be done in one linear pass. Once you have them, you only have to compare the stretches with each other, and you can use binary search for that (since each stretch is in sort order).
Here's a Python implementation of what I have in mind:
import bisect

# find all sorted stretches
stretches = []
begin = 0
for i in range(1, len(array)):
    if array[i-1] > array[i]:
        stretches.append(array[begin:i])
        begin = i
if len(array) > begin:
    stretches.append(array[begin:])

# compare stretches
ret = []
for i in range(len(stretches)):
    stretchi = stretches[i]
    stretchi_rev = None
    for j in range(i+1, len(stretches)):
        stretchj = stretches[j]
        if stretchi[-1] > stretchj[0]:
            if stretchi_rev is None:
                stretchi_rev = list(reversed(stretchi))
            hi = len(stretchj)
            for x in stretchi_rev:
                # elements of stretchj below index k are smaller than x
                k = bisect.bisect_left(stretchj, x, 0, hi)
                if k == 0:
                    break
                for y in stretchj[:k]:
                    ret.append((x, y))
                hi = k
return ret
For random arrays, this will be slower than the first approach. But if the array is big and consists of relatively few long sorted stretches, this algorithm will at some point start to beat the brute-force search.
As suggested by #SomeDude in the comments, if you just need to count the pairs there's an O(nlogn) solution based on building a binary search tree (average case; the tree below is not self-balancing, so a balanced variant would be needed for a worst-case guarantee). There are some subtleties involved - we need to keep track of the number of duplicates (ic) at each node, and for performance reasons we also keep track of the number of right children (rc).
The basic scheme for inserting a value v into the tree rooted at node n is:
def insert(n, v)
    if v < n.data
        count = 1 + n.ic + n.rc
        if n.left is null
            n.left = node(v)
            return count
        return count + insert(n.left, v)
    else if v > n.data
        if n.right is null
            n.right = node(v)
            n.rc = 1
            return 0
        n.rc += 1
        return insert(n.right, v)
    else // v == n.data
        n.ic += 1
        return n.rc
And here's some functioning Java code (Ideone):
static int pairsCount(Integer[] arr) {
    int count = 0;
    Node root = new Node(arr[0]);
    for(int i=1; i<arr.length; i++)
        count += insert(root, arr[i]);
    return count;
}

static int insert(Node n, int v) {
    if(v < n.value) {
        int count = 1 + n.rc + n.ic;
        if(n.left == null) {
            n.left = new Node(v);
            return count;
        }
        return count + insert(n.left, v);
    }
    else if(v > n.value) {
        if(n.right == null) {
            n.right = new Node(v);
            n.rc = 1;
            return 0;
        }
        n.rc += 1;
        return insert(n.right, v);
    }
    else {
        n.ic += 1;
        return n.rc;
    }
}

static class Node {
    int value;
    Node left, right;
    int rc; // right children count
    int ic; // duplicate count

    Node(int value) {
        this.value = value;
    }
}
Test:
Integer[] arr = {1, 4, 3, 2, 5};
System.out.println(pairsCount(arr));
Output:
3

The longest sub-array with switching elements

An array is called "switching" if the values at its even positions are all equal and the values at its odd positions are all equal.
Example:
[2,4,2,4] is a switching array because the members in even positions (indexes 0 and 2) and odd positions (indexes 1 and 3) are equal.
If A = [3,7,3,7, 2, 1, 2], the switching sub-arrays are:
==> [3,7,3,7] and [2,1,2]
Therefore, the longest switching sub-array is [3,7,3,7], with length = 4.
As another example, if A = [1,5,6,0,1,0], the only switching sub-array is [0,1,0].
Another example: A = [7,-5,-5,-5,7,-1,7]; the switching sub-arrays are [7,-1,7] and [-5,-5,-5].
Question:
Write a function that receives an array and finds its longest switching sub-array.
I would like to know how you would solve this problem and which strategies you would use to get a good time complexity.
I am assuming that the array is zero-indexed.

if arr.size <= 2
    return arr.size
else
    ans = 2
    temp_ans = 2    // we will update ans when temp_ans > ans
    for i = 2; i < arr.size; ++i
        if arr[i] == arr[i-2]
            temp_ans = temp_ans + 1
        else
            temp_ans = 2
        ans = max(temp_ans, ans)
    return ans

I think this should work, and I don't think it needs much explanation.
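For reference, here is a direct Python translation of the pseudocode above (a quick sketch, not the original answerer's code):

def longest_switching(arr):
    # a sub-array of length <= 2 is always switching
    if len(arr) <= 2:
        return len(arr)
    ans = temp = 2
    for i in range(2, len(arr)):
        # extend the current switching window if arr[i] matches the element
        # two positions back, otherwise restart with a window of length 2
        temp = temp + 1 if arr[i] == arr[i - 2] else 2
        ans = max(ans, temp)
    return ans

print(longest_switching([3, 7, 3, 7, 2, 1, 2]))  # 4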
Example Code
private static int solve(int[] arr){
    if(arr.length == 1) return 1;
    int even = arr[0], odd = arr[1];
    int start = 0, max_len = 0;
    for(int i=2; i<arr.length; ++i){
        if(i%2 == 0 && arr[i] != even || i%2 == 1 && arr[i] != odd){
            max_len = Math.max(max_len, i - start);
            start = i-1;
            if(i%2 == 0){
                even = arr[i];
                odd = arr[i-1];
            }else{
                even = arr[i-1];
                odd = arr[i];
            }
        }
    }
    return Math.max(max_len, arr.length - start);
}
It's like a sliding window problem.
We keep track of the even- and odd-position values with two variables, even and odd.
Whenever we come across an unmet condition - the index is even but the value doesn't match the even variable, or the same for odd - we:
1. Record the length so far in max_len.
2. Reset start to i-1; this is needed in case all elements are equal.
3. Reset even and odd according to the current index i, to arr[i] and arr[i-1] respectively.
Demo: https://ideone.com/iUQti7
I didn't analyse the time complexity; I just wrote a solution that uses recursion, and it works (I think):
public class Main
{
    public static int switching(int[] arr, int index, int end)
    {
        try
        {
            if (arr[index] == arr[index+2])
            {
                end = index+2;
                return switching(arr, index+1, end);
            }
        } catch (Exception e) {} // running past the end of the array stops the recursion
        return end;
    }

    public static void main(String[] args)
    {
        //int[] arr = {3,2,3,2,3};
        //int[] arr = {3,2,3};
        //int[] arr = {4,4,4};
        int[] arr = {1,2,3,4,5,4,4,7,9,8,10};
        int best = -1;
        for (int i = 0; i < arr.length; i++)
            best = Math.max(best, (switching(arr, i, 0) - i));
        System.out.println(best+1); // It returns, in this example, 3
    }
}
int switchingSubarray(vector<int> &arr, int n) {
    if(n==1 || n==2) return n;
    int i=0;
    int ans=2;
    int j=2;
    while(j<n)
    {
        if(arr[j]==arr[j-2]) j++;
        else
        {
            ans=max(ans,j-i);
            i=j-1;
            j++;
        }
    }
    ans=max(ans,j-i);
    return ans;
}
Just using the sliding window technique to solve this problem, as the elements at j and j-2 need to be the same.
Try a dry run on paper and you will surely get it.
# An array is switching if the numbers in even positions are all equal and
# the numbers in odd positions are all equal; find the length of the longest
# switching contiguous sub-array.
def check(t):
    even = []
    odd = []
    i = 0
    while i < len(t):
        if i % 2 == 0:
            even.append(t[i])
        else:
            odd.append(t[i])
        i += 1
    if len(set(even)) == 1 and len(set(odd)) == 1:
        return True
    else:
        return False

def solution(A):
    maxval = 0
    if len(A) == 1:
        return 1
    for i in range(0, len(A)):
        for j in range(0, len(A)):
            if check(A[i:j+1]):
                val = len(A[i:j+1])
                print(A[i:j+1])
                if val > maxval:
                    maxval = val
    return maxval

A = [3,2,3,2,3]
A = [7,4,-2,4,-2,-9]
A = [4]
A = [7,-5,-5,-5,7,-1,7]
print(solution(A))

Transform an array to another array by shifting value to adjacent element

I am given 2 arrays, an input and an output array. The goal is to transform the input array into the output array by shifting 1 unit of value per step to an adjacent element. E.g.: the input array is [0,0,8,0,0] and the output array is [2,0,4,0,2]. Here the 1st step would be [0,1,7,0,0] and the 2nd step would be [0,1,6,1,0], and so on.
What can be an efficient algorithm to do this? I was thinking of performing BFS, but then we would have to do BFS from each element, and this can be exponential. Can anyone suggest a solution for this problem?
I think you can do this simply by scanning in each direction tracking the cumulative value (in that direction) in the current array and the desired output array and pushing values along ahead of you as necessary:
scan from the left looking for the first cell where
    cumulative value > cumulative value in desired output
while that holds, move 1 from that cell to the next cell to the right

scan from the right looking for the first cell where
    cumulative value > cumulative value in desired output
while that holds, move 1 from that cell to the next cell to the left
For your example the steps would be:
FWD:
[0,0,8,0,0]
[0,0,7,1,0]
[0,0,6,2,0]
[0,0,6,1,1]
[0,0,6,0,2]
REV:
[0,1,5,0,2]
[0,2,4,0,2]
[1,1,4,0,2]
[2,0,4,0,2]
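Here is a minimal Python sketch of this two-pass idea (my reading of the answer above; it assumes non-negative entries and equal totals in both arrays, and records every unit move):

def transform(cur, target):
    cur = list(cur)
    n = len(cur)
    steps = [tuple(cur)]
    # forward pass: wherever the running total exceeds the target's
    # running total, the surplus must move right, one unit per step
    run_cur = run_tgt = 0
    for i in range(n - 1):
        run_cur += cur[i]
        run_tgt += target[i]
        while run_cur > run_tgt:
            cur[i] -= 1
            cur[i + 1] += 1
            run_cur -= 1
            steps.append(tuple(cur))
    # reverse pass: the remaining surplus can only sit to the right,
    # so push it left symmetrically
    run_cur = run_tgt = 0
    for i in range(n - 1, 0, -1):
        run_cur += cur[i]
        run_tgt += target[i]
        while run_cur > run_tgt:
            cur[i] -= 1
            cur[i - 1] += 1
            run_cur -= 1
            steps.append(tuple(cur))
    return steps

for s in transform([0, 0, 8, 0, 0], [2, 0, 4, 0, 2]):
    print(s)

Running it on the example reproduces exactly the FWD and REV sequences shown above.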
I think BFS could actually work.
Notice that n*O(n+m) = O(n^2+nm), and is therefore not exponential.
You could also use the Floyd-Warshall algorithm or Johnson's algorithm, with a weight of 1 for a "flat" graph, or even connect the vertices in a new way by their actual distance and potentially save some iterations.
Hope it helped :)
void transform(int[] in, int[] out, int size)
{
    int[] state = in.clone();
    report(state); // report() is assumed to print/record the current state
    while (true)
    {
        int minPressure = 0;
        int indexOfMinPressure = 0;
        int maxPressure = 0;
        int indexOfMaxPressure = 0;
        int pressureSum = 0;
        for (int index = 0; index < size - 1; ++index)
        {
            int lhsDiff = state[index] - out[index];
            int rhsDiff = state[index + 1] - out[index + 1];
            int pressure = lhsDiff - rhsDiff;
            if (pressure < minPressure)
            {
                minPressure = pressure;
                indexOfMinPressure = index;
            }
            if (pressure > maxPressure)
            {
                maxPressure = pressure;
                indexOfMaxPressure = index;
            }
            pressureSum += pressure;
        }
        if (minPressure == 0 && maxPressure == 0)
        {
            break;
        }
        boolean shiftLeft;
        if (Math.abs(minPressure) > Math.abs(maxPressure))
        {
            shiftLeft = true;
        }
        else if (Math.abs(minPressure) < Math.abs(maxPressure))
        {
            shiftLeft = false;
        }
        else
        {
            shiftLeft = (pressureSum < 0);
        }
        if (shiftLeft)
        {
            ++state[indexOfMinPressure];
            --state[indexOfMinPressure + 1];
        }
        else
        {
            --state[indexOfMaxPressure];
            ++state[indexOfMaxPressure + 1];
        }
        report(state);
    }
}
A simple greedy algorithm will work and does the job in the minimum number of steps. The function returns the total number of steps required for the task.
int shift(std::vector<int>& a, std::vector<int>& b){
    int n = a.size();
    int sum1=0, sum2=0;
    for (int i = 0; i < n; ++i){
        sum1+=a[i];
        sum2+=b[i];
    }
    if (sum1!=sum2)
    {
        return -1;
    }
    int operations=0;
    int j=0;
    for (int i = 0; i < n;)
    {
        if (a[i]<b[i])
        {
            while(j<n and a[j]==0){
                j++;
            }
            if(a[j]<b[i]-a[i]){
                operations+=(j-i)*a[j];
                a[i]+=a[j];
                a[j]=0;
            }else{
                operations+=(j-i)*(b[i]-a[i]);
                a[j]-=(b[i]-a[i]);
                a[i]=b[i];
            }
        }else if (a[i]>b[i])
        {
            a[i+1]+=(a[i]-b[i]);
            operations+=(a[i]-b[i]);
            a[i]=b[i];
        }else{
            i++;
        }
    }
    return operations;
}
Here -1 is a special value meaning that given array cannot be converted to desired one.
Time Complexity: O(n).

Determine whether or not there exist two elements in an array whose sum is exactly X?

Given an array A[] of N elements and a number x, check for a pair in A[] with sum x.
Method 1 = sorting, which gives O(n lg n).
Method 2 = using a hash table, which gives O(n).
I have a doubt about method 2: what if chaining is used? Then for every element we have to search the list for its complement, which can yield O(n^2) in the worst case because of chaining.
I think it will only work when the range of integers is given, so that we can have a hash table without chaining, which gives O(n). Am I right?
You can try the following approach:
hash all elements in A[], like (key, value) = (A[i], true)
for all elements in A[]:
    if hash(x - A[i]) == true: it exists
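A quick single-pass sketch of this idea in Python (checking only elements seen so far, which also avoids falsely pairing an element with itself when x = 2*A[i] and that value occurs only once):

def has_pair_with_sum(a, x):
    seen = set()
    for v in a:
        if x - v in seen:  # complement was seen earlier in the array
            return True
        seen.add(v)
    return False

print(has_pair_with_sum([1, 4, 45, 6, 10, 8], 16))  # True (10 + 6)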
You are right that O(n) is not the guaranteed WORST-CASE complexity of a hashtable.
However, with a reasonable hash function, the worst case should rarely happen.
And of course, if a small enough upper bound is given on the range of the numbers, you can just use a normal array to do the trick.
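For instance, a tiny sketch of that bounded-range variant (assuming all values lie in [0, R]):

def has_pair_bounded(a, x, R):
    seen = [False] * (R + 1)  # a plain boolean array replaces the hash table
    for v in a:
        c = x - v
        if 0 <= c <= R and seen[c]:
            return True
        seen[v] = True
    return False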
An O(N) solution which uses a hashmap to maintain each element vs. its frequency. The frequency is maintained so as to make it work for the case of duplicate array elements.
public static boolean countDiffPairsUsingHashing(int[] nums, int target) {
    if (nums != null && nums.length > 0) {
        HashMap<Integer, Integer> numVsFreq = new HashMap<Integer, Integer>();
        for (int i = 0; i < nums.length; i++) {
            numVsFreq.put(nums[i], numVsFreq.getOrDefault(nums[i], 0) + 1);
        }
        for (int i = 0; i < nums.length; i++) {
            int diff = target - nums[i];
            // temporarily remove the current element so it cannot pair with itself
            numVsFreq.put(nums[i], numVsFreq.get(nums[i]) - 1);
            if (numVsFreq.get(diff) != null && numVsFreq.get(diff) > 0) {
                return true;
            }
            numVsFreq.put(nums[i], numVsFreq.get(nums[i]) + 1);
        }
    }
    return false;
}

given an array, for each element, find out the total number of elements lesser than it, which appear to the right of it

I had previously posted a question: Given an array, find out the next smaller element for each element.
Now I was trying to find out whether there is any way to solve the following: "given an array, for each element, find out the total number of elements smaller than it which appear to the right of it".
For example, the array [4 2 1 5 3] should yield [3 1 0 1 0].
[EDIT]
I have worked out a solution; please have a look at it and let me know if there is any mistake.
1. Make a balanced BST, inserting elements while traversing the array from right to left.
2. The BST is made in such a way that each element holds the size of the tree rooted at that element.
3. Now, while you search for the right position to insert any element, take into account the total size of the subtree rooted at the left sibling + 1 (for the parent) every time you move right.
Since the count is calculated at the time of insertion of an element, and we are moving from right to left, we get the exact count of elements smaller than the given element appearing after it.
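Here is a minimal Python sketch of this idea (my own illustration, not the poster's code; a plain unbalanced BST, so the O(n log n) bound holds on average only):

class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.left_size = 0  # number of nodes in the left subtree
        self.dups = 1       # how many copies of this key were inserted

def insert_and_count(root, key):
    # insert key and return how many stored keys are strictly smaller
    count = 0
    node = root
    while True:
        if key < node.key:
            node.left_size += 1
            if node.left is None:
                node.left = Node(key)
                return count
            node = node.left
        elif key > node.key:
            # everything in node's left subtree, plus node itself and
            # its duplicates, is smaller than key
            count += node.left_size + node.dups
            if node.right is None:
                node.right = Node(key)
                return count
            node = node.right
        else:
            node.dups += 1
            return count + node.left_size

def smaller_to_the_right(arr):
    res = [0] * len(arr)
    root = Node(arr[-1])  # assumes a non-empty array
    for i in range(len(arr) - 2, -1, -1):
        res[i] = insert_and_count(root, arr[i])
    return res

print(smaller_to_the_right([4, 2, 1, 5, 3]))  # [3, 1, 0, 1, 0]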
It can be solved in O(n log n).
If in a BST you store the number of elements of the subtree rooted at each node, then when you search for a node (reaching it from the root) you can count the number of elements larger/smaller than it along the path:
int count_larger(node *T, int key, int current_larger){
    if (T == nil)
        return -1;
    if (T->key == key)
        return current_larger + (T->right_child->size);
    if (T->key > key)
        return count_larger(T->left_child, key, current_larger + (T->right_child->size) + 1);
    return count_larger(T->right_child, key, current_larger);
}
For example, if this is our tree (shown as a picture in the original answer) and we're searching for key 3, count_larger will be called as:
-> (node 2, 3, 0)
--> (node 4, 3, 0)
---> (node 3, 3, 2)
and the final answer would be 2, as expected.
Suppose the array is 6, -1, 5, 10, 12, 4, 1, 3, 7, 50.
Steps:
1. We start building a BST from the right end of the array, since we are concerned with all the elements to the right of any element.
2. Suppose we have formed the partial solution tree up to the 10.
3. Now when inserting 5 we do a tree traversal and insert it to the right of 4.
Notice that each time we traverse to the right of any node, we increment by 1 and add the number of elements in the left subtree of that node.
e.g.:
for 50 it is 0
for 7 it is 0
for 12 it is 1 right traversal + left-subtree size of 7 = 1 + 3 = 4
for 10, same as above
for 4 it is 1 + 1 = 2
While building the BST we can easily maintain the left-subtree size for each node by simply keeping a variable for it and incrementing it by 1 each time an insertion traverses to the left past the node.
Hence the solution is average-case O(nlogn).
We can use other optimizations, such as predetermining whether the array is sorted in decreasing order, or finding groups of elements in decreasing order and treating each group as a single element.
I think it is possible to do it in O(nlog(n)) with a modified version of quicksort. Basically, each time you add an element to less, you check whether this element's rank in the original array was superior to the rank of the current pivot. It may look like:
oldrank -> original positions
count -> what you want
function quicksort('array')
    if length('array') ≤ 1
        return 'array' // an array of zero or one elements is already sorted
    select and remove a pivot value 'pivot' from 'array'
    create empty lists 'less' and 'greater'
    for each 'x' in 'array'
        if 'x' ≤ 'pivot'
            append 'x' to 'less'
            if oldrank(x) >= oldrank(pivot) increment count(pivot)
        else
            append 'x' to 'greater'
            if oldrank(x) < oldrank(pivot) increment count(x) // this was missing
    return concatenate(quicksort('less'), 'pivot', quicksort('greater')) // two recursive calls
EDIT:
Actually, it can be done using any comparison-based sorting algorithm: every time you compare two elements such that the relative ordering between the two will change, you increment the counter of the bigger element.
Original pseudo-code in Wikipedia.
You can also use a binary indexed tree:
#include <iostream>
using namespace std;

int tree[1000005];

void update(int idx, int val)
{
    while(idx<=1000000)
    {
        tree[idx]+=val;
        idx+=(idx & -idx);
    }
}

int sum(int idx)
{
    int sm=0;
    while(idx>0)
    {
        sm+=tree[idx];
        idx-=(idx & -idx);
    }
    return sm;
}

int main()
{
    int a[]={4,2,1,5,3};
    int s=0, sz=6;
    int b[10];
    b[sz-1]=0;
    for(int i=sz-2;i>=0;i--)
    {
        if(a[i]!=0)
        {
            update(a[i],1);
            b[i]=sum(a[i]-1)+s;
        }
        else s++; // zeros can't be stored in the 1-indexed tree, so count them separately
    }
    for(int i=0;i<sz-1;i++)
    {
        cout<<b[i]<<" ";
    }
    return 0;
}
// some array called newarray
for(int x=0; x<array.length; x++)
{
    for(int y=x; y<array.length; y++)
    {
        if(array[y] < array[x])
        {
            newarray[x] = newarray[x]+1;
        }
    }
}
Something like this, where array is your input array and newarray your output array.
Make sure to initialize everything correctly (0 for the values of newarray).
Another approach, without using a tree:
1. Construct another, sorted array. For example, for input array {12, 1, 2, 3, 0, 11, 4} it will be {0, 1, 2, 3, 4, 11, 12}.
2. Now compare the position of each element from the input array with the sorted array. For example, 12 in the first array is at index 0, while in the sorted array it is at index 6.
3. Once the comparison is done, remove the element from both arrays.
Other than using a BST, we can also solve this problem optimally by making a modification to the merge sort algorithm (in O(n*logn) time).
If you observe this problem more carefully, you can see that we need to count, for each element, the number of inversions it participates in to make the array sorted in ascending order, right?
So this problem can be solved using the divide and conquer paradigm. Here you need to maintain an auxiliary array for storing the count of inversions required (i.e. the number of elements smaller than it on its right side).
Below is a Python program:
def mergeList(arr, pos, res, start, mid, end):
    temp = [0]*len(arr)
    for i in range(start, end+1):
        temp[i] = pos[i]

    cur = start
    leftcur = start
    rightcur = mid + 1

    while leftcur <= mid and rightcur <= end:
        if arr[temp[leftcur]] <= arr[temp[rightcur]]:
            pos[cur] = temp[leftcur]
            # everything already merged from the right half is smaller
            res[pos[cur]] += rightcur - mid - 1
            leftcur += 1
            cur += 1
        else:
            pos[cur] = temp[rightcur]
            cur += 1
            rightcur += 1

    while leftcur <= mid:
        pos[cur] = temp[leftcur]
        # the whole right half was smaller than the remaining left elements
        res[pos[cur]] += end - mid
        cur += 1
        leftcur += 1

    while rightcur <= end:
        pos[cur] = temp[rightcur]
        cur += 1
        rightcur += 1

def mergeSort(arr, pos, res, start, end):
    if start < end:
        mid = (start + end) // 2
        mergeSort(arr, pos, res, start, mid)
        mergeSort(arr, pos, res, mid+1, end)
        mergeList(arr, pos, res, start, mid, end)

def printResult(arr, res):
    print()
    for i in range(0, len(arr)):
        print(arr[i], '->', res[i])

if __name__ == '__main__':
    inp = input('enter elements separated by ,\n')
    inp = [int(x) for x in inp.split(',')]
    res = [0]*len(inp)
    pos = [ind for ind, v in enumerate(inp)]
    mergeSort(inp, pos, res, 0, len(inp)-1)
    printResult(inp, res)
Time : O(n*logn)
Space: O(n)
You can also use an array instead of a binary search tree.
def count_next_smaller_elements(xs):
    # prepare list "ys" containing each item's numeric order (rank);
    # equal items get equal ranks, so duplicates are never counted
    ys = sorted((x,i) for i,x in enumerate(xs))
    zs = [0] * len(ys)
    for i in range(1, len(ys)):
        zs[ys[i][1]] = zs[ys[i-1][1]]
        if ys[i][0] != ys[i-1][0]: zs[ys[i][1]] += 1

    # use list "ts" as a Fenwick tree (1-indexed): position r+1 holds
    # counts for rank r, and prefix sums count the smaller elements seen
    ts = [0] * (zs[ys[-1][1]] + 2)
    us = [0] * len(xs)
    for i in range(len(xs)-1, -1, -1):
        # query: count elements already inserted with rank < zs[i]
        x = zs[i]
        while x > 0:
            us[i] += ts[x]
            x -= (x & (-x))
        # update: insert this element at rank zs[i]
        x = zs[i] + 1
        while x < len(ts):
            ts[x] += 1
            x += (x & (-x))
    return us

print(count_next_smaller_elements([40, 20, 10, 50, 20, 40, 30]))
# outputs: [4, 1, 0, 3, 0, 1, 0]
Instead of a BST, you can use an stl map.
Start inserting from the right.
After inserting an element, find its iterator:
auto iter = m.find(element);
Then take its distance from m.end(). That gives you the number of elements in the map which are greater than the current element. (Note that map iterators are not random access, so they cannot be subtracted directly; std::distance compiles, but it is linear in the distance, so this is not an O(n log n) method overall.)
map<int, bool> m;
for (int i = array.size() - 1; i >= 0; --i) {
    m[array[i]] = true;
    auto iter = m.find(array[i]);
    greaterThan[i] = std::distance(iter, m.end()) - 1;
}
Hope it helped.
Modified merge sort (already tested code):
Takes O(nlogn) time.
import java.util.*;
import org.junit.Assert;

public class MergeSort {
    static HashMap<Integer, Integer> valueToLowerCount = new HashMap<Integer, Integer>();

    public static void main(String[] args) {
        int[] arr = new int[] {50, 33, 37, 26, 58, 36, 59};
        int[] lowerValuesOnRight = new int[] {4, 1, 2, 0, 1, 0, 0};
        HashMap<Integer, Integer> expectedLowerCounts = new HashMap<Integer, Integer>();
        int idx = 0; // fixed: idx was undeclared in the original
        for (int x : arr) {
            expectedLowerCounts.put(x, lowerValuesOnRight[idx++]);
        }
        for (int x : arr) valueToLowerCount.put(x, 0);
        mergeSort(arr, 0, arr.length-1);
        // Testing
        Assert.assertEquals("Count lower values on right side", expectedLowerCounts, valueToLowerCount);
    }

    public static void mergeSort(int[] arr, int l, int r) {
        if (r <= l) return;
        int mid = (l+r)/2;
        mergeSort(arr, l, mid);
        mergeSort(arr, mid+1, r);
        mergeDecreasingOrder(arr, l, mid, r);
    }

    public static void mergeDecreasingOrder(int[] arr, int l, int lr, int r) {
        int[] leftArr = Arrays.copyOfRange(arr, l, lr+1);
        int[] rightArr = Arrays.copyOfRange(arr, lr+1, r+1);
        int indexArr = l;
        int i = 0, j = 0;
        while (i < leftArr.length && j < rightArr.length) {
            if (leftArr[i] > rightArr[j]) {
                // everything not yet merged from the right half is smaller than leftArr[i]
                valueToLowerCount.put(leftArr[i], valueToLowerCount.get(leftArr[i]) + rightArr.length - j);
                arr[indexArr++] = leftArr[i++];
            } else {
                arr[indexArr++] = rightArr[j++];
            }
        }
        while (i < leftArr.length) {
            arr[indexArr++] = leftArr[i++];
        }
        while (j < rightArr.length) {
            arr[indexArr++] = rightArr[j++];
        }
    }
}
To find the total number of values on the right side which are greater than an array element, simply change a single line of code:
if (leftArr[i] > rightArr[j])
to
if (leftArr[i] < rightArr[j])
