Algorithm Recursion Sorting - arrays

I have an algorithm :
ALGORITHM F_min1(A[0..n-1])
//Input: An array A[0..n-1] of real numbers
If n = 1
    return A[0]
else
    temp ← F_min1(A[0..n-2])
    If temp ≤ A[n-1]
        return temp
    else
        return A[n-1]
I suspect it sorts the array, but I don't know exactly how. I think it finds the minimum of the array excluding the last element, compares that with the last element, and prints the greater one.

Almost; it returns the lesser, not the greater.
Also, it doesn't sort the array at all: it merely returns the smallest element.
In words, this reads something like:
If there is only one element, it must be the right one -- return it.
Otherwise,
recur: find the smallest element in all but the last one.
compare the last element to that minimum; return the smaller one.
When you return all the way to the initial call, you have the smallest element of the array.
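For reference, here is a minimal Python sketch of the same recursion (the function name and slicing are illustrative, not part of the original pseudocode):

def f_min(A):
    # Base case: a one-element array is its own minimum.
    if len(A) == 1:
        return A[0]
    # Recur on everything but the last element, then keep the smaller
    # of that minimum and the last element.
    temp = f_min(A[:-1])
    return temp if temp <= A[-1] else A[-1]

print(f_min([3.5, 1.2, 4.8, 0.9]))  # 0.9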

Related

Splitting an array and finding maximal |max(L) - max(R)|

I have a general question in programming.
Suppose I have an array. I need to find the index K that divides the array into two parts L, R so that the value |max(L) - max(R)| is maximal.
max(L) is the highest number in the L part.
K points to the first member in R.
This seems to be a problem that reduces to only 2 viable candidates for a solution: either K splits off the first value from the rest, or the last value from the rest, giving you a small part of just one value, and a large part with the remaining values, including the maximum value.
Suppose the maximum value in the array can be found at index M, then one of the two parts will have that value and it will be Max(Part). The other part should have a maximum value that is as small as possible. Consequently that part should be reduced to just one value: adding one more value to that part could never decrease its maximum value.
If the overall maximum value is at one of the ends of the array, then there is no choice, and the small part will be chopped off the array at the other end of it.
When the overall maximum value is not at an end of the array, there are two possibilities: choose the one where the chopped off value will be the lowest. In other words, K will be either 1 or n-1 (in zero-based indexing), and this can be determined in constant time, i.e. O(1).
Actually, this question can be solved in constant time.
1. Since the list must be divided in two, either list A or list B will contain the leftmost or the rightmost element.
2. Adding values to a list can only increase its maximum element, so it is never desirable for the smaller list to have more than one element.
3. So all we need to do is look at the head and the tail, take the smaller of the two as A, and make the rest of the list B (see the sketch after the example).
For example, consider 6,7,7,3,2,6,4:
A = [4] (the smaller of head/tail), B = [6,7,7,3,2,6]
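A minimal sketch of that constant-time rule in Python (assuming the array has at least two elements; the function name is mine):

def best_split(a):
    # Chop off whichever end value is smaller; return K, the index of the
    # first element of the right part R.
    if a[0] <= a[-1]:
        return 1              # L = [a[0]], R = a[1:]
    return len(a) - 1         # L = a[:-1], R = [a[-1]]

print(best_split([6, 7, 7, 3, 2, 6, 4]))  # 6, i.e. R = [4]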
You can solve it in O(n) with some preparation:
Make two arrays, maxL[] and maxR[], equal in size to the original array.
Walk the original array from the left, setting maxL[i] to the maximum value seen so far.
Walk the original array again from the right, setting maxR[i] to the maximum value seen so far.
Now walk over the possible split points k = 1 .. n-1 (the left part ends at k-1, the right part starts at k), looking for the k that maximizes ABS(maxL[k-1] - maxR[k]); return that k.
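A sketch of that O(n) preparation, with the split convention that k is the first index of the right part (array and function names are mine):

def best_split_linear(a):
    # maxL[i] = max(a[0..i]), maxR[i] = max(a[i..n-1]); a split at k means
    # L = a[0..k-1] and R = a[k..n-1], so compare maxL[k-1] with maxR[k].
    n = len(a)
    maxL, maxR = [0] * n, [0] * n
    maxL[0] = a[0]
    for i in range(1, n):
        maxL[i] = max(maxL[i - 1], a[i])
    maxR[n - 1] = a[n - 1]
    for i in range(n - 2, -1, -1):
        maxR[i] = max(maxR[i + 1], a[i])
    best_k, best_diff = 1, -1
    for k in range(1, n):
        diff = abs(maxL[k - 1] - maxR[k])
        if diff > best_diff:
            best_k, best_diff = k, diff
    return best_k

print(best_split_linear([6, 7, 7, 3, 2, 6, 4]))  # 6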

Given an array, find out the last smaller element for each element

Given an array find the last smaller element's index in array for each element.
For example, suppose the given array is {4,2,1,5,3}. Then last smaller element for each element will be as follows.
4->3
2->1
1->Null
5->3
3->Null
Notice for 1st pair 4->3, 3 is the last element in array smaller than 4.
The resultant/output array should contain indexes, not the elements themselves; the result would be {4,2,-1,4,-1}.
I was asked this question in an interview, but I couldn't think of a solution better than the trivial O(n^2) one.
Any help would be highly appreciated.
We need to compute max(index) over all elements with smaller values.
Let's sort pairs (element, index) in lexicographical order and iterate over them keeping track of the largest index seen so far. That's exactly the position of the rightmost smaller element. Here's how one could implement it:
def get_right_smaller(xs):
    res = [-1] * len(xs)
    right_index = -1
    for val, idx in sorted((val, idx) for idx, val in enumerate(xs)):
        res[idx] = right_index if right_index > idx else -1
        right_index = max(right_index, idx)
    return res
This solution works properly even if the input array contains equal numbers, because the element with the smaller index goes earlier when the values of two elements are the same.
The time complexity of this solution is O(N log N + N) = O(N log N) (it does sorting and one linear pass).
If all elements of the array are O(N), you can make this solution linear using count sort.
Make a list and add the last element's index to it.
Walk through the array from right to left.
For every element:
if the list's tail value is smaller than the current element,
    binary-search the list for the first (closest to the head, i.e. rightmost-index) element smaller than the current one and output its index (binary search works because the list is sorted);
otherwise,
    add the current element's index to the list tail and output -1.
For the example {4,2,1,5,3,6,2}, the list ends up containing index 6 (value 2) and index 2 (value 1).
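Here is one way that approach might look in Python (my reading of the steps above; the helper name and bookkeeping are illustrative):

def last_smaller_indices(xs):
    n = len(xs)
    res = [-1] * n
    # Indices/values of elements that had no smaller element to their right,
    # added while walking right to left; both lists stay sorted (non-increasing).
    vals = [xs[n - 1]]
    idxs = [n - 1]
    for i in range(n - 2, -1, -1):
        x = xs[i]
        if vals[-1] < x:
            # Binary-search the non-increasing 'vals' for the first
            # (head-most, i.e. rightmost-index) element smaller than x.
            lo, hi = 0, len(vals)
            while lo < hi:
                mid = (lo + hi) // 2
                if vals[mid] >= x:
                    lo = mid + 1
                else:
                    hi = mid
            res[i] = idxs[lo]
        else:
            # No smaller element exists anywhere to the right of i.
            vals.append(x)
            idxs.append(i)
    return res

print(last_smaller_indices([4, 2, 1, 5, 3]))  # [4, 2, -1, 4, -1]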

Displacement of unsorted shifted array

We have an unsorted array with distinct entries a_1, a_2, ..., a_n, and we are also given a shifted (rotated) copy of it: a_(n-k+1), ..., a_n, a_1, a_2, ..., a_(n-k). The goal is to find the displacement k given these two arrays. Of course there is a worst-case linear O(n) algorithm. But can we do better than this?
There is a hint that the answer has something to do with the distribution of k. If k is distributed uniformly between 0 and n, then we have to do it in O(n). If k follows some other distribution, there might be a better approach.
If there are no duplicates in the array (distinct entries), I would do this with a loop that increments an index value k starting from 0 and compares two items at once, one counted from the beginning and one from the end: check array1[k] === array2[0] or array1[n-k] === array2[0]; the index value k is the displacement once one of these comparisons returns true.
There is an O(sqrt(n)) solution, as the OP figured out based on #greybeard's hint.
From the first list, hash the first sqrt(n) elements. For the second list, look at the elements while advancing sqrt(n) positions at a time.
However, we might ask if there is a solution that might be close to O(k) (or less!) if k is small and n is large. In fact, I claim there is an O(sqrt(k)) solution.
For that, I propose an incremental process of increasing the step size. So the algorithm looks like this:
First, grab 2 elements from the first list - hash those values (and keep position of values as lookup value, so this should be thought of as a HashMap with key being elements of the list and values being positions).
Compare those elements with the first and third element from the second list.
Hash the values from the second list as well.
Next, look at the third element from the first list, hashing its value. In the process, see if it matches either of the elements already seen in the second list. Then advance 3 elements in the second list and compare its value, remembering that value as well.
Continue like this:
increase the prefix length taken from the first list, and at each point increase the step size in the second list. Whenever you grab a new element from the first list, you have to compare it with the values already seen in the second list, but that's fine because it does not significantly affect performance.
Notice that when your prefix length is p, you have already checked the first p*(p+1)/2 elements in the second list. So for a given value of k, this process will require that prefix length p is approximately sqrt(2k), which is O(sqrt(k)) as required.
Basically, if we know that a[0] does not equal b[0], we do not need to check if a[1] equals b[1]. Extending this idea and hashing the a's, checks can go as follows:
a[0] == b[0] or b[0] in hash? => known k's: 0
a[1] == b[2] or b[2] in hash? => known k's: 0,1,2
a[2] == b[5] or b[5] in hash? => known k's: 0,1,2,3,4,5
a[3] == b[9] or b[9] in hash? => known k's: 0,1,2,3,4,5,6,7,8,9
a[4] == b[14] or b[14] in hash? => known k's: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14
...
(I think that's O(sqrt n) time and space worst case complexity.)
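A Python sketch of that growing-step schedule, assuming distinct entries and that b is a rotated by k so that a[j] == b[(j + k) % n] (the function name and exact bookkeeping are mine):

def find_displacement(a, b):
    n = len(a)
    seen_a = {}   # value -> index, for the growing prefix of a
    seen_b = {}   # value -> index, for the probed positions of b
    b_pos = 0
    for p in range(n):
        # Hash one more prefix element of a; check it against probed b values.
        if a[p] in seen_b:
            return (seen_b[a[p]] - p) % n
        seen_a[a[p]] = p
        # Probe b on the schedule 0, 2, 5, 9, 14, ... and check it against
        # the hashed prefix of a.
        if b_pos < n:
            if b[b_pos] in seen_a:
                return (b_pos - seen_a[b[b_pos]]) % n
            seen_b[b[b_pos]] = b_pos
            b_pos += p + 2   # the step grows by one each round
    return 0  # unreachable for a genuine rotation; defensive fallback

print(find_displacement([1, 2, 3, 4], [2, 3, 4, 1]))  # 3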
Maybe if you incorporate them into a hash table; then the access-and-compare time for a_(n-k) in the original array will be O(1).

Greatest element present on the right side of every element in an array

I have been given an array (of n elements) and I have to find, for each element, the smallest element on its right side which is greater than it (the current element).
For example :
Array = {8,20,9,6,15,31}
Output Array = {9,31,15,15,31,-1}
Is it possible to solve this in O(n)? I thought of traversing the array from the right side (starting from n-2) and building a balanced binary search tree from the elements already seen, since searching it for the element immediately greater than the current one would be O(log n).
Hence the time complexity would come out to be O(n log n).
Is there a better approach to this problem?
The problem you present is impossible to solve in O(n) time (in a comparison-based model), since you could reduce sorting to it and thereby sort in O(n) time.
Say there exists an algorithm which solves the problem in O(n).
Let there be an element a.
The algorithm can also be used to find the smallest element to the left of and larger than a (by reversing the array before running the algorithm).
It can also be used to find the largest element to the right (or left) of and smaller than a (by negating the elements before running the algorithm).
So, after running the algorithm four times (in linear time), you know which elements should be to the right and to the left of each element. In order to construct the sorted array in linear time, you'd need to keep the indices of the elements instead of the values. You first find the smallest element by following your "larger-than pointers" in linear time, and then make another pass in the other direction to actually build the array.
Others have proved that it is impossible in general to solve in O(n).
However, it is possible to do in O(m) where m is the size of your largest element.
This means that in certain cases (e.g. if your input array is known to be a permutation of the integers 1 up to n) it is possible to do in O(n).
The code below shows the approach, built upon a standard method for computing the next greater element. (There is a good explanation of this method on geeks for geeks)
def next_greater_element(A):
    """Return an array of indices to the next strictly greater element, -1 if none exists"""
    i = 0
    NGE = [-1] * len(A)
    stack = []
    while i < len(A) - 1:
        stack.append(i)
        while stack and A[stack[-1]] < A[i + 1]:
            x = stack.pop()
            NGE[x] = i + 1
        i += 1
    return NGE

def smallest_greater_element(A):
    """Return an array of smallest element on right side of each element"""
    top = max(A) + 1
    M = [-1] * top  # M will contain the index of each element sorted by rank
    for i, a in enumerate(A):
        M[a] = i
    N = next_greater_element(M)  # N contains an index to the next element with higher value (-1 if none)
    return [N[a] for a in A]

A = [8, 20, 9, 6, 15, 31]
print(smallest_greater_element(A))
The idea is to find the next element in size order with greater index. This next element will therefore be the smallest one appearing to the right.
This cannot be done in O(n), since we can reduce the Element Distinctness Problem (which is known to require Omega(n log n) time for comparison-based algorithms) to it.
First, let's make a small extension to the problem that does not affect its hardness:
I have been given an array (of n elements) and i have to find the
smallest element on the right side of each element which is greater/equals
than itself(current element).
The addition is that we allow the element on the right to be equal to the current element, not only strictly greater than it.(1)
Now, given an instance arr of Element Distinctness, run the algorithm for this problem and check whether there is any index i such that arr[i] == res[i]; if there isn't, answer "all distinct", otherwise "not all distinct".
Since Element Distinctness requires Omega(n log n) comparisons, so does this problem.
(1)
One possible justification for why adding equality does not make the problem harder: assuming the elements are integers, add i/(n+1) to each element of the array. Then for any two elements, if arr[i] < arr[j] we still have arr[i] + i/(n+1) < arr[j] + j/(n+1); and if arr[i] = arr[j] with i < j, then arr[i] + i/(n+1) < arr[j] + j/(n+1). So the same algorithm solves the problem with equalities as well.
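As a tiny illustration of that transform (a sketch; the added fractions are all less than 1, so they break ties by index without reordering distinct integers):

arr = [3, 7, 3, 5]
n = len(arr)
perturbed = [x + i / (n + 1) for i, x in enumerate(arr)]
# Equal values now compare in index order, so a "greater-or-equal to the right"
# query on arr becomes a "strictly greater to the right" query on perturbed.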

Largest 3 numbers c language [duplicate]

I have an array
A[4]={4,5,9,1}
I need it to give the first 3 top elements, like 9,5,4.
I know how to find the max element, but how do I find the 2nd and 3rd max?
i.e. if
max=A[0];
for(i=1;i<4;i++)
{
    if (A[i]>max)
    {
        max=A[i];
        location=i+1;
    }
}
Actually, sorting will not be suitable for my application because the position number is also important for me, i.e. I have to know in which positions the first 3 maxima occur; here they are in the 0th, 1st and 2nd positions. So I am thinking of a logic where, after getting the max value, I put 0 at that location and apply the same steps to the new array, i.e. {4,5,0,1}.
But I am a bit confused about how to put my logic into code.
Consider using the technique employed in the Python standard library. It uses an underlying heap data structure:
from heapq import heapify, heappushpop
from itertools import islice

def nlargest(n, iterable):
    """Find the n largest elements in a dataset.

    Equivalent to: sorted(iterable, reverse=True)[:n]
    """
    if n < 0:
        return []
    it = iter(iterable)
    result = list(islice(it, n))
    if not result:
        return result
    heapify(result)
    for elem in it:
        heappushpop(result, elem)
    result.sort(reverse=True)
    return result
The steps are:
Make an n length fixed array to hold the results.
Populate the array with the first n elements of the input.
Transform the array into a minheap.
Loop over the remaining inputs, replacing the top element of the heap whenever a new data element is larger.
If needed, sort the final n elements.
The heap approach is memory efficient (not requiring more memory than the target output) and typically has a very low number of comparisons (see this comparative analysis).
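If positions matter, as in the original question, one option in the same spirit is to feed (index, value) pairs to heapq.nlargest (a hedged sketch in Python, not the asker's C code):

from heapq import nlargest

A = [4, 5, 9, 1]
top3 = nlargest(3, enumerate(A), key=lambda pair: pair[1])
print(top3)  # [(2, 9), (1, 5), (0, 4)] -- (position, value) pairs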
You can use a selection algorithm.
The complexity will be O(n): O(n) for the selection and O(n) for iterating, so the total is also O(n).
What you're essentially asking is equivalent to sorting your array in descending order. The fastest way to do this is using heapsort or quicksort, depending on the size of your array.
Once your array is sorted your largest number will be at index 0, your second largest will be at index 1, ...., in general your nth largest will be at index n-1
You can follow this procedure:
1. Add the first n elements of A to another array B[n].
2. Sort the array B[n].
3. Then for each remaining element A[k] (k = n..m), check whether
A[k] > B[0];
if so, the number A[k] is among the n largest, so
search for the proper position for A[k] in B[n], insert it there, and shift the smaller numbers in B[n] left so that B[n] keeps the n largest elements (see the sketch below).
4. Repeat this for all elements of A[m].
At the end, B[n] will hold the n largest elements.
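A sketch of that procedure in Python (the buffer keeps the n largest seen so far in ascending order; the handling of the first n elements is my own choice):

def n_largest_with_insertion(A, n):
    # B holds the n largest elements seen so far, sorted ascending,
    # so B[0] is the smallest of the current candidates.
    B = sorted(A[:n])
    for x in A[n:]:
        if x > B[0]:
            # x belongs among the n largest: drop B[0], then insert x at its
            # proper position, shifting smaller values left to keep B sorted.
            B.pop(0)
            pos = 0
            while pos < len(B) and B[pos] < x:
                pos += 1
            B.insert(pos, x)
    return B

print(n_largest_with_insertion([4, 5, 9, 1], 3))  # [4, 5, 9]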
