Find The Minimum Steps to Sort An array - arrays

We are given an array A of size N. In one step I can take an element from position p and place it before or after some other element.
For Ex:
A = {3,1,2}
I take 3 and place it after 2, so the array becomes A = {1,2,3}.
I need to find the minimum number of steps needed to sort the array in ascending or descending order.
My Approach
Find the number of Inversion that's the minimum steps needed to sort an array.
Pseudocode:
for i = 1 to N:
    Count = number of elements greater than A[i] in A[1..i]
    if (Count > 1) steps++
    Update(A[i])
Similarly for descending:
for i = N to 1:
    Count = number of elements smaller than A[i] in A[i..N]
    if (Count > 1) steps++
    Update(A[i])
Take the minimum of both. I can use a segment tree for counting elements, so the overall complexity is O(N*logN).
Problem
Is my approach right? I am only moving elements in one direction, while the problem allows both directions (before and after).
Will it give me the correct minimum number of steps?

It has nothing to do with inversions.
Let's look at what remains (that is, the elements that were never moved). It's an increasing subsequence. We can also place all other elements wherever we want. Thus, the answer is n minus the length of the longest increasing subsequence in the array (for ascending order).
Your approach doesn't work even on your example. If the array is {3, 1, 2}, it would print 0. The correct answer is 1.
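For reference, a minimal sketch of the n minus LIS idea (patience-style LIS with bisect; the function name is mine, and it assumes distinct elements and ascending order, the descending case being symmetric):
import bisect

def min_moves_to_sort_ascending(A):
    # the elements that never move form an increasing subsequence,
    # so the answer is len(A) minus the length of the longest one
    tails = []  # tails[k] = smallest possible tail of an increasing subsequence of length k+1
    for x in A:
        pos = bisect.bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(A) - len(tails)

print(min_moves_to_sort_ascending([3, 1, 2]))  # 1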

Related

Given an array A1, A2 ... AN and K, count how many subarrays have inversion count greater than or equal to K

Q) Given an array A1, A2 ... AN and K count how many subarrays have inversion count greater than or equal to K.
N <= 10^5
K <= N*(N-1)/2
So, this question I came across in an interview. I came up with the naive solution of forming all subarrays with two for loops (O(n^2)) and counting the inversions in each subarray using modified merge sort, which is O(nlogn). This leads to a complexity of O(n^3 logn), which I guess can be improved. Any leads on how I can improve it? Thanks!
You can solve it in O(nlogn) if I'm not wrong, using two moving pointers.
Start with the left pointer at the first element and move the right pointer until you have a subarray with >= K inversions. To do that, you can use any balanced binary search tree: every time you move the right pointer, count how many elements bigger than the new one are already in the tree, then insert the new element into the tree too.
When you hit the point in which you already have >= K inversions, you know that every longer subarray with the same starting element also satisfies the restriction, so you can add them all.
Then move the left pointer one position to the right and subtract the inversions of it (again, look in the tree for elements smaller than it). Now you can do the same as before again.
An amortized analysis easily shows that this is O(nlogn), as the two pointers traverse the array only once and each operation in the tree is O(logn).
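In case it helps, here is a sketch of that two-pointer idea, using a Fenwick tree (BIT) over compressed values in place of the balanced BST (class and function names are my own; it assumes K >= 1):
class BIT:
    # Fenwick tree: point update, prefix-sum query
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)
    def add(self, i, delta):
        i += 1
        while i <= self.n:
            self.t[i] += delta
            i += i & -i
    def prefix(self, i):  # sum of counts at positions 0..i
        i += 1
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def count_subarrays(A, K):
    # number of subarrays with at least K inversions, assuming K >= 1
    n = len(A)
    rank = {v: r for r, v in enumerate(sorted(set(A)))}
    B = [rank[v] for v in A]
    bit = BIT(len(rank))
    ans = inv = 0
    r = 0
    for l in range(n):
        # grow the window [l, r) until it has >= K inversions
        while r < n and inv < K:
            inv += (r - l) - bit.prefix(B[r])  # window elements > A[r]
            bit.add(B[r], 1)
            r += 1
        if inv >= K:
            ans += n - r + 1  # [l, r-1], [l, r], ..., [l, n-1] all qualify
        # drop A[l] before moving the left pointer
        bit.add(B[l], -1)
        inv -= bit.prefix(B[l] - 1) if B[l] > 0 else 0  # window elements < A[l]
    return ans

print(count_subarrays([3, 1, 2], 1))  # 2: the subarrays [3,1] and [3,1,2]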

Probability, expected number

In an unsorted array, an element is a local maximum if it is larger than
both of the two adjacent elements. The first and last elements of the array are considered local
maxima if they are larger than the only adjacent element. If we create an array by randomly
permuting the numbers from 1 to n, what is the expected number of local maxima? Prove
your answer correct using additivity of expectations.
I'm stuck with this question, I have no clue how to solve this...
You've got an unsorted array with n elements. There are two kinds of positions to consider: the two end positions, and the n-2 interior positions.
Case 1:
If you're looking at the element at the first or last index (array[0] or array[n-1]), what's the probability that it is a local maximum? In other words, what's the probability that its value is greater than that of its single neighbour? The two values are distinct, and in a random permutation either ordering is equally likely, so the probability is 1/2 (e.g. that array[0] > array[1]).
Case 2:
If you're looking at any element that ISN'T the first or last element of the array (n-2 elements), what's the probability that it is a local maximum? It must be the largest of the three values at positions i-1, i, i+1. By symmetry, each of the three is equally likely to be the largest, so the probability is 1/3.
Putting it all together:
There are 2 positions with probability 1/2 of being a local maximum and n-2 positions with probability 1/3 (2 + n-2 = n, all positions). By additivity of expectation, the expected number of local maxima is (2)(1/2) + (n-2)(1/3) = (n+1)/3.
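As a quick sanity check of the (n+1)/3 formula, here is a small Monte Carlo simulation (a sketch, not part of the proof; the helper name is mine):
import random

def local_maxima(perm):
    n = len(perm)
    total = 0
    for i in range(n):
        left_ok = i == 0 or perm[i] > perm[i - 1]
        right_ok = i == n - 1 or perm[i] > perm[i + 1]
        total += left_ok and right_ok
    return total

n, trials = 10, 100000
avg = sum(local_maxima(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg, (n + 1) / 3)  # the two numbers should be close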
Solvable of course, but I won't deprive you of the fun of doing it yourself. I will give you a tip: consider this sketch. What do you think it represents? If you figure this out, you will see that there is a pattern to discover for any n, odd and even. Good luck. If you're still stuck, I will give you more tips.

Greatest element present on the right side of every element in an array

I have been given an array (of n elements) and I have to find, for each element, the smallest element to its right that is greater than it (the current element).
For example :
Array = {8,20,9,6,15,31}
Output Array = {9,31,15,15,31,-1}
Is it possible to solve this in O(n)? I thought of traversing the array from the right side (starting from n-2) and building a balanced binary search tree from the remaining elements, since searching it for the element immediately greater than the current one would be O(logn).
Hence the time complexity would come out to be O(n*log(n)).
Is there a better approach to this problem?
The problem you present is impossible to solve in O(n) time, since you can reduce sorting to it and thereby achieve sorting in O(n) time.
Say there exists an algorithm which solves the problem in O(n).
Let there be an element a.
The algorithm can also be used to find the smallest element to the left of and larger than a (by reversing the array before running the algorithm).
It can also be used to find the largest element to the right (or left) of and smaller than a (by negating the elements before running the algorithm).
So, after running the algorithm four times (in linear time), you know which elements should be to the right and to the left of each element. In order to construct the sorted array in linear time, you'd need to keep the indices of the elements instead of the values. You first find the smallest element by following your "larger-than pointers" in linear time, and then make another pass in the other direction to actually build the array.
Others have proved that it is impossible in general to solve in O(n).
However, it is possible to do it in O(m), where m is the value of your largest element.
This means that in certain cases (e.g. if your input array is known to be a permutation of the integers 1 up to n) it is possible to do it in O(n).
The code below shows the approach, built upon a standard method for computing the next greater element. (There is a good explanation of this method on GeeksforGeeks.)
def next_greater_element(A):
    """Return an array of indices to the next strictly greater element, -1 if none exists"""
    i = 0
    NGE = [-1] * len(A)
    stack = []
    while i < len(A) - 1:
        stack.append(i)
        while stack and A[stack[-1]] < A[i + 1]:
            x = stack.pop()
            NGE[x] = i + 1
        i += 1
    return NGE

def smallest_greater_element(A):
    """Return an array of the smallest greater element on the right side of each element"""
    top = max(A) + 1
    M = [-1] * top  # M will contain the index of each element, ordered by value
    for i, a in enumerate(A):
        M[a] = i
    N = next_greater_element(M)  # N contains an index to the next element with higher value (-1 if none)
    return [N[a] for a in A]

A = [8, 20, 9, 6, 15, 31]
print(smallest_greater_element(A))
The idea is to find the next element in size order with greater index. This next element will therefore be the smallest one appearing to the right.
This cannot be done in O(n), since we can reduce the Element Distinctness Problem (which is known to require Omega(nlogn) time in the comparison-based model) to it.
First, let's make a small extension to the problem, one that does not influence its hardness:
I have been given an array (of n elements) and I have to find the
smallest element on the right side of each element which is greater than
or equal to itself (the current element).
The addition is that we allow the element to be equal to it (and to its right), not only strictly greater than it (1).
Now, given an instance of element distinctness arr, run the algorithm for this problem and check whether there is any index i such that arr[i] == res[i]. If there isn't, answer "all distinct"; otherwise, "not all distinct".
However, since Element Distinctness requires Omega(nlogn) comparisons, this problem does as well.
(1)
One possible justification why adding equality does not make the problem harder: assuming the elements are integers, we can add i/(n+1) to each element of the array. Now, for any two elements, if arr[i] < arr[j] then also arr[i] + i/(n+1) < arr[j] + j/(n+1); and if arr[i] = arr[j] with i < j, then arr[i] + i/(n+1) < arr[j] + j/(n+1). So the same algorithm solves the problem with equalities as well.
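For concreteness, the perturbation described in (1) could look like this (a sketch, assuming integer input; the helper name is mine):
def break_ties(arr):
    n = len(arr)
    # equal values become strictly increasing with index, unequal values keep their order
    return [a + i / (n + 1.0) for i, a in enumerate(arr)]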

Largest 3 numbers c language [duplicate]

I have an array
A[4]={4,5,9,1}
I need it to give the top 3 elements, like 9, 5, 4.
I know how to find the max element, but how do I find the 2nd and 3rd max?
i.e. if
max = A[0];
for (i = 1; i < 4; i++)
{
    if (A[i] > max)
    {
        max = A[i];
        location = i + 1;
    }
}
Actually, sorting will not be suitable for my application because
the position is also important for me, i.e. I have to know at which positions the first 3 maxima occur; here they are at the 0th, 1st and 2nd positions. So I am thinking of a logic
where, after getting the max value, I put 0 at that location and apply the same steps to the new array, i.e. {4,5,0,1}.
But I am a bit confused about how to put my logic into code.
Consider using the technique employed in the Python standard library. It uses an underlying heap data structure:
from heapq import heapify, heappushpop
from itertools import islice

def nlargest(n, iterable):
    """Find the n largest elements in a dataset.
    Equivalent to: sorted(iterable, reverse=True)[:n]
    """
    if n < 0:
        return []
    it = iter(iterable)
    result = list(islice(it, n))
    if not result:
        return result
    heapify(result)  # turn the first n elements into a min-heap
    for elem in it:
        heappushpop(result, elem)  # keep only the n largest seen so far
    result.sort(reverse=True)
    return result
The steps are:
Make an n length fixed array to hold the results.
Populate the array with the first n elements of the input.
Transform the array into a minheap.
Loop over remaining inputs, replacing the top element of the heap if new data element is larger.
If needed, sort the final n elements.
The heap approach is memory efficient (not requiring more memory than the target output) and typically has a very low number of comparisons (see this comparative analysis).
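For the original question, where the positions also matter, one option (a sketch) is to feed (value, index) pairs to heapq.nlargest:
import heapq

A = [4, 5, 9, 1]
top3 = heapq.nlargest(3, ((v, i) for i, v in enumerate(A)))
print([v for v, i in top3])  # [9, 5, 4]
print([i for v, i in top3])  # [2, 1, 0] -- the positions where they occur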
You can use the selection algorithm.
Also worth mentioning: the complexity will be O(n), i.e. O(n) for selection and O(n) for iterating, so the total is also O(n).
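A sketch of what that could look like: a randomized quickselect finds the k-th largest value in expected O(n), and one more O(n) pass collects the top elements (function name is mine; the final pass assumes distinct values):
import random

def kth_largest(a, k):
    # expected O(n) randomized selection of the k-th largest value (1-indexed)
    target = len(a) - k  # index in ascending sorted order
    items = list(a)
    while True:
        pivot = random.choice(items)
        lt = [x for x in items if x < pivot]
        eq = [x for x in items if x == pivot]
        gt = [x for x in items if x > pivot]
        if target < len(lt):
            items = lt
        elif target < len(lt) + len(eq):
            return pivot
        else:
            target -= len(lt) + len(eq)
            items = gt

A = [4, 5, 9, 1]
t = kth_largest(A, 3)  # third largest value
print([(v, i) for i, v in enumerate(A) if v >= t])  # [(4, 0), (5, 1), (9, 2)]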
What you're essentially asking for is equivalent to sorting your array in descending order. The fastest way to do this is using heapsort or quicksort, depending on the size of your array.
Once your array is sorted your largest number will be at index 0, your second largest will be at index 1, ...., in general your nth largest will be at index n-1
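For example (a sketch):
A = [4, 5, 9, 1]
print(sorted(A, reverse=True)[:3])  # [9, 5, 4]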
You can follow this procedure:
1. Add the first n elements of A to another array B[n].
2. Sort the array B[n].
3. Then for each element in A[n...m], check whether
A[k] > B[0]
If so, the number A[k] is among the n largest, so
search for the proper position for A[k] in B[n], insert it there, and shift the smaller numbers left in B[n] so that B[n] still contains the n largest elements.
4. Repeat this for all elements in A[n...m].
At the end B[n] will have the n largest elements (a sketch is given below).
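A sketch of this procedure, keeping B sorted ascending so that B[0] is always the smallest of the current top n (function name is mine):
import bisect

def n_largest(A, n):
    B = sorted(A[:n])                # steps 1 and 2
    for x in A[n:]:                  # steps 3 and 4
        if x > B[0]:
            B.pop(0)                 # drop the current smallest of the top n
            bisect.insort(B, x)      # place x at its proper position
    return B

print(n_largest([4, 5, 9, 1], 3))  # [4, 5, 9]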

Algorithm Olympiad : conditional minimum in array

I have an array A = [a1, a2, a3, a4, a5...] and I want to find two elements of the array, say A[i] and A[j] such that i is less than j and A[j]-A[i] is minimal and positive.
The runtime has to be O(nlog(n)).
Would this approach do the job:
First sort the array and keep track of the original index of each element (i.e. the index of the element in the ORIGINAL, unsorted array).
Go through the sorted array and calculate the differences between any two successive elements that verify the initial condition that the Original Index of the bigger element is bigger than the original index of the smaller element.
The answer would be the minimum value of all these differences.
Here is how this would work on an example:
A = [0, -5, 10, 1]
In this case the result should be 1 coming from the difference between A[3] and A[0].
sort A : newA=[-5,0,1,10]
since OriginalIndex(-5)>OriginalIndex(0), do not compute the difference
since OriginalIndex(1)>OriginalIndex(0), we compute the difference = 1
since OriginalIndex(10) < OriginalIndex(1), do not compute the difference
The result is the minimal difference, which is 1.
Contrary to the claim made in the other post, there wouldn't be any problem regarding the runtime of your algorithm. Using heapsort, for example, the array could be sorted in O(n log n), as given as an upper bound in your question. An additional O(n) pass along the sorted array couldn't harm this any more, so you would still stay with a runtime of O(n log n).
Unfortunately your answer still doesn't seem to be correct as it doesn't give the correct result.
Taking a closer look at the example given you should be able to verify that yourself. The array given in your example was: A=[0,-5,10,1]
Counting from 0 and choosing indices i=2 and j=3 meets the given requirement i < j, as 2 < 3. Calculating the difference A[j] - A[i], which with the chosen values comes down to A[3] - A[2], gives 1 - 10 = -9, which is surely less than the minimal value of 1 calculated in the example application of your algorithm.
Since you're minimising the distance between elements, they must be next to each other in the sorted list (if they weren't then the element in between would be a shorter distance to one of them -> contradiction). Your algorithm runs in O(nlogn) as specified so it looks fine to me.
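A short sketch of the approach described above (sort the original indices by value, then scan adjacent pairs that respect the index condition; the function name is mine):
def min_positive_difference(A):
    order = sorted(range(len(A)), key=lambda i: A[i])  # original indices, sorted by value
    best = None
    for i, j in zip(order, order[1:]):
        # adjacent in sorted order; valid only if the smaller value comes first in the original array
        if i < j and A[j] > A[i]:
            d = A[j] - A[i]
            if best is None or d < best:
                best = d
    return best

print(min_positive_difference([0, -5, 10, 1]))  # 1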
