Find minimum number of increasing subsequences in a permutation - arrays

I want to find one minimum set of increasing subsequences (as a List<int[]>) covering a permutation of 1..n. So I don't just need the number, but the concrete subsequences.
Currently I find the longest increasing subsequence, delete its elements, and search again until no elements are left. But this isn't optimal.
Example:
1,4,3,2,5 -> 3 subsequences -> {1,2,5}, {4}, {3}
1,4,5,2,3 -> 2 subsequences -> {1,4,5}, {2,3}
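For what it's worth, here is a greedy sketch of my own (not code from the question) that produces a minimum cover directly: scan left to right and append each element to the subsequence whose current tail is the largest value still smaller than it, opening a new subsequence when no tail qualifies. By Dilworth's theorem the number of subsequences this produces equals the length of the longest decreasing subsequence, which is the minimum possible. Class and method names are illustrative.

import java.util.*;

public class MinIncreasingCover {
    // Greedy: extend the subsequence whose tail is the largest value < x,
    // otherwise open a new subsequence. The count matches the longest
    // decreasing subsequence (Dilworth), so the cover is minimal.
    static List<int[]> cover(int[] perm) {
        // maps current tail value -> the subsequence ending in that value
        TreeMap<Integer, List<Integer>> tails = new TreeMap<>();
        List<List<Integer>> result = new ArrayList<>();
        for (int x : perm) {
            Map.Entry<Integer, List<Integer>> e = tails.lowerEntry(x);
            List<Integer> seq;
            if (e == null) {               // no tail is smaller: open a new one
                seq = new ArrayList<>();
                result.add(seq);
            } else {                       // extend the best candidate
                seq = e.getValue();
                tails.remove(e.getKey());
            }
            seq.add(x);
            tails.put(x, seq);
        }
        List<int[]> out = new ArrayList<>();
        for (List<Integer> s : result)
            out.add(s.stream().mapToInt(Integer::intValue).toArray());
        return out;
    }

    public static void main(String[] args) {
        for (int[] seq : cover(new int[]{1, 4, 3, 2, 5}))
            System.out.println(Arrays.toString(seq));  // [1, 4, 5], [3], [2]
    }
}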

Related

Can iterative merge sort be used to count inversions?

I know that recursive merge sort works well to count inversions.
For example, if I want to count the number of smaller elements to the right of each element, would iterative (bottom-up) merge sort work the same way the recursive version does?
For example:
[5,2,6,1] -> [2,1,1,0]
5 has 2 smaller elements on the right [2,1]
2 has 1 smaller element on the right [1]
6 has 1 smaller element on the right [1]
1 has no smaller elements on the right
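For what it's worth, the bottom-up (iterative) variant does work: it performs the same half-range merges as the recursion, only scheduled differently, so the per-merge counting argument is unchanged. Below is a sketch of my own (not from the question) that merge-sorts an array of original indices bottom-up and, during each merge, credits every left-half element with the number of right-half elements that moved past it.

import java.util.*;

public class CountSmallerRight {
    // Bottom-up merge sort over original indices; whenever a right-half
    // element is placed before remaining left-half elements, those left-half
    // elements each gain one "smaller element to the right".
    static int[] countSmaller(int[] a) {
        int n = a.length;
        int[] idx = new int[n], buf = new int[n], count = new int[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        for (int width = 1; width < n; width *= 2) {
            for (int lo = 0; lo + width < n; lo += 2 * width) {
                int mid = lo + width, hi = Math.min(lo + 2 * width, n);
                int i = lo, j = mid, k = lo;
                while (i < mid && j < hi) {
                    if (a[idx[j]] < a[idx[i]]) {
                        buf[k++] = idx[j++];       // right element jumps ahead
                    } else {
                        count[idx[i]] += j - mid;  // right elements already passed
                        buf[k++] = idx[i++];
                    }
                }
                while (i < mid) { count[idx[i]] += j - mid; buf[k++] = idx[i++]; }
                while (j < hi) buf[k++] = idx[j++];
                System.arraycopy(buf, lo, idx, lo, hi - lo);
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(countSmaller(new int[]{5, 2, 6, 1})));
        // prints [2, 1, 1, 0]
    }
}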

Big O of the fastest algo to print the common elements between an unsorted array of length n and a sorted array of length m

My thinking is that it would take O(nm)
This is because
1 2 3 4
4 3 2 1
In order to find the common elements, you would iterate through the sorted array and the unsorted array (in this case the top array is sorted). Worst case, the unsorted array would be the sorted array reversed. Therefore, you would compare 1,4 followed by 1,3 and so on, then 2,4 followed by 2,3, etc.
Therefore, you would end up with O(nm)
Is this correct?
The fastest solution puts the elements of the smaller array into a hash table, then looks up the elements of the larger array. In practice this is O(max(n, m)).
Credits to j_random_hacker.
It should be O(n log m):
You can look up each item from the unsorted array in the sorted array using binary search, which is O(log m) time. Doing this for each of the n items in the unsorted array gives O(n log m) time total.
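As a concrete illustration of the hash-table approach (a sketch of my own, not code from either answer; it assumes each common element should be printed once):

import java.util.*;

public class CommonElements {
    // Insert the smaller array into a hash set, probe with the larger one.
    // Expected O(n + m) time, which beats the O(n log m) binary-search method.
    static void printCommon(int[] unsorted, int[] sorted) {
        int[] small = unsorted.length <= sorted.length ? unsorted : sorted;
        int[] large = (small == unsorted) ? sorted : unsorted;
        Set<Integer> seen = new HashSet<>();
        for (int x : small) seen.add(x);
        for (int x : large)
            if (seen.remove(x))          // remove so duplicates print only once
                System.out.println(x);
    }

    public static void main(String[] args) {
        printCommon(new int[]{4, 3, 2, 1}, new int[]{2, 4, 6, 8});  // prints 4 and 2
    }
}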

Permutation of number by desired order

I want to design an algorithm for permuting a list of distinct numbers into a specific order.
example :-
The numbers are
1 2 3 4
Order for permutation is
3 1 4 2
i.e. after the permutation the first number will go to the third place, the second to the first place, the third to the fourth place and the fourth to the second place.
Now the sequence for the numbers will be
2 4 1 3
Now, if the algorithm keeps permuting by the same order, then after some number of iterations it will regenerate the original input sequence and stop. For this case the total number of iterations is 4.
2 4 1 3
4 3 2 1
3 1 4 2
1 2 3 4
I am doing this by taking another array tmp[] alongside the two arrays number[] and order[]. I copy the elements of number[] into tmp[], placing each element at the position given by order[], and check for the original number sequence before the next iteration. If another iteration is needed, then number[] = tmp[] and the algorithm repeats the previous steps.
Now if the number of elements is large, e.g. 10^7 or higher, this method runs slowly.
Is there any better solution to find the number of iterations?
If you want to generate the permutation, your solution is already optimal because its complexity equals the size of the output.
However, if you are just interested in the number of distinct permutations you can generate, you can do much better:
Decompose your "order" into cycles: for instance, 3 1 4 2 is one cycle, 1 -> 3 -> 4 -> 2 -> 1, but 2 1 4 3 is two cycles, 1 -> 2 -> 1 and 3 -> 4 -> 3.
The number of distinct permutations is lcm(n1, …, np), where n1, …, np are the lengths of the cycles and lcm is the least common multiple.
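Here is a sketch of that cycle decomposition (my own code; it assumes order[] is 1-based, as in the question, and uses BigInteger since the lcm of the cycle lengths can overflow a long for large n):

import java.math.BigInteger;

public class PermutationOrder {
    // Decompose the order array into cycles and take the LCM of their lengths.
    static BigInteger iterations(int[] order) {
        int n = order.length;
        boolean[] visited = new boolean[n];
        BigInteger lcm = BigInteger.ONE;
        for (int i = 0; i < n; i++) {
            if (visited[i]) continue;
            int len = 0;
            for (int j = i; !visited[j]; j = order[j] - 1) {  // follow the cycle
                visited[j] = true;
                len++;
            }
            BigInteger l = BigInteger.valueOf(len);
            lcm = lcm.divide(lcm.gcd(l)).multiply(l);         // lcm(a,b) = a/gcd(a,b)*b
        }
        return lcm;
    }

    public static void main(String[] args) {
        System.out.println(iterations(new int[]{3, 1, 4, 2}));  // 4 (one 4-cycle)
        System.out.println(iterations(new int[]{2, 1, 4, 3}));  // 2 (two 2-cycles)
    }
}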

Maximizing a particular sum over all possible subarrays

Consider an array like this one below:
{1, 5, 3, 5, 4, 1}
When we choose a subarray, we reduce it to the lowest number in the subarray. For example, the subarray {5, 3, 5} becomes {3, 3, 3}. Now, the sum of the subarray is defined as the sum of the resultant subarray. For example, {5, 3, 5} the sum is 3 + 3 + 3 = 9. The task is to find the largest possible sum that can be made from any subarray. For the above array, the largest sum is 12, given by the subarray {5, 3, 5, 4}.
Is it possible to solve this problem in time better than O(n²)?
I believe that I have an algorithm for this that runs in O(n) time. I'll first describe an unoptimized version of the algorithm, then give a fully optimized version.
For simplicity, let's initially assume that all values in the original array are distinct. This isn't true in general, but it gives a good starting point.
The key observation behind the algorithm is the following. Find the smallest element in the array, then split the array into three parts - all elements to the left of the minimum, the minimum element itself, and all elements to the right of the minimum. Schematically, this would look something like
+-----------------------+-----+-----------------------+
|      left values      | min |     right values      |
+-----------------------+-----+-----------------------+
Here's the key observation: if you take the subarray that gives the optimum value, one of two things must be true:
That array consists of all the values in the array, including the minimum value. This has total value min * n, where n is the number of elements.
That array does not include the minimum element. In that case, the subarray has to be purely to the left or to the right of the minimum value and cannot include the minimum value itself.
This gives a nice initial recursive algorithm for solving this problem (a code sketch follows the steps below):
If the sequence is empty, the answer is 0.
If the sequence is nonempty:
Find the minimum value in the sequence.
Return the maximum of the following:
The best answer for the subarray to the left of the minimum.
The best answer for the subarray to the right of the minimum.
The number of elements times the minimum.
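A minimal sketch of that recursion (my own code, with a plain linear scan for the minimum, so it still has the worst-case behavior discussed next):

public class MaxMinSum {
    // Split on the minimum and recurse on both sides; half-open range [lo, hi).
    static long bestSum(int[] a, int lo, int hi) {
        if (lo >= hi) return 0;                       // empty sequence -> 0
        int minIdx = lo;
        for (int i = lo + 1; i < hi; i++)             // linear scan for the minimum
            if (a[i] < a[minIdx]) minIdx = i;
        long all = (long) a[minIdx] * (hi - lo);      // minimum times element count
        return Math.max(all,
               Math.max(bestSum(a, lo, minIdx),       // left of the minimum
                        bestSum(a, minIdx + 1, hi))); // right of the minimum
    }

    public static void main(String[] args) {
        System.out.println(bestSum(new int[]{1, 5, 3, 5, 4, 1}, 0, 6));  // 12
    }
}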
So how efficient is this algorithm? Well, that really depends on where the minimum elements are. If you think about it, we do linear work to find the minimum, then divide the problem into two subproblems and recurse on each. This is the exact same recurrence you get when considering quicksort. This means that in the best case it will take Θ(n log n) time (if we always have the minimum element in the middle of each half), but in the worst case it will take Θ(n²) time (if we always have the minimum value purely on the far left or the far right).
Notice, however, that all of the effort we're spending is being used to find the minimum value in each of the subarrays, which takes O(k) time for k elements. What if we could speed this up to O(1) time? In that case, our algorithm would do a lot less work. More specifically, it would do only O(n) work. The reason for this is the following: each time we make a recursive call, we do O(1) work to find the minimum element, then remove that element from the array and recursively process the remaining pieces. Each element can therefore be the minimum element of at most one of the recursive calls, and so the total number of recursive calls can't be any greater than the number of elements. This means that we make at most O(n) calls that each do O(1) work, which gives a total of O(n) work.
So how exactly do we get this magical speedup? This is where we get to use a surprisingly versatile and underappreciated data structure called the Cartesian tree. A Cartesian tree is a binary tree created out of a sequence of elements that has the following properties:
Each node is smaller than its children, and
An inorder walk of the Cartesian tree gives back the elements of the sequence in the order in which they appear.
For example, the sequence 4 6 7 1 5 0 2 8 3 has this Cartesian tree:
        0
       / \
      1   2
     / \   \
    4   5   3
     \     /
      6   8
       \
        7
And here's where we get the magic. We can immediately find the minimum element of the sequence by just looking at the root of the Cartesian tree - that takes only O(1) time. Once we've done that, when we make our recursive calls and look at all the elements to the left of or to the right of the minimum element, we're just recursively descending into the left and right subtrees of the root node, which means that we can read off the minimum elements of those subarrays in O(1) time each. Nifty!
The real beauty is that it is possible to construct a Cartesian tree for a sequence of n elements in O(n) time. This algorithm is detailed in this section of the Wikipedia article. This means that we can get a super fast algorithm for solving your original problem as follows:
Construct a Cartesian tree for the array.
Use the above recursive algorithm, but use the Cartesian tree to find the minimum element rather than doing a linear scan each time.
Overall, this takes O(n) time and uses O(n) space, which is a time improvement over the O(n²) algorithm you had initially.
At the start of this discussion, I made the assumption that all array elements are distinct, but this isn't really necessary. You can still build a Cartesian tree for an array with non-distinct elements in it by changing the requirement that each node is smaller than its children to be that each node is no bigger than its children. This doesn't affect the correctness of the algorithm or its runtime; I'll leave that as the proverbial "exercise to the reader." :-)
This was a cool problem! I hope this helps!
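For completeness, here is a sketch of the linear-time Cartesian tree construction mentioned above (my own code, following the stack-based method the Wikipedia article describes: keep the rightmost spine on a stack and attach each new node there):

import java.util.*;

public class CartesianTree {
    static final class Node {
        final int value;
        Node left, right;
        Node(int v) { value = v; }
    }

    // O(n) construction: each new value pops larger spine nodes (they become
    // its left subtree) and attaches as the right child of the remaining top.
    static Node build(int[] a) {
        Deque<Node> spine = new ArrayDeque<>();
        for (int v : a) {
            Node node = new Node(v), last = null;
            while (!spine.isEmpty() && spine.peek().value > v)
                last = spine.pop();           // every popped node is > v
            node.left = last;                 // largest popped subtree goes left
            if (!spine.isEmpty()) spine.peek().right = node;
            spine.push(node);
        }
        Node root = null;
        while (!spine.isEmpty()) root = spine.pop();  // bottom of stack is the root
        return root;
    }

    public static void main(String[] args) {
        Node root = build(new int[]{4, 6, 7, 1, 5, 0, 2, 8, 3});
        System.out.println(root.value);  // 0, the minimum, as in the tree above
    }
}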
Assuming that the numbers are all non-negative, isn't this just the "maximize the rectangle area in a histogram" problem, which has now become famous?
O(n) solutions are possible. This site: http://blog.csdn.net/arbuckle/article/details/710988 has a bunch of neat solutions.
To elaborate on what I am thinking (it might be incorrect): think of each number as a histogram rectangle of width 1.
By "minimizing" a subarray [i,j] and adding up, you are basically getting the area of the rectangle in the histogram which spans from i to j.
This has appeared before on SO: Maximize the rectangular area under Histogram, where you will find code, an explanation, and a link to the official solutions page (http://www.informatik.uni-ulm.de/acm/Locals/2003/html/judge.html).
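To make the connection concrete, here is a sketch of my own (not code from the linked pages) of the standard monotonic-stack solution to the histogram problem; under the width-1 reading above it returns exactly the maximum of min times length over all subarrays, in O(n), assuming non-negative values:

import java.util.*;

public class MaxMinTimesLength {
    // Monotonic stack of indices with increasing values; popping an index
    // fixes the widest subarray in which it is the minimum.
    static long maxSum(int[] a) {
        Deque<Integer> stack = new ArrayDeque<>();
        long best = 0;
        for (int i = 0; i <= a.length; i++) {
            int cur = (i == a.length) ? -1 : a[i];   // sentinel flushes the stack
            while (!stack.isEmpty() && a[stack.peek()] >= cur) {
                int height = a[stack.pop()];         // minimum of the span
                int left = stack.isEmpty() ? -1 : stack.peek();
                best = Math.max(best, (long) height * (i - left - 1));
            }
            stack.push(i);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxSum(new int[]{1, 5, 3, 5, 4, 1}));  // 12
    }
}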
The running time of the following algorithm I tried matches that of the algorithm initially used to sort the array. For example, if the initial array is sorted with binary tree sort, it will be O(n) in the best case and O(n log n) in the average case.
Gist of algorithm:
The array is sorted. The sorted values and the corresponding old indices are stored. A binary search tree is created from the corresponding old indices, which is used to determine how far one can go forwards and backwards without encountering a value less than the current value; this yields the maximum possible subarray.
I will explain the method with the array in the question [1, 5, 3, 5, 4, 1]
                    1   5   3   5   4   1
                   -------------------------
array indices =>    0   1   2   3   4   5
                   -------------------------
This array is sorted. Store the values and their indices in ascending order, which will be as follows:
                            1   1   3   4   5   5
                           -------------------------
original array indices =>   0   5   2   4   1   3
(referred to as old_index) -------------------------
It is important to have a reference to both the values and their old indices, like an associative array.
Few terms to be clear:
old_index refers to the corresponding original index of an element (that is, its index in the original array);
current_index refers to the index of the element in the sorted array;
for example, for element 4, old_index is 4 and current_index is 3.
current_array_value refers to the current element value in the sorted array.
pre refers to the inorder predecessor; succ refers to the inorder successor.
Also, min and max values can be got directly, from first and last elements of the sorted array, which are min_value and max_value respectively;
Now, the algorithm is as follows which should be performed on sorted array.
Algorithm:
Proceed from the leftmost element.
For each element from the left of the sorted array, apply this algorithm:
if (element == min_value) {
    max_sum = element * array_length;
    if (max_sum > current_max)
        current_max = max_sum;
    push old_index into the BST;
} else if (element == max_value) {
    // here current_index is the index in the sorted array
    max_sum = element * (array_length - current_index);
    if (max_sum > current_max)
        current_max = max_sum;
    push old_index into the BST;
} else {
    // pseudocode steps to determine the maximum possible subarray containing the current element
    // pre is the inorder predecessor and succ is the inorder successor
    get the inorder predecessor and successor of old_index from the BST;
    if (pre == NULL) {
        max_sum = succ * current_array_value;
        if (max_sum > current_max)
            current_max = max_sum;
    } else if (succ == NULL) {
        max_sum = ((array_length - pre) - 1) * current_array_value;
        if (max_sum > current_max)
            current_max = max_sum;
    } else {
        // find the maximum possible subarray streak from the values
        max_sum = [((succ - old_index) - 1) + ((old_index - pre) - 1) + 1] * current_array_value;
        if (max_sum > current_max)
            current_max = max_sum;
    }
    push old_index into the BST;
}
For example,
original array is
                    1   5   3   5   4   1
                   -------------------------
array indices =>    0   1   2   3   4   5
                   -------------------------
and the sorted array is
                            1   1   3   4   5   5
                           -------------------------
original array indices =>   0   5   2   4   1   3
(referred to as old_index) -------------------------
After first element:
max_sum = 6 [it will reduce to 1*6]
0
After second element:
max_sum = 6 [it will reduce to 1*6]
0
 \
  5
After third element:
0
 \
  5
 /
2
inorder traversal results in: 0 2 5
applying the algorithm,
max_sum = [((succ - old_index) - 1) + ((old_index - pre) - 1) + 1] * current_array_value;
max_sum = [((5-2)-1) + ((2-0)-1) + 1] * 3
= 12
current_max = 12 [the maximum possible value]
After fourth element:
0
 \
  5
 /
2
 \
  4
inorder traversal results in: 0 2 4 5
applying the algorithm,
max_sum = 8 [which is discarded since it is less than 12]
After fifth element:
max_sum = 10 [reduces to 2 * 5, discarded since it is less than 12]
After last element:
max_sum = 5 [reduces to 1 * 5, discarded since it is less than 12]
As noted at the start, this algorithm has the order of the sorting algorithm initially used to sort the array: with binary tree sort, O(n) in the best case and O(n log n) in the average case.
The space complexity will be O(n) [three arrays of size n: one for the sorted values, another for the old indices, and another for constructing the BST]. However, I'm not sure about this. Any feedback on the algorithm is appreciated.

Minimum Number of Operations to make an array sorted

I have been trying this problem on SPOJ but am not able to come up with the correct approach.
What is the correct algorithm to solve the problem?
You should find the longest consecutive increasing subsequence, which can be done in O(n log n) (by sorting the array); after that, the number of moves needed is N minus the length of that subsequence. Note that by consecutive I mean consecutive in the sorted array.
e.g:
1 7 6 2 5 4 3 => 1-2-3 is the longest consecutive increasing subsequence, so the number of moves needed is 4.
1 6 4 3 5 2 7 => 1-2 or 4-5 or 6-7 is the longest consecutive increasing subsequence; note that 1-4-5-7 is the longest increasing subsequence, but the number of moves needed is 5, not 3.
Why this works:
The best algorithm leaves some items' positions unchanged; call the biggest subsequence of unchanged items X. Since the positions of the X items relative to each other never change during the operations, they must already be in increasing order. But because you are only allowed to move items to the front or the back of the array, you can't put any item between two X items (note that we assumed X is the biggest subsequence unchanged during the operations), so there can be no gap between the X values: they must be consecutive in sorted order.
So the number of moves needed can't be smaller than N minus the length of X, and it is also not hard to do the job with exactly N minus the length of X operations.
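Here is a sketch of this approach (my own code; it assumes the input is a permutation of 1..n, as in the examples): record each value's position, then the longest run of consecutive values with increasing positions is the part you can leave untouched.

public class MinMovesToSort {
    // pos[v] = index of value v; the longest chain of consecutive values
    // v, v+1, ... whose positions increase is the block left in place.
    static int minMoves(int[] a) {
        int n = a.length;
        int[] pos = new int[n + 1];
        for (int i = 0; i < n; i++) pos[a[i]] = i;
        int best = 1, run = 1;
        for (int v = 2; v <= n; v++) {
            run = (pos[v] > pos[v - 1]) ? run + 1 : 1;  // extend or restart the run
            best = Math.max(best, run);
        }
        return n - best;
    }

    public static void main(String[] args) {
        System.out.println(minMoves(new int[]{1, 7, 6, 2, 5, 4, 3}));  // 4
        System.out.println(minMoves(new int[]{1, 6, 4, 3, 5, 2, 7}));  // 5
    }
}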
