Let's say I have an array a of length n and a second array indices, also of length n. indices contains some arbitrary permutation of the sequence [0, n). I want to rearrange a such that it's in the order specified by indices. For example, using D syntax:
auto a = [8, 6, 7, 5, 3, 0, 9];
auto indices = [3, 6, 2, 4, 0, 1, 5];
reindexInPlace(a, indices);
assert(a == [5, 9, 7, 3, 8, 6, 0]);
Can this be done in both O(1) space and O(n) time, preferably without mutating indices?
With mutating indices, yes :(. Without mutating them it looks hard (see stable in-place merge sort).
a = [8, 6, 7, 5, 3, 0, 9]
indices = [3, 6, 2, 4, 0, 1, 5]

for i in xrange(len(a)):
    x = a[i]
    j = i
    while True:
        k = indices[j]
        indices[j] = j
        if k == i:
            break
        a[j] = a[k]
        j = k
    a[j] = x

print a
This is what I call a "permute from" algorithm. In a C-like language it would look as follows:
for (i_dst_first = 0; i_dst_first < n; ++i_dst_first)
{
    /* Check if this element needs to be permuted */
    i_src = indices[i_dst_first];
    assert(i_src < n);
    if (i_src == i_dst_first)
        /* This element is already in place */
        continue;

    i_dst = i_dst_first;
    pending = a[i_dst];

    /* Follow the permutation cycle */
    do
    {
        a[i_dst] = a[i_src];
        indices[i_dst] = i_dst;

        i_dst = i_src;
        i_src = indices[i_src];
        assert(i_src != i_dst);
    } while (i_src != i_dst_first);

    a[i_dst] = pending;
    indices[i_dst] = i_dst;
}
Note though that this algorithm destroys the index array. I call it "permute from" since the indices[i] value specifies from where to take the i-th element of the resultant sequence.
Note also that the number of "element move" operations required for in-place permutation of a sequence is equal to the number of misplaced elements plus the number of cycles in the permutation. This algorithm achieves this limit, so in terms of the number of moves no better algorithm is possible.
A potential problem with this algorithm is that it is based on a "juggling" approach, making its cache behavior far from optimal. So, while this algorithm is the best one in theory, it could lose to some more "practical" algorithms in real life.
One can also implement a "permute to" algorithm, where the indices[i] value specifies where to relocate the original i-th element, as sketched below.
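For illustration, here is a minimal Java sketch of such a "permute to" variant (my own code, not part of the answer above). It assumes indices holds a valid permutation and, like the C version above, follows cycles in O(n) time and O(1) extra space, resetting the visited entries of indices to the identity as it goes.

public class PermuteTo {
    static void permuteTo(int[] a, int[] indices) {
        for (int start = 0; start < a.length; start++) {
            if (indices[start] == start)
                continue;                    // fixed point, or cycle already handled
            int pending = a[start];          // element currently looking for its home
            int cur = start;
            do {
                int dst = indices[cur];
                int displaced = a[dst];      // element evicted from the destination
                a[dst] = pending;
                pending = displaced;
                indices[cur] = cur;          // mark this slot as done
                cur = dst;
            } while (cur != start);
        }
    }

    public static void main(String[] args) {
        int[] a = {8, 6, 7, 5, 3, 0, 9};
        int[] indices = {3, 6, 2, 4, 0, 1, 5};
        permuteTo(a, indices);
        // prints [3, 0, 7, 8, 5, 9, 6]: each original a[i] now sits at the
        // position that the original indices[i] named.
        System.out.println(java.util.Arrays.toString(a));
    }
}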
If a is an array of integers, then an O(n)-time, O(1)-space algorithm is possible that leaves the indices array intact. In this case we can permute a into indices and use a as temporary storage for the inverse permutation. After the permutation is performed, the arrays a and indices are swapped, and indices is inverted in situ using e.g. Algorithm J from TAoCP. The following is a working Java program:
int [] a = {8, 6, 7, 5, 3, 0, 9};
int [] indices = {3, 6, 2, 4, 0, 1, 5};
int n = indices.length;
int i, j, m;

// permute a and store in indices
// store inverse permutation in a
for (j = 0; j < n; ++j) {
    i = indices[j]; indices[j] = a[i]; a[i] = j;
}

// swap a and indices
for (j = 0; j < n; ++j) {
    i = indices[j]; indices[j] = a[j]; a[j] = i;
}

// inverse indices permutation to get the original
for (i = 0; i < n; ++i) {indices[i] = -indices[i] - 1;}
for (m = n - 1; m >= 0; --m) {
    // for (i = m, j = indices[m]; j >= 0; i = j, j = indices[j]) ;
    i = m; j = indices[m];
    while (j >= 0) {i = j; j = indices[j];}
    indices[i] = indices[-j - 1];
    indices[-j - 1] = m;
}
This answers the question when indices array is mutable.
Here is a solution when it is not mutable.
void mutate(int[] input, int[] indices) {
    int srcInd;
    for (int tarInd = 0; tarInd < input.length; tarInd++) {
        srcInd = indices[tarInd];
        while (srcInd < tarInd) {
            // when src is behind, it will have its final value already and the original
            // value would have been swapped with src's src pos. Keep searching for the
            // original value until it is somewhere ahead of tarInd.
            srcInd = indices[srcInd];
        }
        // swap(arr, x, y) is assumed to be a helper that exchanges arr[x] and arr[y]
        swap(input, srcInd, tarInd);
    }
}
I think the classic way to deal with this problem is to work round the cycles, and to do this you need a marker bit per data item from somewhere. Here I pinched the top bit of the index array, which you could restore afterwards - of course this assumes that you don't have negative array indexes or aren't using all bits of an unsigned number as an index. One reference for this is Knuth Volume 1, section 1.3.3, the answer to question 12, which deals with the special case of transposing a matrix. Knuth gives references to slower in-place methods. The paper "Permuting in Place" by Fich, Munro, and Poblete claims n log n time and O(1) space in the worst case.
import java.util.Arrays;
public class ApplyPerm
{
public static void reindexInPlace(int[] rearrangeThis, int[] indices)
{
final int TOP_BIT = 0x80000000;
for (int pos = 0; pos < rearrangeThis.length; pos++)
{
if ((indices[pos] & TOP_BIT) != 0)
{ // already dealt with this
continue;
}
if (indices[pos] == pos)
{ // already in place
continue;
}
// Now shift an entire cycle along
int firstValue = rearrangeThis[pos];
int currentLocation = pos;
for (;;)
{
// pick up untouched value from here
int replaceBy = indices[currentLocation];
// mark as dealt with for the next time we see it
indices[currentLocation] |= TOP_BIT;
if (replaceBy == pos)
{ // have worked our way round
rearrangeThis[currentLocation] = firstValue;
break;
}
if ((replaceBy & TOP_BIT) != 0)
{
throw new IllegalArgumentException("Duff permutation");
}
// Move value up
rearrangeThis[currentLocation] = rearrangeThis[replaceBy];
// and fill in source of value you have just moved over
currentLocation = replaceBy;
}
}
}
public static void main(String[] s)
{
int[] a = new int[] {8, 6, 7, 5, 3, 0, 9};
int[] indices = new int[] {3, 6, 2, 4, 0, 1, 5};
reindexInPlace(a, indices);
System.out.println("Result is " + Arrays.toString(a));
}
}
You can do this by hiding the values in the real array. This way you can do it in both O(1) space and O(n) time.
Basically, you traverse the indices array first and, for each position, store the value that belongs there. This can be done with the encoding of your choice; for me, the simplest is to pack the permuted value into the upper half of each element's bits. Do this in one traversal. Now the base array would be messed up.
During the second traversal, shift the upper half of the bits down into the lower half.
The obvious disadvantage of this technique is that the stored integer
value can hold at most half the bits. Meaning if you are dealing
with 4-byte integers, the values can only be 2 bytes. However, instead of using up half of each element as shown in the code below, it can be enhanced by using a better algorithm where you hide the value in the index array. There the number of bits reserved in the worst case would be the number of bits needed to represent the length of the array, rather than the constant 16 in the previous case. It will perform worse than the former when the length exceeds 2^16.
import java.util.Arrays;

class MyClass {
    public static void main(String[] args) {
        MyClass myClass = new MyClass();
        int[] orig_array = {8, 6, 7, 5, 3, 0, 9};
        int[] indices = {3, 6, 2, 4, 0, 1, 5};
        myClass.meth(orig_array, indices);
    }

    public void meth(int[] orig_array, int[] indices){
        for(int i=0;i<orig_array.length;i++)
            orig_array[i] += orig_array[indices[i]] + orig_array[indices[i]] << 15 ;
        for(int i=0;i<orig_array.length;i++)
            orig_array[i] = orig_array[i] >> 16;
        System.out.print(Arrays.toString(orig_array));
    }
}
Here's a C++ version (it modifies the indices):
#include <algorithm>
#include <iterator>
template<class It, class ItIndices>
void permutate_from(
    It const begin,
    typename std::iterator_traits<It>::difference_type n,
    ItIndices indices)
{
    using std::swap;
    using std::iter_swap;
    for (typename std::iterator_traits<It>::difference_type i = 0; i != n; ++i)
    {
        for (typename std::iterator_traits<ItIndices>::value_type j = i; ; )
        {
            swap(j, indices[j]);
            if (j == i) { break; }
            iter_swap(begin + j, begin + indices[j]);
        }
    }
}
Example:
int main()
{
    int items[] = { 2, 0, 1, 3 };
    int indices[] = { 1, 2, 0, 3 };
    permutate_from(items, 4, indices);
    // Now items[] == { 0, 1, 2, 3 }
}
JavaScript version
var input = [1,2,3,4,5],
    specArr = [0,2,1,4,3];

function mutate(input, specArr) {
    // keep track of array items we've already swapped (wouldn't want to mutate twice :D)
    var visited = [];
    for (var i = 0; i < specArr.length; i++) {
        var tmp;
        // if the index hasn't changed, or we already handled it, do nothing to the input arr
        if (visited.indexOf(i) < 0 && specArr[i] !== i) {
            // if it has changed, temporarily store the value
            tmp = input[i];
            // swap input array item with spec item
            input[i] = input[specArr[i]];
            // swap specced array item with input item above
            input[specArr[i]] = tmp;
            visited.push(specArr[i]);
        }
    }
    // note: this swaps pairs, so it assumes the spec consists of fixed points
    // and two-element swaps, as in the example above
}
mutate(input, specArr);
I have been stuck on this question for a long time.
The question is: move the duplicates to the end of the array while preserving the order (time complexity must be O(n log n)).
Limitations:
order must be preserved.
can use only 1 helper array.
the range of values in the array is not known and can be much larger than n (which is the size of the array).
values are positive.
time complexity - O(n log n).
Example:
arr = {7, 3, 1, 2, 7, 9, 3, 2, 5, 9, 6, 2}, n = 12
result:
arr = {7, 3, 1, 2, 9, 5, 6, 2, 9, 2, 3, 7}
This is what I have done so far, but I am stuck from there:
copied the original array to a temp array
sorted the temp array using quicksort
moved the duplicates to the end in the temp array
I know that I need to do some comparison with the original array, but this is the part where I am stuck.
code:
int findDulpilcatesV2(int* arr, int n) {
    int i, * tempArr, j = 0, countNonRepeatEl = 0, searchKeyIndex, elIndex;
    tempArr = (int*)calloc(n, sizeof(int));
    assert(tempArr);
    // Saving original array in temp array
    for (i = 0; i < n; i++)
    {
        tempArr[i] = arr[i];
    }
    // Sorting temp array
    quickSort(tempArr, 0, n - 1);
    // Move duplicates to the end
    for (i = 0; i < n; i++)
    {
        if (i == n - 1 || tempArr[i] != tempArr[i + 1])   // guard the last element to avoid reading past the array
        {
            swap(&tempArr[j], &tempArr[i]);
            countNonRepeatEl++;
            j++;
        }
    }
    free(tempArr);
    tempArr = NULL;
    return countNonRepeatEl;
}
Two arrays (nums1 and nums2) of length m and n respectively have to be merged and sorted into the array nums1. The length of nums1 is m+n, and the last n elements of nums1 are 0.
Nothing is to be returned; nums1 has to be modified in place.
LeetCode question
Example
Input
nums1 = [1, 2, 3, 0, 0, 0]
m = 3
nums2 = [2, 5, 6]
n = 3
Output [1, 2, 2, 3, 5, 6]
Explanation
The arrays we are merging are [1,2,3] and [2,5,6]. The result of the merge is [1,2,2,3,5,6], with the elements [1,2,3] coming from nums1.
I have attempted the question, but I have no idea why it's not working.
var merge = function(nums1, m, nums2, n) {
    let j = 0
    for (let i = m; i < n; i++) {
        nums1[i] = nums2[j]
        j++
    }
};
Driver code:
let nums1 = [1,2,3,0,0,0], m = 3, nums2 = [2,5,6], n = 3
merge(nums1, m, nums2, n)
console.log(nums1)
Output:
[1, 2, 3, 0, 0, 0]
You've misunderstood the purpose of m and n. You should step through this in the debugger, and you'll quickly notice your loop is never entered.
They're not the first and last indices of the "zero region". They're the first index of that region, and the count of zeros found after it (which is the same as the length of the nums2 array that will be merged into nums1). This would be more apparent if the names weren't as useless as m and n.
A performant solution to this would involve doing it in place (in fact, the problem outline requires that you modify nums1 rather than make a new array).
You know both arrays are sorted, so "merging" only requires you to pick the smallest element from each. The problem is that all your "free space" in nums1 (the 0s) is located at the back, which makes it hard to make room at the beginning of the array to insert elements from nums2.
Instead, you can rely on this fact: an array that's sorted smallest-to-largest is also sorted when reversed, except largest-to-smallest. While this may be obvious, it's the key trick: you fill nums1 starting from the end and working towards the start. At every step, you replace a 0 (and eventually, the other numbers) with the largest element from the "tail" of nums1 or nums2.
Doing this all the way through gives you a time complexity of O(n) with no extra space used.
Use the concat function of the array and then the sort function (with a numeric comparator)
e.g.
num1 = num1.concat(num2)
num1.sort((a, b) => a - b)
OR
num1 = num1.concat(num2).sort((a, b) => a - b)
Are you searching for something like this?
nums1 = [...nums1, ...nums2].sort((a, b) => a - b).filter(i => i != 0);
console.log(nums1);
Steps taken here:
First concat the two arrays
Then sort the array in ascending order (with a numeric comparator)
Then remove the 0s from the array
Check this out:
tempNums1 = nums1.slice(0, m);
tempNums2 = nums2.slice(0, n);
arr = [...tempNums1, ...tempNums2].sort((a,b) => { return a - b; });
nums1.splice(0, nums1.length);
for(let i=arr.length - 1;i>=0;i--){
nums1.unshift(arr[i]);
}
Here the trick is that splice and unshift mutate nums1 itself, so the change takes effect outside of the function scope
Updated:
nums1.splice(m, nums1.length);// O(n)
for(let j=0; j< n; j++){
nums1.unshift(nums2[j]); // O(n)
}
nums1.sort((a, b) => a - b); // O(n log n)
There you go
public void merge(int[] nums1, int m, int[] nums2, int n) {
    if (n == 0) {
        return;
    }
    int i = m - 1;                 // last real element of nums1
    int j = n - 1;                 // last element of nums2
    int k = nums1.length - 1;      // last slot of nums1 (index m + n - 1)
    // Fill nums1 from the back with the larger of the two current tails
    while (i >= 0 && j >= 0) {
        if (nums2[j] > nums1[i]) {
            nums1[k--] = nums2[j--];
        } else {
            nums1[k--] = nums1[i--];
        }
    }
    // Copy whatever is left of nums2 (leftovers of nums1 are already in place)
    while (j >= 0) {
        nums1[k--] = nums2[j--];
    }
}
Given 2 arrays of integers (unsorted, may contain duplicate elements), e.g.:
int[] left = {1, 5, 3};
int[] right = {2, 2};
We can get sums of subsets of the left array by picking or not picking each element (2^n combinations), so all the possible sums (after removing duplicate sums) are:
{0, 1, 3, 4, 5, 6, 8, 9}
Same thing for the right array; the sums of subsets of the right array are:
{0, 2, 4}
Then, the max common sum of subsets of these 2 arrays is 4, because 4 = left[0] + left[2] = right[0] + right[1] and it's the max.
Question: how to get the max common sum and the indexes that construct this sum from the 2 arrays? (If there are multiple combinations giving the same max sum in one array, just return one combination.) Is there a better way to get the max common sum without calculating all the possible subset sums?
I think this solution using bitsets in C++ will work.
#include <bitset>
using namespace std;

// returns maximum possible common subset sum
int fun(int left[], int n, int right[], int m){
    // for the given constraints, the maximum possible sum is 10^7
    // (static: 10^7 bits is too large to put on the stack)
    static bitset<10000001> b, b1;
    b.reset(); b1.reset();
    b[0] = b1[0] = 1;
    for(int i=0;i<n;i++){
        b |= b << left[i];
    }
    for(int i=0;i<m;i++){
        b1 |= b1 << right[i];
    }
    // After the above loops, b and b1 contain all possible unique values of subset sums.
    // Just loop from the most significant bit and find the highest position in which the
    // bits of both b and b1 are set.
    // That position is the maximum possible common subset sum.
    for(int s = 10000000; s > 0; s--){
        if(b[s] && b1[s]) return s;
    }
    return 0; // the empty subset sum is always common
    // For indices, any standard algorithm for finding a subset
    // with a particular sum will do.
}
Based on the method pointed out by risingStark for finding the maximum common sum and on Print all subsets with given sum for finding indexes of summands, and since the question uses Java syntax, here's an unoptimized and unbeautified Java program with some example data sets:
import java.util.Arrays;
import java.math.BigInteger;
public class _68232965
{
static int sum;
static boolean found;
public static void main(String[] args)
{
{ int[][] lr = { {1, 5, 3}, {2, 2} }; maxcommsum(lr); }
{ int[][] lr = { {1,1,2,3,4}, {2, 2} }; maxcommsum(lr); }
{ int[][] lr = { {1, 2, 3}, {2, 2} }; maxcommsum(lr); }
{ int[][] lr = { {3,3,3,10}, {9,13} }; maxcommsum(lr); }
}
static void maxcommsum(int[][] lr)
{
for (var a: lr) System.out.println(Arrays.toString(a));
var s = new BigInteger[] { BigInteger.ONE, BigInteger.ONE };
for (int j, i = 0; i < 2; ++i)
for (j = 0; j < lr[i].length; ++j) s[i] = s[i].shiftLeft(lr[i][j]).or(s[i]);
while (s[0].bitLength() != s[1].bitLength())
{ // find the maximum common sum
int larger = s[0].bitLength() < s[1].bitLength() ? 1 : 0;
s[larger] = s[larger].clearBit(s[larger].bitLength()-1);
}
sum = s[0].bitLength()-1;
System.out.println("sum = "+sum);
for (var a: lr) { found = false; f(a, 0, 0); System.out.println("<= indexes"); }
}
static void f(int[] pat, int i, int currSum)
{ // find indexes of summands
if (currSum == sum)
{
found = true;
return;
}
if (currSum < sum && i < pat.length)
{
f(pat, i+1, currSum + pat[i]); if (found) { System.out.print(i+" "); return; }
f(pat, i+1, currSum);
}
}
}
I am working on a problem where I have an unsorted array, and I need to process this array to generate an index array as if it were sorted in ascending order.
Example 1:
let's say I have an unsorted array [9, 7, 8, 6, 12]
And as an output, I need an index array [3, 1, 2, 0, 4].
Example 2:
Unsorted array : [10, 9, 11, 8, 12]
Index array should be : [ 2, 1, 3, 0, 4]
As of now, I am doing it just like an old "bubble sort", where I'm comparing each and every possibility. I was wondering if there is a way to make it faster.
If you are not worried about extra space, do this:
Make an array of pairs (value, index)
Sort pairs on the value (first member) in ascending order
Harvest indexes (second member) from the sorted array of pairs
Using your data as an example, you would get this:
[{9,0}, {7,1}, {8,2}, {6,3}, {12,4}] // Step 1
[{6,3}, {7,1}, {8,2}, {9,0}, {12,4}] // Step 2
[ 3, 1, 2, 0, 4 ] // Step 3
(comment) What I need is the index of an unsorted array as if it were sorted in ascending order.
You can use array from step 3 to produce this output as well. Using your second example, you get this:
[{10,0}, {9,1}, {11,2}, {8,3}, {12,4}]
[ {8,3}, {9,1}, {10,0}, {11,2}, {12,4}]
[ 3, 1, 0, 2, 4 ]
Now create the output array, walk the array of indexes (i.e. [3,1,0,2,4]) and set the index of each item into the position in the result determined by the value, i.e. index 3 would get 0 because 3 is at index 0, index 1 would get 1 because 1 is at 1, index 0 would get 2 because 0 is at 2, and so on.
Here is the illustration of that additional step:
int position[] = {3, 1, 0, 2, 4};
int res[5];
for (int i = 0 ; i != 5 ; i++) {
    res[position[i]] = i;
}
This produces the following array:
[2, 1, 3, 0, 4]
"Fast" means you need a sorted data structure with an complexity of O(log n) for inserts (and lookups of course). So a binary tree would do.
You create the index as an array of positions, and initialize it to the existing order: idx=[0, 1, 2, ...., n-1].
Then you sort the index array using your favorite sorting algorithm, but whenever performing a comparison, you use the values as positions to reference the original array instead of comparing them directly. For example, to compare the items i and j, you perform cmp(arr[idx[i]], arr[idx[j]]) instead of cmp(idx[i], idx[j]).
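For example, here is a minimal Java sketch of that trick (the helper name argsort is mine): sort a boxed array of positions with a comparator that looks through into the original values.

import java.util.Arrays;
import java.util.Comparator;

public class ArgSort {
    static Integer[] argsort(int[] arr) {
        Integer[] idx = new Integer[arr.length];
        for (int i = 0; i < idx.length; i++)
            idx[i] = i;                                          // start from the identity order
        Arrays.sort(idx, Comparator.comparingInt(i -> arr[i]));  // compare arr[idx[i]], not idx[i]
        return idx;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(argsort(new int[]{9, 7, 8, 6, 12})));   // [3, 1, 2, 0, 4]
    }
}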
Did you try Radix Sort:
/***************************************************************************
 * Approach:
 * radix sort, like counting sort and bucket sort, is an integer based
 * algorithm (i.e. the values of the input array are assumed to be
 * integers). Hence radix sort is among the fastest sorting algorithms
 * around, in theory. The particular distinction for radix sort is that it
 * creates a bucket for each cipher (i.e. digit); as such, similar to
 * bucket sort, each bucket in radix sort must be a growable list that may
 * admit different keys.
 ***************************************************************************/
import java.io.IOException;

public class RadixSort {
    public static void sort(int[] a)
    {
        int i, m = a[0], exp = 1, n = a.length;
        int[] b = new int[n];   // output buffer must be able to hold all n elements
        for (i = 1; i < n; i++)
            if (a[i] > m)
                m = a[i];
        while (m / exp > 0)
        {
            int[] bucket = new int[10];
            for (i = 0; i < n; i++)
                bucket[(a[i] / exp) % 10]++;
            for (i = 1; i < 10; i++)
                bucket[i] += bucket[i - 1];
            for (i = n - 1; i >= 0; i--)
                b[--bucket[(a[i] / exp) % 10]] = a[i];
            for (i = 0; i < n; i++)
                a[i] = b[i];
            exp *= 10;
        }
    }

    public static void main(String[] args) throws IOException {
        int[] aa = {9, 7, 8, 6, 12};
        for (int i = 0; i < aa.length; i++) {
            System.out.print(aa[i] + " ");
        }
        System.out.println();
        sort(aa);
        for (int i = 0; i < aa.length; i++) {
            System.out.print(aa[i] + " ");
        }
    }
}
I came across this question on a website. As mentioned there, it was asked in an Amazon interview. I couldn't figure out a proper solution within the given constraints.
Given an array of n integers, find 3 elements such that a[i] < a[j] < a[k] and i < j < k in O(n) time.
So here is how you can solve the problem. You need to iterate over the array three times. On the first iteration mark, for each position, whether there is an element greater than it on its right, and on the second iteration mark whether there is an element smaller than it on its left. Now your answer would be an element that has both:
int greater_on_right[SIZE];
int smaller_on_left[SIZE];
memset(greater_on_right, -1, sizeof(greater_on_right));
memset(smaller_on_left, -1, sizeof(smaller_on_left));
int n; // number of elements;
int a[n]; // actual elements;
int greatest_value_so_far = a[n - 1];
int greatest_index = n - 1;
for (int i = n - 2; i >= 0; --i) {
    if (greatest_value_so_far > a[i]) {
        greater_on_right[i] = greatest_index;
    } else {
        greatest_value_so_far = a[i];
        greatest_index = i;
    }
}
// Do the same on the left with smaller values
for (int i = 0; i < n; ++i) {
    if (greater_on_right[i] != -1 && smaller_on_left[i] != -1) {
        cout << "Indices:" << smaller_on_left[i] << ", " << i << ", " << greater_on_right[i] << endl;
    }
}
This solution iterates 3 times over the whole array and is therefore linear. I have not provided the whole solution (the pass that fills smaller_on_left) so that you can train yourself and see if you get my idea. I am sorry not to just give some tips, but I couldn't figure out how to give a tip without showing the actual solution.
Hope this solves your problem.
One-pass linear time, with O(1) extra space (4 variables). Very efficient (only a couple comparisons/branches per iteration, and not much data shuffling).
This is NOT my original idea or algorithm; I just tidied up and commented the code in an ideone fork. You can add new test-cases to the code there and run it online. The original is by Kenneth, posted in comments on a thread on www.geeksforgeeks.org. Great algorithm, but the original implementation had some really silly code outside of the actual loop (e.g., instead of local variables, let's use two member-variables in a class, and implement the function as a member-function of class Solution... And the variable names sucked. I went for quite verbose ones.)
Kenneth, if you want to post your code as an answer, go ahead. I'm not trying to steal credit for the algo. (I did put some work into writing up this explanation, and thinking through why it works, though.)
The main article above the discussion thread has the same solution as Ivaylo Strandjev's answer. (The main article's code is what Pramod posted as an answer to this question, months after Ivaylo's answer. That's how I found the interesting answers in comments there.)
Since you only need to find a solution, not all of them, there aren't as many corner cases as you'd expect. It turns out you don't need to keep track of every possible start and middle value you've seen, or even backtrack at all, if you choose the right things to keep as state.
The main tricks are:
The last value in a sequence of monotonically decreasing values is the only one you need to consider. This applies to both first(low) and second(mid) candidate elements.
Any time you see a smaller candidate for a middle element, you can start fresh from there, just looking for either a final element or an even better mid-candidate.
If you didn't already find a sequence of 3 increasing elements before an element smaller than your current mid-candidate, min-so-far and the new smaller middle-candidate are as good (as forgiving, as flexible) as you can do out of the numbers you've already checked. (See the comments in the code for a maybe-better way of phrasing this.)
Several other answers make the mistake of starting fresh every time they see a new smallest or largest element, rather than middle. You track the current min that you've seen, but you don't react or make use of it until you see a new middle.
To find new candidate middle elements, you check if they're smaller than the current middle-candidate, and != min element seen so far.
I'm not sure if this idea can be extended to 4 or more values in sequence. Finding a new candidate 3rd value might require tracking the min between the current candidate second and third separately from the overall min. This could get tricky, and require a lot more conditionals. But if it can be done correctly with constant-size state and one pass without backtracking, it would still be linear time.
// Original had this great algorithm, but a clumsy and weird implementation (esp. the code outside the loop itself)
#include <iostream>
#include <vector>
using namespace std;
//Find a sorted subsequence of size 3 in one pass, linear time
//returns an empty list on not-found
vector<int> find3IncreasingNumbers(int * arr, int n)
{
int min_so_far = arr[0];
int c_low, c_mid; // candidates
bool have_candidates = false;
for(int i = 1; i < n; ++i) {
if(arr[i] <= min_so_far) // less-or-equal prevents values == min from ending up as mid candidates, without a separate else if()continue;
min_so_far = arr[i];
else if(!have_candidates || arr[i] <= c_mid) {
// If any sequence exists with a middle-numbers we've already seen (and that we haven't already finished)
// then one exists involving these candidates
c_low = min_so_far;
c_mid = arr[i];
have_candidates = true;
} else {
// have candidates and arr[i] > c_mid
return vector<int> ( { c_low, c_mid, arr[i] } );
}
}
return vector<int>(); // not-found
}
int main()
{
int array_num = 1;
// The code in this macro was in the original I forked. I just put it in a macro. Starting from scratch, I might make it a function.
#define TRYFIND(...) do { \
int arr[] = __VA_ARGS__ ; \
vector<int> resultTriple = find3IncreasingNumbers(arr, sizeof(arr)/sizeof(arr[0])); \
if(resultTriple.size()) \
cout<<"Result of arr" << array_num << ": " <<resultTriple[0]<<" "<<resultTriple[1]<<" "<<resultTriple[2]<<endl; \
else \
cout << "Did not find increasing triple in arr" << array_num << "." <<endl; \
array_num++; \
}while(0)
TRYFIND( {12, 11, 10, 5, 6, 2, 30} );
TRYFIND( {1, 2, 3, 4} );
TRYFIND( {4, 3, 1, 2} );
TRYFIND( {12, 1, 11, 10, 5, 4, 3} );
TRYFIND( {12, 1, 11, 10, 5, 4, 7} );
TRYFIND( {12, 11, 10, 5, 2, 4, 1, 3} );
TRYFIND( {12, 11, 10, 5, 2, 4, 1, 6} );
TRYFIND( {5,13,6,10,3,7,2} );
TRYFIND( {1, 5, 1, 5, 2, 2, 5} );
TRYFIND( {1, 5, 1, 5, 2, 1, 5} );
TRYFIND( {2, 3, 1, 4} );
TRYFIND( {3, 1, 2, 4} );
TRYFIND( {2, 4} );
return 0;
}
Making a CPP macro which can take an initializer-list as a parameter is ugly:
Is it possible to pass a brace-enclosed initializer as a macro parameter?
It was very much worth it to be able to add new test-cases easily, though, without editing arr4 to arr5 in 4 places.
I posted another approach to resolve it here.
#include<stdio.h>
// A function to fund a sorted subsequence of size 3
void find3Numbers(int arr[], int n)
{
int max = n-1; //Index of maximum element from right side
int min = 0; //Index of minimum element from left side
int i;
// Create an array that will store index of a smaller
// element on left side. If there is no smaller element
// on left side, then smaller[i] will be -1.
int *smaller = new int[n];
smaller[0] = -1; // first entry will always be -1
for (i = 1; i < n; i++)
{
if (arr[i] < arr[min])
{
min = i;
smaller[i] = -1;
}
else
smaller[i] = min;
}
// Create another array that will store index of a
// greater element on right side. If there is no greater
// element on right side, then greater[i] will be -1.
int *greater = new int[n];
greater[n-1] = -1; // last entry will always be -1
for (i = n-2; i >= 0; i--)
{
if (arr[i] > arr[max])
{
max = i;
greater[i] = -1;
}
else
greater[i] = max;
}
// Now find a number which has both a greater number on
// right side and smaller number on left side
for (i = 0; i < n; i++)
{
if (smaller[i] != -1 && greater[i] != -1)
{
printf("%d %d %d", arr[smaller[i]],
arr[i], arr[greater[i]]);
return;
}
}
// If we reach number, then there are no such 3 numbers
printf("No such triplet found");
return;
}
// Driver program to test above function
int main()
{
int arr[] = {12, 11, 10, 5, 6, 2, 30};
int n = sizeof(arr)/sizeof(arr[0]);
find3Numbers(arr, n);
return 0;
}
Just for fun:
In JAVA:
List<Integer> OrderedNumbers(int[] nums){
List<Integer> res = new LinkedList<>();
int n = nums.length;
//if less then 3 elements, return the empty list
if(n<3) return res;
//run 1 forloop to determine local min and local max for each index
int[] lMin = new int[n], lMax = new int[n];
lMin[0] = nums[0]; lMax[n-1] = nums[n-1];
for(int i=1; i<n-1; i++){
lMin[i] = Math.min(lMin[i-1], nums[i]);
lMax[n-i-1] = Math.max(lMax[n-i],nums[n-i-1]);
}
//if a condition is met where min(which always comes before nums[i] and max) < nums[i] < max, add to result set and return;
for(int i=1; i<n-1; i++){
if(lMin[i]<nums[i] && nums[i]<lMax[i]){
res.add(lMin[i]);
res.add(nums[i]);
res.add(lMax[i]);
return res;
}
}
return res;
}
This problem is very similar to computing the longest increasing subsequence, with the constraint that size of this subsequence must necessarily be equal to three. The LIS problem (with O(nlog(n)) solution) can easily be modified for this specific problem. This solution has O(n) single pass complexity with O(1) space.
This solution requires that only unique elements occur in the list. We use an online solution. As we encounter any new element, it has the potential to extend the present most optimum subsequence or start a new subsequence. In this case, as the maximum length of the increasing subsequence is three, any new element currently being processed can either extend a sequence of size 2 to 3 or a sequence of size 1 to 2. So we maintain active lists containing the most optimum elements.
In this particular problem, the maximum number of active lists we have to maintain are 2 - one of size 2 and another of size 1. As soon as we hit a list with size 3, we have our answer. We make sure each active list terminates with minimum number. For more detailed explanation of this idea, refer this.
At any point of time in the online solution, these two active lists will store the most efficient values of the list - the end of the list will be smallest element that can be placed there. Suppose the two lists are:
Size 2 list => [a,b]
Size 1 list => [c]
The initial list can be easily written (refer to the code below). Suppose the next number to be entered is d. Then cases (cascading in execution) are as follows:
Case 1: d > b.
We have our answer in this case, as a < b < d.
Case 2: b > d > a. In this the list of size 2 can be optimally represented by having end as d instead of b, as every element occurring after d greater than b will also be greater than d. So we replace b by d.
Case 3: d < c. As Case 1 and 2 fails, it automatically implies that d < a. In such a case, it may start a new list with size one. The list with size one is compared to get the most efficient active list. If this case is true, we replace c by d.
Case 4: Otherwise. This case implies that d < b and c < d. In such a case, the list of size 2 is inefficient. So we replace [a, b] by [c, d].
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int two_size_first;
int two_size_mid;
int one_size;
int end_index;
vector<int> arr;
Solution(int size) {
end_index = two_size_mid = two_size_first = one_size = -1;
int temp;
for(int i=0; i<size; i++) {
cin >> temp;
arr.push_back(temp);
}
}
void solve() {
if (arr.size() < 3)
return;
one_size = two_size_first = arr[0];
two_size_mid = INT_MAX;
for(int i=1; i<arr.size(); i++) {
if(arr[i] > two_size_mid) {
end_index = i;
return;
}
else if (two_size_first < arr[i] && arr[i] < two_size_mid) {
two_size_mid = arr[i];
}
else if (one_size > arr[i]) {
one_size = arr[i];
}
else {
two_size_first = one_size;
two_size_mid = arr[i];
}
}
}
void result() {
if (end_index != -1) {
cout << two_size_first << " " << two_size_mid << " " << arr[end_index] << endl;
}
else {
cout << "No such sequence found" << endl;
}
}
};
int main(int argc, char const *argv[])
{
int size;
cout << "Enter size" << endl;
cin >> size;
cout << "Enter " << size << " array elements" << endl;
Solution solution(size);
solution.solve();
solution.result();
return 0;
}
My approach - O(N) time, two passes, O(1) space, with two variables used.
For each element of the array we visit, we maintain the minimum possible value to its left (to check whether this element may be the middle element) and also keep a record of the minimum middle element to its left (to check whether this element may be a candidate third element, or whether it may form a middle element with a lower value than found so far). Initialise min-so-far and middle-so-far to INT_MAX.
For each element we thus have to check:
If the array element is greater than the minimum middle element so far, then this array element is the answer, with it as the third element and the min middle element as the mid element (we will have to search for the first element afterwards in one more pass).
Else if the array element is greater than the minimum so far, then this element could be a candidate middle element; now check if it is less than the current middle element, and if so, update the current middle element.
Else if the array element is less than the minimum so far, then update the minimum so far with arr[i].
#include <bits/stdc++.h>
using namespace std;
int main()
{
int i,j,k,n;
cin >> n;
int arr[n];
for(i = 0;i < n;++i)
cin >> arr[i];
int m = INT_MAX,sm = INT_MAX,smi;// m => minimum so far found to left
for(i = 0;i < n;++i)// sm => smallest middle element found so far to left
{
if(arr[i]>sm){break;}// This is the answer
else if(arr[i] < m ){m = arr[i];}
else if(arr[i] > m){if(arr[i]<sm){sm = arr[i];smi = i;}}
else {;}
}
if((i < n)&&(arr[i]>sm))
{
for(j = 0;j < smi;++j){if(arr[j] < sm){cout << arr[j] << " ";break;}}
cout << sm << " " << arr[i]<< endl;
}
else
cout << "Such Pairs Do Not Exist" << endl;
return 0;
}
Here is my O(n) solution with O(1) space complexity:
Just a function which returns a vector consisting of the three values (if they exist):
vector<int> find3Numbers(vector<int> A, int N)
{
int first=INT_MAX,second=INT_MAX,third=INT_MAX,i,temp=-1;
vector<int> ans;
for(i=0;i<N;i++)
{
if(first!=INT_MAX&&second!=INT_MAX&&third!=INT_MAX)
{
ans.push_back(first);
ans.push_back(second);
ans.push_back(third);
return ans;
}
if(A[i]<=first)
{
if(second!=INT_MAX)
{
if(temp==-1)
{
temp=first;
}
first=A[i];
}
else
{
first=A[i];
}
}
else if(A[i]<=second)
{
second=A[i];
temp=-1;
}
else
{
if(temp!=-1)
{
first=temp;
}
third=A[i];
}
}
if(first!=INT_MAX&&second!=INT_MAX&&third!=INT_MAX)
{
ans.push_back(first);
ans.push_back(second);
ans.push_back(third);
return ans;
}
return ans;
}
Here is an O(n) time and O(1) space complexity solution for this problem:
bool increasingTriplet(vector<int>& a) {
int i,n=a.size(),first=INT_MAX,second=INT_MAX;
if(n<3)
return false;
for(i=0;i<n;i++)
{
if(a[i]<=first)
first = a[i];
else if(a[i]<=second)
second = a[i];
else
return true;
}
return false;
}
This function returns true if there exists a triple of elements in the array that are in sorted increasing order.
You can also modify this function to print all 3 elements or their indexes. Just keep track of their indexes as well, along with the variables first and second, as sketched below.
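For example, here is a hedged Java sketch of that modification (my own variable names, not from the answer above). The subtlety is that first may keep moving to the right after second has been chosen, so you keep a separate copy of the index of the element that was smaller than second at the moment second was set:

// Returns {i, j, k} with i < j < k and a[i] < a[j] < a[k], or an empty array if none exists.
static int[] increasingTripletIndexes(int[] a) {
    int first = Integer.MAX_VALUE, second = Integer.MAX_VALUE;
    int firstIdx = -1, secondIdx = -1, firstBeforeSecondIdx = -1;
    for (int i = 0; i < a.length; i++) {
        if (a[i] <= first) {
            first = a[i];
            firstIdx = i;
        } else if (a[i] <= second) {
            second = a[i];
            secondIdx = i;
            firstBeforeSecondIdx = firstIdx;   // the smaller element that makes `second` valid
        } else {
            return new int[]{firstBeforeSecondIdx, secondIdx, i};
        }
    }
    return new int[0];   // no increasing triple
}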
My solution below.
public boolean increasingTriplet(int[] nums) {
int min1 = Integer.MAX_VALUE;
int min2 = Integer.MAX_VALUE;
for (int i =0; i<nums.length; i++) {
if (nums[i]<min1) {
min1 = nums[i];
} else if (nums[i]<min2 && nums[i]>min1) {
min2=nums[i];
} else if (nums[i]>min2) {
return true;
}
}
return false;
}
Try to create two variables:
1. index_sequence_length_1 = index i such that
a[i] is the minimal number
2. index_sequence_length_2 = index j such that
there is an index i < j with a[i] < a[j] and a[j] is minimal
Iterate over the whole array and update these variables in each iteration.
If you iterate over an element that is greater than a[index_sequence_length_2], then you have found your sequence.
Sorry, I couldn't resist solving the puzzle...
Here is my solution.
// array indices
int i = -1, j = -1, k = -1;
// values at those indices
int iv = 0, jv = 0, kv = 0;
for (int l = 0; l < a.length; l++) {
    // if there is a value greater than the biggest value,
    // shift all values from k to i
    if (a[l] > kv || j == -1 || i == -1) {
        i = j;
        iv = jv;
        j = k;
        jv = kv;
        kv = a[l];
        k = l;
    }
    if (iv < jv && jv < kv && i < j && j < k) {
        break;
    }
}
Iterate once and done:
public static int[] orderedHash(int[] A){
int low=0, mid=1, high=2;
for(int i=3; i<A.length; i++){
if(A[high]>A[mid] && A[mid]>A[low])
break;
if(A[low]>A[i])
low=mid=high=i;
else if(low == mid && mid == high)
mid = high = i;
else if(mid == high){
if(A[high]<A[i])
high = i;
else
mid = high = i;
}
else if(A[mid]<A[i])
high = i;
else if( A[high]<A[i]){
mid = high;
high =i;
}
else
mid=high=i;
}
return new int[]{A[low],A[mid],A[high]};
}//
Then test with main:
public static void main(String[] args) {
int[][] D = {{1, 5, 5, 3, 2, 10},
{1, 5, 5, 6, 2, 10},
{1, 10, 5, 3, 2, 6, 12},
{1, 10, 5, 6, 8, 12, 1},
{1, 10, 5, 12, 1, 2, 3, 40},
{10, 10, 10, 3, 4, 5, 7, 9}};
for (int[] E : D) {
System.out.format("%s GIVES %s%n", Arrays.toString(E), Arrays.toString(orderedHash(E)));
}
}
What if you build a max-heap O(n) and then do Extract-Max O(1) 3 times?
Here is a solution with only one iteration.
I am using a stack to compute, for each index k, whether there exist two other indices i & j such that a[i] < a[j] < a[k].
bool f(vector<int> a) {
int n = a.size();
stack<int> s;
for (int i = 0; i < n; ++i)
{
while(!s.empty() and a[s.top()]>=a[i]){
s.pop();
}
if (s.size()>=2) // s.size()>=k-1
{
return 1;
}
s.push(i);
}
return 0;
}
An important thing is that we can extend this approach to M such indices in the general case, instead of just 3 indices.