Interleave array in constant space

Suppose we have an array
a1, a2, ..., an, b1, b2, ..., bn.
The goal is to change this array to
a1, b1, a2, b2, ..., an, bn in O(n) time and in O(1) space.
In other words, we need a linear-time algorithm to modify the array in place, with no more than a constant amount of extra storage.
How can this be done?

This is the sequence and notes I worked out with pen and paper. I think it, or a variation, will hold for any larger n.
Each line below is one step: ( ) marks what is being moved in this step and [ ] marks what was moved in the previous step. The array itself is used as storage, and two pointers (one for L and one for N) are required to determine what to move next. L means "letter line" and N means "number line" (what is moved).
A B C D 1 2 3 4
L A B C (D) 1 2 3 4 First is L, no need to move last N
N A B C (3) 1 2 [D] 4
L A B (C) 2 1 [3] D 4
N A B 1 (2) [C] 3 D 4
L A (B) 1 [2] C 3 D 4
N A (1) [B] 2 C 3 D 4
A [1] B 2 C 3 D 4 Done, no need to move A
Note the varying "pointer jumps": the L pointer always decrements by 1 (as it cannot be eaten into faster than that), but the N pointer jumps according to whether it "replaced itself" (landed in its own spot: jump down two) or swapped something in (no jump, so the next something gets its turn).

This problem isn't as easy as it seems, but after some thought, the algorithm to accomplish it isn't too bad. You'll notice the first and last elements are already in place, so we don't need to worry about them. We keep a left index variable pointing at the first item in the first half of the array that needs to be changed, and a right index variable pointing at the first item in the second half of the array that needs to be changed. Now all we do is swap the item at the right index down one position at a time until it reaches the left index item. Then we increment the left index by 2 and the right index by 1, and repeat until the indexes overlap or the left goes past the right index (the right index will always end on the last index of the array). We increment the left index by two every time because the item at left + 1 has already naturally fallen into place.
Pseudocode
1. Set the left index to 1
2. Set the right index to the middle (array length / 2)
3. Swap the item at the right index with the item directly preceding it until it replaces the item at the left index
4. Increment the left index by 2
5. Increment the right index by 1
6. Repeat steps 3 through 5 until the left index becomes greater than or equal to the right index
Interleaving algorithm in C#
protected void Interleave(int[] arr)
{
    int left = 1;
    int right = arr.Length / 2;
    int temp;
    while (left < right)
    {
        for (int i = right; i > left; i--)
        {
            temp = arr[i];
            arr[i] = arr[i - 1];
            arr[i - 1] = temp;
        }
        left += 2;
        right += 1;
    }
}
This algorithm uses O(1) storage (with the temp variable, which could be eliminated using the addition/subtraction swap technique). Note, though, that the repeated shifting means it performs on the order of n^2 swaps in total, so while the space is constant, the running time is O(n^2) rather than O(n). Perhaps someone can further explore its runtime analysis.

First, the theory: Rearrange the elements in 'permutation cycles'. Take an element and place it at its new position, displacing the element that is currently there. Then you take that displaced element and put it in its new position. This displaces yet another element, so rinse and repeat. If the element displaced belongs to the position of the element you first started with, you have completed one cycle.
Actually, yours is a special case of the question I asked here, which was: How do you rearrange an array to any given order in O(N) time and O(1) space? In my question, the rearranged positions are described by an array of numbers, where the number at the nth position specifies the index of the element in the original array.
However, you don't have this additional array in your problem, and allocating it would take O(N) space. Fortunately, we can calculate the value of any element in this array on the fly, like this:
int rearrange_pos(int x) {
    if (x % 2 == 0) return x / 2;
    else return (x - 1) / 2 + n; // where n is half the size of the total array
}
I won't duplicate the rearranging algorithm itself here; it can be found in the accepted answer for my question.
Edit: As Jason has pointed out, the answer I linked to still needs to allocate an array of bools, making it O(N) space. This is because a permutation can be made up of multiple cycles. I've been trying to eliminate the need for this array for your special case, but without success; there doesn't seem to be any usable pattern. Maybe someone else can help you here.
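For reference, here is a minimal C sketch of the cycle-following idea under discussion. The names (new_pos, rearrange_by_cycles) are mine; new_pos is just the inverse of rearrange_pos above (it maps an element's old index to its new index), and the done[] bitmap is exactly the O(N) bookkeeping the edit refers to:
#include <stdbool.h>
#include <stdlib.h>

/* Inverse of rearrange_pos above: maps an element's old index to its new
   index for the a1..an b1..bn -> a1 b1 a2 b2 ... interleave. */
static int new_pos(int x, int n) {          /* n = half the total length */
    return x < n ? 2 * x : 2 * (x - n) + 1;
}

/* Follow permutation cycles as described: place each element at its new
   position, then chase the displaced element until the cycle closes. */
static void rearrange_by_cycles(int a[], int n) {  /* array length is 2*n */
    bool *done = calloc(2 * n, sizeof *done);      /* the O(N) bookkeeping */
    for (int start = 0; start < 2 * n; start++) {
        if (done[start]) continue;
        int pos = start, val = a[start];
        do {
            int next = new_pos(pos, n);     /* where val belongs */
            int displaced = a[next];
            a[next] = val;                  /* place val, remember the victim */
            done[next] = true;
            val = displaced;
            pos = next;
        } while (pos != start);
    }
    free(done);
}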

This is called the in-place in-shuffle problem. Here is an implementation in C++ based on here.
#include <cassert>
#include <algorithm>  // for std::swap
using std::swap;

int _LookUp(int N);
void _ShiftN(int Data[], int Len, int N);
void _PerfectShuffle(int Data[], int Lenth);

void in_place_in_shuffle(int arr[], int length)
{
    assert(arr && length > 0 && !(length & 1));
    // shuffle to {5, 0, 6, 1, 7, 2, 8, 3, 9, 4}
    int i, startPos = 0;
    while (startPos < length)
    {
        i = _LookUp(length - startPos);
        _ShiftN(&arr[startPos + (i - 1) / 2], (length - startPos) / 2, (i - 1) / 2);
        _PerfectShuffle(&arr[startPos], i - 1);
        startPos += (i - 1);
    }
    // local swap to {0, 5, 1, 6, 2, 7, 3, 8, 4, 9}
    for (int i = 0; i < length; i += 2)
        swap(arr[i], arr[i + 1]);
}

// cycle leader: follow one permutation cycle of the perfect shuffle
void _Cycle(int Data[], int Lenth, int Start)
{
    int Cur_index, Temp1, Temp2;
    Cur_index = (Start * 2) % (Lenth + 1);
    Temp1 = Data[Cur_index - 1];
    Data[Cur_index - 1] = Data[Start - 1];
    while (Cur_index != Start)
    {
        Temp2 = Data[(Cur_index * 2) % (Lenth + 1) - 1];
        Data[(Cur_index * 2) % (Lenth + 1) - 1] = Temp1;
        Temp1 = Temp2;
        Cur_index = (Cur_index * 2) % (Lenth + 1);
    }
}

// reverse a block in place
void _Reverse(int Data[], int Len)
{
    int i, Temp;
    for (i = 0; i < Len / 2; i++)
    {
        Temp = Data[i];
        Data[i] = Data[Len - i - 1];
        Data[Len - i - 1] = Temp;
    }
}

// rotate the block right by N positions, using three reversals
void _ShiftN(int Data[], int Len, int N)
{
    _Reverse(Data, Len - N);
    _Reverse(&Data[Len - N], N);
    _Reverse(Data, Len);
}

// perfect shuffle; requires Lenth == 3^k - 1
void _PerfectShuffle(int Data[], int Lenth)
{
    int i = 1;
    if (Lenth == 2)
    {
        i = Data[Lenth - 1];
        Data[Lenth - 1] = Data[Lenth - 2];
        Data[Lenth - 2] = i;
        return;
    }
    while (i < Lenth)
    {
        _Cycle(Data, Lenth, i);
        i = i * 3;
    }
}

// find the largest power of 3 that is <= N+1
int _LookUp(int N)
{
    int i = 3;
    while (i <= N + 1) i *= 3;
    if (i > 3) i = i / 3;
    return i;
}
Test:
int arr[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
int length = sizeof(arr)/sizeof(int);
in_place_in_shuffle(arr, length);
After this, arr[] will be {0, 5, 1, 6, 2, 7, 3, 8, 4, 9}.

If you can transform the array into a linked-list first, the problem becomes trivial.


How to insert an element starting the iteration from the beginning of the array in C?

I have seen insertion of an element into an array starting the iteration from the rear end. But I wonder if it is possible to insert from the front.
I finally figured out a way. Here goes the code:
#include <stdio.h>

int main()
{
    int number = 5; // element to be inserted
    int array[10] = {1, 2, 3, 4, 6, 7, 8, 9};
    int ele, temp;
    int pos = 4; // position to insert
    printf("Array before insertion:\n");
    for (int i = 0; i < 10; i++)
    {
        printf("%d ", array[i]);
    }
    puts("");
    for (int i = pos; i < 10 - 1; i++) // stop before the last index so array[i + 1] stays in bounds
    {
        if (i == pos) // first element
        {
            ele = array[i + 1];
            array[i + 1] = array[i];
        }
        else // rest of the elements
        {
            temp = array[i + 1];
            array[i + 1] = ele;
            ele = temp;
        }
    }
    array[pos] = number; // element to be inserted
    printf("Array after insertion:\n");
    for (int i = 0; i < 10; i++)
    {
        printf("%d ", array[i]);
    }
    return 0;
}
The output looks like:
Array before insertion:
1 2 3 4 6 7 8 9 0 0
Array after insertion:
1 2 3 4 5 6 7 8 9 0
In C, arrays have a "native" built-in implementation based upon the address (aka pointer) of the first element and the [] operator for element addressing.
Once an array has been allocated, its actual size is not automatically handled or checked: the code needs to make sure boundaries are not trespassed.
Moreover, in C there is no default (aka empty) value for uninitialized variables; a partially initialized array, however, has its remaining elements set to zero.
Still, in C there is no such thing as insertion, appending, or removal of an array element. You can simply refer to the n-th (with n starting at 0) array element by using the [] operator.
So, if you have an array, you cannot insert a new item at its n-th position. You can only read or (over)write any of its items.
Any other operation, like inserting or removing, requires ad-hoc code which basically boils down to shifting the array's elements forward (to make room for insertion) or backward (to remove one).
This is the C-language nature and should not be seen as a limitation: any other language allowing those array operations must have a lower-level hidden implementation or a non-trivial data structure to implement the arrays.
This means, in C, that while keeping the memory usage to a bare minimum, those array operations require some time-consuming implementation, like the item-shifting one.
You can then trade off memory usage against time usage and get some gains in overall efficiency by using, for example, singly- and doubly-linked lists. You lose some memory for link pointer(s) in favor of faster insertion and removal operations. This depends mostly upon the implementation goals.
Finally, to get to the original question, an actual answer requires some extra details about the memory vs time trade off that can be done to achieve the goal.
The solution depicted by @Krishna Acharya is a simple shift-based solution with no boundary check. A very simple and somewhat naive implementation.
A final note. The 0s shown by Krishna's code at the end of the array are actually guaranteed here: when an array is declared with a partial initializer list, C zero-initializes the remaining elements. Had the array been declared without any initializer, those trailing values would be indeterminate. Writing it as
int array[10] = {1, 2, 3, 4, 6, 7, 8, 9, 0, 0};
merely makes the two unused zero elements explicit.
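For completeness, here is a minimal sketch of the shift-based insertion described above, written the way it is usually done in C, with memmove and an explicit boundary check. The function name and the used/capacity split are illustrative, not from the original code:
#include <string.h>

/* Insert value at index pos in arr[0..*used-1]; capacity is the allocated
   size. Returns 1 on success, 0 if there is no room or pos is invalid. */
int array_insert(int arr[], int capacity, int *used, int pos, int value) {
    if (*used >= capacity || pos < 0 || pos > *used)
        return 0;                              /* boundary check */
    memmove(&arr[pos + 1], &arr[pos],          /* shift the tail right by one */
            (*used - pos) * sizeof arr[0]);
    arr[pos] = value;
    (*used)++;
    return 1;
}
With the example above this would be called as array_insert(array, 10, &used, 4, 5) with used == 8, giving 1 2 3 4 5 6 7 8 9 and leaving the final 0 untouched.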

Sum of distance between every pair of same element in an array

I have an array [a0, a1, ..., an]. I want to calculate the sum of the distances between every pair of equal elements.
1) The first element of the array will always be zero.
2) The second element of the array will be greater than zero.
3) No two consecutive elements can be the same.
4) The size of the array can be up to 10^5 + 1 and the elements can be from 0 to 10^7.
For example, if the array is [0,2,5,0,5,7,0] then the distance between the first 0 and the second 0 is 2, the distance between the first 0 and the third 0 is 5, and the distance between the second 0 and the third 0 is 2. The distance between the first 5 and the second 5 is 1. Hence the sum of distances between equal elements is 2 + 5 + 2 + 1 = 10.
For this I tried to build a formula: for every element having occurrence more than 1 (0-based indexing, and the first element is always zero) --> sum = sum + (lastIndex - firstIndex - 1) * (NumberOfOccurrences - 1).
If the occurrence count of the element is odd, subtract 1 from the sum; otherwise leave it as it is. But this approach is not working in every case.
It does work, however, if the array is [0,5,7,0] or [0,2,5,0,5,7,0,1,2,3,0].
Can you suggest another efficient approach or formula?
Edit: This problem is not part of any coding contest; it's just a small part of a bigger problem.
My method requires space that scales with the number of possible values for elements, but has O(n) time complexity.
I've made no effort to check that the sum doesn't overflow an unsigned long, I just assume that it won't. Same for checking that any input values are in fact no more than max_val. These are details that would have to be addressed.
For each possible value, total_distance keeps track of how much would be added to the sum when the next instance of that value is encountered. instances_so_far keeps track of how many instances of a value have already been seen; that count is how much total_distance would grow per step. To make this more efficient, the last index at which a value was encountered is also tracked, so that total_distance only needs updating when that particular value is encountered, instead of having nested loops that add every value at every step.
#include <stdio.h>
#include <stddef.h>

// #define max_val 15
#define max_val 10000000  // must be a compile-time constant for the global arrays below

unsigned long instances_so_far[max_val + 1] = {0};
unsigned long total_distance[max_val + 1] = {0};
unsigned long last_index_encountered[max_val + 1];

// void print_array(unsigned long *array, size_t len) {
//     printf("{");
//     for (size_t i = 0; i < len; ++i) {
//         printf("%lu,", array[i]);
//     }
//     printf("}\n");
// }

unsigned long get_sum(unsigned long *array, size_t len) {
    unsigned long sum = 0;
    for (size_t i = 0; i < len; ++i) {
        if (instances_so_far[array[i]] >= 1) {
            total_distance[array[i]] += (i - last_index_encountered[array[i]]) * instances_so_far[array[i]] - 1;
        }
        sum += total_distance[array[i]];
        instances_so_far[array[i]] += 1;
        last_index_encountered[array[i]] = i;
        // printf("inst ");
        // print_array(instances_so_far, max_val + 1);
        // printf("totd ");
        // print_array(total_distance, max_val + 1);
        // printf("encn ");
        // print_array(last_index_encountered, max_val + 1);
        // printf("sums %lu\n", sum);
        // printf("\n");
    }
    return sum;
}

unsigned long test[] = {0,1,0,2,0,3,0,4,5,6,7,8,9,10,0};

int main(void) {
    printf("%lu\n", get_sum(test, sizeof(test) / sizeof(test[0])));
    return 0;
}
I've tested it with a few of the examples here, and gotten the answers I expected.
I had to use static storage for the arrays because they overflowed the stack if put there.
I've left in the commented-out code I used for debugging, it's helpful to understand what's going on, if you reduce max_val to a smaller number.
Please let me know if you find a counter-example that fails.
Here is Python 3 code for your problem. This works on all the examples given in your question and in the comments--I included the test code.
This works by looking at how each consecutive pair of occurrences of a repeated element adds to the overall sum of distances. If an element occurs at 6 locations, the pairs are:
x x x x x x      The repeated element's locations in the array
--               First, consecutive pairs
  --
    --
      --
        --
-----            Now, pairs that have one element inside
  -----
    -----
      -----
--------         Now, pairs that have two elements inside
  --------
    --------
-----------      Now, pairs that have three elements inside
  -----------
--------------   Now, pairs that have four elements inside
If we look down between each consecutive pair, we see that it adds to the overall sum of all pairs:
5 8 9 8 5
And if we look at the differences between those values we get
3 1 -1 -3
Now, if we use my preferred definition of "distance" for a pair, namely the difference of their indices, we can use those multiplicities for consecutive pairs to calculate the overall sum of distances for all pairs. But since your definition is not mine, we calculate the sum for my definition and then adjust it for yours.
This code makes one pass through the original array to get the occurrences for each element value in the array, then another pass through those distinct element values. (I used the pairwise routine to avoid another pass through the array.) That makes my algorithm O(n) in time complexity, where n is the length of the array. This is much better than the naive O(n^2). Since my code builds an array of the repeated elements, once per unique element value, this has space complexity of at worst O(n).
import collections
import itertools

def pairwise(iterable):
    """s -> (s0,s1), (s1,s2), (s2, s3), ..."""
    a, b = itertools.tee(iterable)
    next(b, None)
    return zip(a, b)

def sum_distances_of_pairs(alist):
    # Make a dictionary giving the indices for each element of the list.
    element_ndxs = collections.defaultdict(list)
    for ndx, element in enumerate(alist):
        element_ndxs[element].append(ndx)
    # Sum the distances of pairs for each element, using my def of distance
    sum_of_all_pair_distances = 0
    for element, ndx_list in element_ndxs.items():
        # Filter out elements not occurring more than once and count the rest
        if len(ndx_list) < 2:
            continue
        # Sum the distances of pairs for this element, using my def of distance
        sum_of_pair_distances = 0
        multiplicity = len(ndx_list) - 1
        delta_multiplicity = multiplicity - 2
        for ndx1, ndx2 in pairwise(ndx_list):
            # Update the contribution of this consecutive pair to the sum
            sum_of_pair_distances += multiplicity * (ndx2 - ndx1)
            # Prepare for the next consecutive pair
            multiplicity += delta_multiplicity
            delta_multiplicity -= 2
        # Adjust that sum of distances for the desired definition of distance
        cnt_all_pairs = len(ndx_list) * (len(ndx_list) - 1) // 2
        sum_of_pair_distances -= cnt_all_pairs
        # Add that sum for this element into the overall sum
        sum_of_all_pair_distances += sum_of_pair_distances
    return sum_of_all_pair_distances
assert sum_distances_of_pairs([0, 2, 5, 0, 5, 7, 0]) == 10
assert sum_distances_of_pairs([0, 5, 7, 0]) == 2
assert sum_distances_of_pairs([0, 2, 5, 0, 5, 7, 0, 1, 2, 3, 0]) == 34
assert sum_distances_of_pairs([0, 0, 0, 0, 1, 2, 0]) == 18
assert sum_distances_of_pairs([0, 1, 0, 2, 0, 3, 4, 5, 6, 7, 8, 9, 0, 10, 0]) == 66
assert sum_distances_of_pairs([0, 1, 0, 2, 0, 3, 0, 4, 5, 6, 7, 8, 9, 10, 0]) == 54

Convert Each Element Of Array with max element [duplicate]

Given an array A with n integers. In one turn one can apply the following operation to any consecutive subarray A[l..r]: assign to all A[i] (l <= i <= r) the median of the subarray A[l..r].
Let max be the maximum integer of A. We want to know the minimum number of operations needed to change A to an array of n integers, each with value max.
For example, let A = [1, 2, 3]. We want to change it to [3, 3, 3]. We can do this in two operations: first for subarray A[2..3] (after that A equals [1, 3, 3]), then the operation on A[1..3].
Also, the median is defined for some array A as follows. Let B be the same array A, but sorted in non-decreasing order. The median of A is B[m] (1-based indexing), where m equals (n div 2) + 1. Here 'div' is an integer division operation. So, for a sorted array with 5 elements, the median is the 3rd element, and for a sorted array with 6 elements, it is the 4th element.
Since the maximum value of N is 30, I thought of brute-forcing the result. Could there be a better solution?
You can double the size of the subarray containing the maximum element in each iteration. After the first iteration, there is a subarray of size 2 containing the maximum. Then apply your operation to a subarray of size 4, containing those 2 elements, giving you a subarray of size 4 containing the maximum. Then apply to a size 8 subarray and so on. You fill the array in log2(N) operations, which is optimal. If N is 30, five operations is enough.
This is optimal in the worst case (i.e. when only one element is the maximum), since it sets the highest possible number of elements in each iteration.
Update 1: I noticed I messed up the 4s and 8s a bit. Corrected.
Update 2: here's an example. Array size 10, start state:
[6 1 5 9 3 2 0 7 4 8]
To get two nines, run op on subarray of size two containing the nine. For instance A[4…5] gets you:
[6 1 5 9 9 2 0 7 4 8]
Now run on size four subarray that contains 4…5, for instance on A[2…5] to get:
[6 9 9 9 9 2 0 7 4 8]
Now on subarray of size 8, for instance A[1…8], get:
[9 9 9 9 9 9 9 9 4 8]
Doubling now would get us 16 nines, but we have only 10 positions, so round off with A[1…10], get:
[9 9 9 9 9 9 9 9 9 9]
Update 3: since this is only optimal in the worst case, it is actually not an answer to the original question, which asks for a way of finding the minimal number of operations for all inputs. I misinterpreted the sentence about brute forcing to be about brute forcing with the median operations, rather than in finding the minimum sequence of operations.
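For illustration, here is a toy C sketch of this doubling strategy. The median operation itself is only simulated by a plain fill, which is valid because each chosen subarray contains at least half maximal values; the function name is made up:
#include <stdio.h>

/* Grow the block [lo..hi] that already holds the maximum: each turn,
   pick a subarray of twice the block's size that contains it. Such a
   subarray is at least half maximal values, so the real median
   operation would flood it with max; here we just fill it directly. */
void fill_with_max(int a[], int n, int max_pos) {
    int max = a[max_pos];
    int lo = max_pos, hi = max_pos, ops = 0;
    while (hi - lo + 1 < n) {
        int len = hi - lo + 1;
        int new_lo = lo - len;              /* try to double leftward */
        if (new_lo < 0) new_lo = 0;
        int new_hi = new_lo + 2 * len - 1;  /* subarray of size 2*len */
        if (new_hi > n - 1) new_hi = n - 1;
        for (int i = new_lo; i <= new_hi; i++)
            a[i] = max;                     /* simulated median operation */
        lo = new_lo;
        hi = new_hi;
        ops++;                              /* ends up ceil(log2(n)) in total */
    }
    printf("filled in %d operations\n", ops);
}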
This is the problem from the CodeChef Long Contest. Since the contest is already over, I am pasting the problem setter's approach (source: CC Contest Editorial Page):
"Any state of the array can be represented as a binary mask where each 1-bit means that the corresponding number is equal to the max, and 0 otherwise. You can run a DP with state R[mask] and O(n) transitions. You can prove (or just believe) that the number of states will not be big, of course if you run a good DP. The state of our DP will be the mask of numbers that are equal to max. Of course, it makes sense to use the operation only for a subarray [l; r] in which the number of 1-bits is at least as large as the number of 0-bits in the submask [l; r], because otherwise nothing will change. Also you should notice that if the left bound of your operation is l, it is good to make the operation only with the maximal possible r (this gives a number of transitions equal to O(n)). It was also useful for C++ coders to use a map structure to represent all states."
The C/C++ code is:
#include <cstdio>
#include <iostream>
using namespace std;

int bc[1 << 15];
const int M = (1 << 15) - 1;

void setMin(int& ret, int c)
{
    if (c < ret) ret = c;
}

void doit(int n, int mask, int currentSteps, int& currentBest)
{
    int numMax = bc[mask >> 15] + bc[mask & M];
    if (numMax == n) {
        setMin(currentBest, currentSteps);
        return;
    }
    if (currentSteps + 1 >= currentBest)
        return;
    if (currentSteps + 2 >= currentBest)
    {
        if (numMax * 2 >= n) {
            setMin(currentBest, 1 + currentSteps);
        }
        return;
    }
    if (numMax < (1 << currentSteps)) return;
    for (int i = 0; i < n; i++)
    {
        int a = 0, b = 0;
        int c = mask;
        for (int j = i; j < n; j++)
        {
            c |= (1 << j);
            if (mask & (1 << j)) b++;
            else a++;
            if (b >= a) {
                doit(n, c, currentSteps + 1, currentBest);
            }
        }
    }
}

int v[32];

void solveCase() {
    int n;
    scanf(" %d", &n);
    int maxElement = 0;
    for (int i = 0; i < n; i++) {
        scanf(" %d", v + i);
        if (v[i] > maxElement) maxElement = v[i];
    }
    int mask = 0;
    for (int i = 0; i < n; i++) if (v[i] == maxElement) mask |= (1 << i);
    int ret = 0, p = 1;
    while (p < n) {
        ret++;
        p *= 2;
    }
    doit(n, mask, 0, ret);
    printf("%d\n", ret);
}

int main() {
    for (int i = 0; i < (1 << 15); i++) {
        bc[i] = bc[i >> 1] + (i & 1);
    }
    int cases;
    scanf(" %d", &cases);
    while (cases--) solveCase();
}
The problem setter's approach has exponential complexity. It is fine for N = 30, but not so for larger sizes. I think it's more interesting to find a polynomial-time solution, and I found one with O(N^4) complexity.
This approach uses the fact that an optimal solution starts with some group of consecutive maximal elements and extends only this single group until the whole array is filled with maximal values.
To prove this fact, take 2 starting groups of consecutive maximal elements and extend each of them in the optimal way until they merge into one group. Suppose that group 1 needs X turns to grow to size M, group 2 needs Y turns to grow to the same size M, and on turn X + Y + 1 these groups merge. The result is a group of size at least M * 4. Now instead of turn Y for group 2, make an additional turn X + 1 for group 1. In this case group 1 has size at least M * 2 and group 2 has size at least M / 2 (even if we count initially maximal elements that might be included in step Y). After this change, on turn X + Y + 1 the merged group has size at least M * 4 from the first group's extension alone, plus at least one element from the second group. So extending a single group produces a larger group in the same number of steps (and if Y > 1, it even requires fewer steps). Since this works for equal group sizes (M), it will work even better for non-equal groups. This proof may be extended to the case of several groups (more than two).
To work with a single group of consecutive maximal elements, we need to keep track of only two values: the starting and ending positions of the group. This means it is possible to use a triangular matrix to store all possible groups, which allows a dynamic programming algorithm to be used.
Pseudo-code:
For each group of consecutive maximal elements in the original array:
    Mark the corresponding element in the matrix and clear the other elements
    For each matrix diagonal, starting with the one containing this element:
        For each marked element in this diagonal:
            Retrieve the current number of turns from this matrix element
            (use the indexes of this matrix element to initialize p1 and p2)
            p2 = end of the group
            p1 = start of the group
            Decrease p1 while it is possible to keep the median at the maximum value
            (now all values between p1 and p2 are assumed to be maximal)
            While p2 < N:
                Check if the number of maximal elements in the array is >= N/2
                If this is true, compare the current number of turns with the best result
                    and update it if necessary
                (an additional matrix with the number of maximal values between each pair
                    of points may be used to count the elements to the left of p1 and to
                    the right of p2)
                Look at position [p1, p2] in the matrix. Mark it, and if it contains a
                    larger number of turns, update it
                Repeat:
                    Increase p1 while it points to a maximal value
                    Increment p1 (to skip one non-maximal value)
                    Increase p2 while it is possible to keep the median at the maximum value
                while the median is not at the maximum value
To keep the algorithm simple, I didn't mention special cases when a group starts at position 0 or ends at position N, skipped initialization, and didn't make any optimizations.

Given an array, find out the next smaller element for each element

Given an array find the next smaller element in array for each element without changing the original order of the elements.
For example, suppose the given array is 4,2,1,5,3.
The resultant array would be 2,1,-1,3,-1.
I was asked this question in an interview, but I couldn't think of a solution better than the trivial O(n^2) one.
Any approach that I could think of, i.e. making a binary search tree or sorting the array, will distort the original order of the elements and hence lead to a wrong result.
Any help would be highly appreciated.
O(N) Algorithm
1. Initialize the output array to all -1s.
2. Create an empty stack of indexes of items we have visited in the input array but don't yet know the answer for in the output array.
3. Iterate over each element in the input array:
   3.1. Is it smaller than the item indexed by the top of the stack?
        Yes: It is the first such element to be so. Fill in the corresponding element in our output array, remove the item from the stack, and try again until the stack is empty or the answer is no.
        No: Continue to 3.2.
   3.2. Add this index to the stack. Continue iteration from 3.
Python implementation
def find_next_smaller_elements(xs):
    ys = [-1 for x in xs]
    stack = []
    for i, x in enumerate(xs):
        while len(stack) > 0 and x < xs[stack[-1]]:
            ys[stack.pop()] = x
        stack.append(i)
    return ys
>>> find_next_smaller_elements([4,2,1,5,3])
[2, 1, -1, 3, -1]
>>> find_next_smaller_elements([1,2,3,4,5])
[-1, -1, -1, -1, -1]
>>> find_next_smaller_elements([5,4,3,2,1])
[4, 3, 2, 1, -1]
>>> find_next_smaller_elements([1,3,5,4,2])
[-1, 2, 4, 2, -1]
>>> find_next_smaller_elements([6,4,2])
[4, 2, -1]
Explanation
How it works
This works because whenever we add an item to the stack, we know its value is greater than or equal to every element already in the stack. When we visit an element in the array, we know that if it's lower than any item in the stack, it must be lower than the last item in the stack, because the last item must be the largest. So we don't need to do any kind of search on the stack; we can just consider the last item.
Note: You can skip the initialization step so long as you add a final step to empty the stack and use each remaining index to set the corresponding output array element to -1. It's just easier in Python to initialize it to -1s when creating it.
Time complexity
This is O(N). The main loop clearly visits each index once. Each index is added to the stack exactly once and removed at most once.
Solving as an interview question
This kind of question can be pretty intimidating in an interview, but I'd like to point out that (hopefully) an interviewer isn't going to expect the solution to spring from your mind fully-formed. Talk them through your thought process. Mine went something like this:
Is there some relationship between the positions of numbers and their next smaller number in the array? Does knowing some of them constrain what the others might possibly be?
If I were in front of a whiteboard I would probably sketch out the example array and draw lines between the elements. I might also draw them as a 2D bar graph - horizontal axis being position in input array and vertical axis being value.
I had a hunch this would show a pattern, but no paper to hand. I think the diagram would make it obvious. Thinking about it carefully, I could see that the lines would not overlap arbitrarily, but would only nest.
Around this point, it occurred to me that this is incredibly similar to the algorithm Python uses internally to transform indentation into INDENT and DEDENT virtual tokens, which I'd read about before. See "How does the compiler parse the indentation?" on this page: http://www.secnetix.de/olli/Python/block_indentation.hawk However, it wasn't until I actually worked out an algorithm that I followed up on this thought and determined that it was in fact the same, so I don't think it helped too much. Still, if you can see a similarity to some other problem you know, it's probably a good idea to mention it, and say how it's similar and how it's different.
From here the general shape of the stack-based algorithm became apparent, but I still needed to think about it a bit more to be sure it would work okay for those elements that have no subsequent smaller element.
Even if you don't come up with a working algorithm, try to let your interviewer see what you're thinking about. Often it is the thought process more than the answer that they're interested in. For a tough problem, failing to find the best solution but showing insight into the problem can be better than knowing a canned answer but not being able to give it much analysis.
Start building a BST from the array end. For each value 'v', the answer is the last node where you went "right" on your way to inserting 'v', which you can easily keep track of in a recursive or iterative version.
UPDATE:
Going by your requirements, you can approach this in a linear fashion:
If every next element is smaller than the current element (e.g. 6 5 4 3 2 1) you can process all of them linearly without requiring any extra memory. The interesting case arises when you start getting jumbled elements (e.g. 4 2 1 5 3), in which case you need to remember their order as long as you don't get their 'smaller counterparts'.
A simple stack-based approach goes like this:
Push the first element (a[0]) onto a stack.
For each next element a[i], peek into the stack. If the value at peek() is greater than the one in hand, a[i], you have found the next smaller number for that stack element; keep popping elements as long as peek() > a[i], printing/storing the corresponding value for each.
Otherwise, simply push a[i] onto the stack.
In the end, the stack will contain those elements which never had a smaller value to their right. You can fill in -1 for them in your output.
e.g. A=[4, 2, 1, 5, 3];
stack: 4
a[i] = 2, Pop 4, Push 2 (you got result for 4)
stack: 2
a[i] = 1, Pop 2, Push 1 (you got result for 2)
stack: 1
a[i] = 5
stack: 1 5
a[i] = 3, Pop 5, Push 3 (you got result for 5)
stack: 1 3
1 and 3 don't have any smaller counterparts to their right, so store -1 for them.
Assuming you meant the first next element which is lower than the current element, here are 2 solutions:
1. Use sqrt(N) segmentation. Divide the array into sqrt(N) segments, each segment's length being sqrt(N). For each segment, calculate its minimum element using a loop. In this way, you have pre-calculated each segment's minimum element in O(N). Now, for each element, the next lower element can be in the same segment as it or in any of the subsequent segments. So, first check all the next elements in the current segment. If all are larger, then loop through the subsequent segments to find out which one has an element lower than the current element. If you couldn't find any, the result is -1. Otherwise, check every element of that segment to find the first one lower than the current element. Overall, the algorithm complexity is O(N*sqrt(N)) or O(N^1.5). You can achieve O(N log N) using a segment tree with a similar approach. (A sketch of the segmentation approach appears after solution 2 below.)
2. Sort the array ascending first (keeping the original positions of the elements as satellite data). Now, assuming each element of the array is distinct, for each element we will need to find the lowest original position on the left side of that element. This is a classic RMQ (Range Minimum Query) problem and can be solved in many ways, including an O(N) one. As we need to sort first, the overall complexity is O(N log N). You can learn more about RMQ in a TopCoder tutorial.
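Here is a rough C sketch of the sqrt(N) segmentation approach from solution 1 (illustrative and untuned; the names are mine):
/* bmin[b] caches each block's minimum; for each element, scan the tail of
   its own block, then jump block by block until one's cached minimum beats
   a[i], and scan only that single block. O(N*sqrt(N)) overall. */
void next_smaller_sqrt(const int a[], int res[], int n) {
    int bs = 1;
    while (bs * bs < n) bs++;               /* block size ~ sqrt(n) */
    int nblocks = (n + bs - 1) / bs;
    int bmin[nblocks];                      /* C99 VLA of block minima */
    for (int b = 0; b < nblocks; b++) {
        bmin[b] = a[b * bs];
        for (int i = b * bs + 1; i < n && i < (b + 1) * bs; i++)
            if (a[i] < bmin[b]) bmin[b] = a[i];
    }
    for (int i = 0; i < n; i++) {
        res[i] = -1;
        int b = i / bs;
        for (int j = i + 1; j < n && j < (b + 1) * bs; j++)
            if (a[j] < a[i]) { res[i] = a[j]; break; }   /* own block's tail */
        if (res[i] != -1) continue;
        for (int nb = b + 1; nb < nblocks; nb++)
            if (bmin[nb] < a[i]) {          /* first block that can answer */
                for (int j = nb * bs; j < n && j < (nb + 1) * bs; j++)
                    if (a[j] < a[i]) { res[i] = a[j]; break; }
                break;
            }
    }
}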
For some reason, I find it easier to reason about the "previous smaller element", aka "all nearest smaller elements"; applying it backward gives the "next smaller".
For the record, here is a Python implementation in O(n) time and O(1) space (i.e. without a stack), supporting negative values in the array:
def next_smaller(l):
    """ Return positions of next smaller items """
    res = [None] * len(l)
    for i in range(len(l) - 2, -1, -1):
        j = i + 1
        while j is not None and (l[j] > l[i]):
            j = res[j]
        res[i] = j
    return res

def next_smaller_elements(l):
    """ Return next smaller items themselves """
    res = next_smaller(l)
    return [l[i] if i is not None else None for i in res]
Here is the JavaScript code; this video explains the algorithm better.
function findNextSmallerElem(source) {
    let length = source.length;
    let outPut = [...Array(length)].map(() => -1);
    let stack = [];
    for (let i = 0; i < length; i++) {
        let stackTopVal = stack[stack.length - 1] && stack[stack.length - 1].val;
        // If stack is empty or current elem is greater than stack top
        if (!stack.length || source[i] > stackTopVal) {
            stack.push({ val: source[i], ind: i });
        } else {
            // While stack top is greater than current elem, keep popping
            while (source[i] < (stack[stack.length - 1] && stack[stack.length - 1].val)) {
                outPut[stack.pop().ind] = source[i];
            }
            stack.push({ val: source[i], ind: i });
        }
    }
    return outPut;
}
Output -
findNextSmallerElem([98,23,54,12,20,7,27])
[23, 12, 12, 7, 7, -1, -1]
Time complexity O(N), space complexity O(N).
Clean solution in Java, keeping the order of the array:
public static int[] getNGE(int[] a) {
    var s = new Stack<Pair<Integer, Integer>>();
    int n = a.length;
    var result = new int[n];
    s.push(Pair.of(0, a[0]));
    for (int i = 1; i < n; i++) {
        while (!s.isEmpty() && s.peek().v2 > a[i]) {
            var top = s.pop();
            result[top.v1] = a[i];
        }
        s.push(Pair.of(i, a[i]));
    }
    while (!s.isEmpty()) {
        var top = s.pop();
        result[top.v1] = -1;
    }
    return result;
}

static class Pair<K, V> {
    K v1;
    V v2;
    public static <K, V> Pair<K, V> of(K v1, V v2) {
        Pair p = new Pair();
        p.v1 = v1;
        p.v2 = v2;
        return p;
    }
}
Here is an observation that I think can be made into an O(n log n) solution. Suppose you have the answer for the last k elements of the array. What would you need in order to figure out the value for the element just before this? You can think of the last k elements as being split into a series of ranges, each of which starts at some element and continues forward until it hits a smaller element. These ranges must be in descending order, so you could think about doing a binary search over them to find the first interval smaller than that element. You could then update the ranges to factor in this new element.
Now, how best to represent this? The best way I've thought of is to use a splay tree whose keys are the elements defining these ranges and whose values are the index at which they start. You can then, in O(log n) amortized time, do a predecessor search to find the predecessor of the current element. This finds the earliest value smaller than the current one. Then, in amortized O(log n) time, insert the current element into the tree. This represents defining a new range from that element forward. To discard all ranges this supersedes, you then cut the right child of the new node (which, because this is a splay tree, is at the root) from the tree.
Overall, this does O(n) iterations of an O(log n) process for total O(n lg n).
Here is an O(n) algorithm using DP (two passes over the array, so really O(2n)):
int n = array.length;
The array min[] records the minimum value found from index i to the end of the array:
int[] min = new int[n];
min[n-1] = array[n-1];
for (int i = n - 2; i >= 0; i--)
    min[i] = Math.min(min[i+1], array[i]);
Then compare the original array against min[]:
int[] result = new int[n];
result[n-1] = -1;
for (int i = 0; i < n - 1; i++)
    result[i] = min[i+1] < array[i] ? min[i+1] : -1;
(Note that this yields the minimum of the elements to the right rather than the first smaller one; for that reading of the question, see the stack-based answers above.)
Here is the new solution to find the "next smaller element", reading "next" as the immediately following element:
int n = array.length;
int[] answer = new int[n];
answer[n-1] = -1;
for (int i = 0; i < n - 1; i++)
    answer[i] = array[i+1] < array[i] ? array[i+1] : -1;
All that is actually not required, I think.
case 1: a,b
answer: -a+b
case 2: a,b,c
answer: a-2b+c
case 3: a,b,c,d
answer: -a+3b-3c+d
case 4: a,b,c,d,e
answer: a-4b+6c-4d+e
...
Recognize the pattern? It is Pascal's triangle!
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
So it can be calculated using the Nth row of Pascal's triangle, with alternating + and - signs for odd/even levels!
It is O(1).
You can solve this in O(n) runtime with O(n) space complexity.
Start with a stack and keep pushing elements until you find arr[i] such that arr[i] < the stack's top element; then store this index.
Code Snippet:
#include <vector>
#include <stack>
using namespace std;

vector<int> findNext(vector<int> values) {
    stack<int> st;
    vector<int> nextSmall(values.size(), -1);
    st.push(0);
    for (int i = 1; i < (int)values.size(); i++) {
        while (!st.empty() && values[i] < values[st.top()]) {
            // change values[i] < values[st.top()] to values[i] > values[st.top()]
            // to find the next greater element instead
            nextSmall[st.top()] = i;  // stores the index of the next smaller element
            st.pop();
        }
        st.push(i);
    }
    return nextSmall;
}
Solution with O(1) space complexity and O(n) time complexity.
void replace_next_smallest(int a[], int n)
{
    int ns = a[n - 1];
    for (int i = n - 1; i >= 0; i--) {
        if (i == n - 1) {
            a[i] = -1;
        }
        else if (a[i] > ns) {
            int t = ns;
            ns = a[i];
            a[i] = t;
        }
        else if (a[i] == ns) {
            a[i] = a[i + 1];
        }
        else {
            ns = a[i];
            a[i] = -1;
        }
    }
}
Solution with O(n) time complexity and O(1) space complexity. This solution is not complex to understand and is implemented without a stack.
def min_secMin(a, n):
    min = a[0]
    sec_min = a[1]
    for i in range(1, n):
        if a[i] < min:
            sec_min = min
            min = a[i]
        if a[i] > min and a[i] < sec_min:
            sec_min = a[i]
    return min, sec_min
Given an array, find the next smaller element in the array for each element without changing the original order of the elements, where arr is the array and n is the length of the array.
Using Python logic:
def next_smallest_array(arr, n):
    for i in range(0, n - 1, 1):
        if arr[i] > arr[i + 1]:
            arr[i] = arr[i + 1]
        else:
            arr[i] = -1
    arr[n - 1] = -1
    return arr
next_smallest_array([4, 2, 1, 5, 3], 5)
Output is [2, 1, -1, 3, -1]
next_smallest_array([1, 2, 3, 4, 5], 5)
Output is [-1, -1, -1, -1, -1]

How to tell if an array is a permutation in O(n)?

Input: A read-only array of N elements containing integer values from 1 to N (some integer values can appear more than once!). And a memory zone of a fixed size (10, 100, 1000 etc - not depending on N).
How to tell in O(n) if the array represents a permutation?
What I achieved so far (an answer proved that this was not good):
1. I use the limited memory area to store the sum and the product of the array.
2. I compare the sum with N*(N+1)/2 and the product with N!
3. I know that if condition (2) is true I might have a permutation. I'm wondering if there's a way to prove that condition (2) is sufficient to tell if I have a permutation. So far I haven't figured this out...
I'm very slightly skeptical that there is a solution. Your problem seems to be very close to one posed several years ago in the mathematical literature, with a summary given here ("The Duplicate Detection Problem", S. Kamal Abdali, 2003) that uses cycle-detection -- the idea being the following:
If there is a duplicate, there exists a number j between 1 and N such that the following would lead to an infinite loop:
x := j;
do
{
    x := a[x];
}
while (x != j);
because a permutation consists of one or more subsets S of distinct elements s_0, s_1, ..., s_{k-1} where s_j = a[s_{j-1}] for all j between 1 and k-1, and s_0 = a[s_{k-1}], so all elements are involved in cycles -- one of the duplicates would not be part of such a subset.
e.g. if the array = [2, 1, 4, 6, 8, 7, 9, 3, 8], then the element 8 at position 5 is a duplicate because all the other elements form cycles: { 2 -> 1, 4 -> 6 -> 7 -> 9 -> 8 -> 3 }. Whereas the arrays [2, 1, 4, 6, 5, 7, 9, 3, 8] and [2, 1, 4, 6, 3, 7, 9, 5, 8] are valid permutations (with cycles { 2 -> 1, 4 -> 6 -> 7 -> 9 -> 8 -> 3, 5 } and { 2 -> 1, 4 -> 6 -> 7 -> 9 -> 8 -> 5 -> 3 } respectively).
Abdali goes into a way of finding duplicates. Basically the following algorithm (using Floyd's cycle-finding algorithm) works if you happen across one of the duplicates in question:
function is_duplicate(a, N, j)
{
    /* assume we've already scanned the array to make sure all elements
       are integers between 1 and N */
    x1 := j;
    x2 := j;
    do
    {
        x1 := a[x1];
        x2 := a[x2];
        x2 := a[x2];
    } while (x1 != x2);
    /* stops when it finds a cycle; x2 has gone around it twice,
       x1 has gone around it once.
       If j is part of that cycle, both will be equal to j. */
    return (x1 != j);
}
The difficulty is I'm not sure your problem as stated matches the one in his paper, and I'm also not sure if the method he describes runs in O(N) or uses a fixed amount of space. A potential counterexample is the following array:
[3, 4, 5, 6, 7, 8, 9, 10, ... N-10, N-9, N-8, N-7, N-2, N-5, N-5, N-3, N-5, N-1, N, 1, 2]
which is basically the identity permutation shifted by 2, with the elements [N-6, N-4, and N-2] replaced by [N-2, N-5, N-5]. This has the correct sum (not the correct product, but I reject taking the product as a possible detection method since the space requirements for computing N! with arbitrary precision arithmetic are O(N) which violates the spirit of the "fixed memory space" requirement), and if you try to find cycles, you will get cycles { 3 -> 5 -> 7 -> 9 -> ... N-7 -> N-5 -> N-1 } and { 4 -> 6 -> 8 -> ... N-10 -> N-8 -> N-2 -> N -> 2}. The problem is that there could be up to N cycles, (identity permutation has N cycles) each taking up to O(N) to find a duplicate, and you have to keep track somehow of which cycles have been traced and which have not. I'm skeptical that it is possible to do this in a fixed amount of space. But maybe it is.
This is a heavy enough problem that it's worth asking on mathoverflow.net (despite the fact that most of the time mathoverflow.net is cited on stackoverflow it's for problems which are too easy)
edit: I did ask on mathoverflow, there's some interesting discussion there.
This is impossible to do in O(1) space, at least with a single-scan algorithm.
Proof
Suppose you have processed N/2 of the N elements. Assuming the sequence is a permutation then, given the state of the algorithm, you should be able to figure out the set of N/2 remaining elements. If you can't figure out the remaining elements, then the algorithm can be fooled by repeating some of the old elements.
There are N choose N/2 possible remaining sets. Each of them must be represented by a distinct internal state of the algorithm, because otherwise you couldn't figure out the remaining elements. However, it takes log(X) bits to distinguish X states, so it takes BigTheta(log(N choose N/2)) space to store N choose N/2 states. That value grows with N, and therefore the algorithm's internal state cannot fit in O(1) space.
More Formal Proof
You want to create a program P which, given the final N/2 elements and the internal state of the linear-time-constant-space algorithm after it has processed N/2 elements, determines if the entire sequence is a permutation of 1..N. There is no time or space bound on this secondary program.
Assuming P exists we can create a program Q, taking only the internal state of the linear-time-constant-space algorithm, which determines the necessary final N/2 elements of the sequence (if it was a permutation). Q works by passing P every possible final N/2 elements and returning the set for which P returns true.
However, because Q has N choose N/2 possible outputs, it must have at least N choose N/2 possible inputs. That means the internal state of the original algorithm must be able to store at least N choose N/2 states, requiring BigTheta(log(N choose N/2)) bits, which is greater than constant size.
Therefore the original algorithm, which does have time and space bounds, also can't work correctly if it has constant-size internal state.
[I think this idea can be generalized, but thinking isn't proving.]
Consequences
BigTheta(log(N choose N/2)) is equal to BigTheta(N). Therefore just using a boolean array and ticking values as you encounter them is (probably) space-optimal, and time-optimal too since it takes linear time.
I doubt you would be able to prove that ;)
(1, 2, 4, 4, 4, 5, 7, 9, 9)
This multiset has the same sum (45 = 9*10/2) and the same product (362880 = 9!) as 1..9, yet it is not a permutation.
I think that more generally, this problem isn't solvable by processing the numbers in order. Suppose you are processing the elements in order and you are halfway the array. Now the state of your program has to somehow reflect which numbers you've encountered so far. This requires at least O(n) bits to store.
This isn't going to work due to the complexity being given as a function of N rather than M, implying that N >> M.
This was my shot at it, but for a Bloom filter to be useful you need a big M, at which point you may as well use simple bit toggling for something like integers.
http://en.wikipedia.org/wiki/Bloom_filter
For each element in the array:
    Run the k hash functions
    Check for inclusion in the Bloom filter
    If it is there, there is a probability you've seen the element before
    If it isn't, add it
When you are done, you may as well compare it to the results of a 1..N array in order, as that'll only cost you another N.
Now if I haven't put in enough caveats: it isn't 100%, or even close, since you specified complexity in N, which implies that N >> M, so fundamentally it won't work as you have specified it.
BTW, the false positive rate for an individual item should be
e = 2^(-m/(n*sqrt(2)))
Playing around with that will give you an idea how big M would need to be to be acceptable.
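For illustration, a toy C sketch of the Bloom-filter bookkeeping described above, with two ad-hoc hash functions and a fixed M of 1024 bits (all the constants and names are made up, and as the answer says, a small M dooms the idea for large N):
#include <stdint.h>

#define M_BITS 1024
static uint8_t bloom[M_BITS / 8];            /* the fixed-size filter */

static unsigned h1(unsigned x) { return (x * 2654435761u) % M_BITS; }
static unsigned h2(unsigned x) { return (x * 40503u + 12345u) % M_BITS; }

static int  get_bit(unsigned b) { return (bloom[b / 8] >> (b % 8)) & 1; }
static void set_bit(unsigned b) { bloom[b / 8] |= (uint8_t)(1u << (b % 8)); }

/* Returns 1 if x was "probably seen" before, 0 if definitely new. */
int bloom_check_and_add(unsigned x) {
    int seen = get_bit(h1(x)) && get_bit(h2(x));
    set_bit(h1(x));
    set_bit(h2(x));
    return seen;
}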
I don't know how to do it in O(N), or even if it can be done in O(N). I know that it can be done in O(N log N) if you use an appropriate sort and compare.
That being said, there are many O(N) techniques that can be used to show that one is NOT a permutation of the other.
Check the length. If unequal, obviously not a permutation.
Create an XOR fingerprint. If the value of all the elements XOR'ed together does not match, then it cannot be a permutation. A match would, however, be inconclusive.
Find the sum of all elements. Although the result may overflow, that should not be a worry when matching this 'fingerprint'. If, however, you did a checksum that involved multiplying, then overflow would be an issue.
Hope this helps.
You might be able to do this in randomized O(n) time and constant space by computing sum(x_i) and product(x_i) modulo a bunch of different randomly chosen constants C of size O(n). This basically gets you around the problem that product(x_i) gets too large.
There are still a lot of open questions, though, like whether sum(x_i) = N(N+1)/2 and product(x_i) = N! are sufficient conditions to guarantee a permutation, and what the chance is that a non-permutation generates a false positive (I would hope ~1/C for each C you try, but maybe not).
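A minimal C sketch of that randomized fingerprinting, assuming the values fit in int and using moduli small enough that the 64-bit products cannot overflow (the function name and constants are illustrative):
#include <stdint.h>
#include <stdlib.h>

/* Compare sum and product of the input against those of 1..N modulo a few
   random constants C. A mismatch proves "not a permutation"; matches only
   make it likely. Seed with srand() in real use. */
int fingerprint_match(const int a[], int n, int trials) {
    for (int t = 0; t < trials; t++) {
        uint64_t c = (uint64_t)(rand() % 2000000000) + 2;  /* random modulus C */
        uint64_t s_in = 0, p_in = 1, s_ref = 0, p_ref = 1;
        for (int i = 0; i < n; i++) {
            s_in  = (s_in + (uint64_t)a[i]) % c;        /* fingerprint of input */
            p_in  = (p_in * ((uint64_t)a[i] % c)) % c;
            s_ref = (s_ref + (uint64_t)(i + 1)) % c;    /* fingerprint of 1..N */
            p_ref = (p_ref * ((uint64_t)(i + 1) % c)) % c;
        }
        if (s_in != s_ref || p_in != p_ref)
            return 0;   /* definitely not a permutation */
    }
    return 1;           /* consistent with a permutation on every trial */
}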
It's a permutation if and only if there are no duplicate values in the array (given that all values are in 1..N); that should be easy to check in O(N), as sketched below.
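A sketch of that straightforward check in C; note the seen[] flags are O(N) extra space, which is what the impossibility arguments elsewhere in this thread are about:
#include <stdbool.h>
#include <stdlib.h>

bool is_permutation_flags(const int a[], int n) {
    bool *seen = calloc(n, sizeof *seen);   /* O(N) scratch space */
    bool ok = true;
    for (int i = 0; i < n && ok; i++) {
        if (a[i] < 1 || a[i] > n || seen[a[i] - 1])
            ok = false;                     /* out of range, or a duplicate */
        else
            seen[a[i] - 1] = true;
    }
    free(seen);
    return ok;
}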
Depending on how much space you have relative to N, you might try using hashing and buckets.
That is, iterate over the entire list, hash each element, and store it in a bucket. You'll need to find a way to reduce bucket collisions from the hashes, but that is a solved problem.
If an element tries to go into a bucket that already holds an item identical to it, you have found a duplicate, so it is not a permutation.
This type of solution would be O(N) as you touch each element only once.
However, the problem with this is whether space M is larger than N or not. If M > N, this solution will be fine, but if M < N, then you will not be able to solve the problem with 100% accuracy.
First, an information theoretic reason why this may be possible. We can trivially check that the numbers in the array are in bounds in O(N) time and O(1) space. To specify any such array of in-bounds numbers requires N log N bits of information. But to specify a permutation requires approximately (N log N) - N bits of information (Stirling's approximation). Thus, if we could acquire N bits of information during testing, we might be able to know the answer. This is trivial to do in N time (in fact, with M static space we can pretty easily acquire log M information per step, and under special circumstances we can acquire log N information).
On the other hand, we only get to store something like M log N bits of information in our static storage space, which is presumably much less than N, so it depends greatly what the shape of the decision surface is between "permutation" and "not".
I think that this is almost possible but not quite given the problem setup. I think one is "supposed" to use the cycling trick (as in the link that Iulian mentioned), but the key assumption of having a tail in hand fails here because you can index the last element of the array with a permutation.
The sum and the product will not guarantee the correct answer, since these hashes are subject to collisions, i.e. different inputs might potentially produce identical results. If you want a perfect hash, a single-number result that actually fully describes the numerical composition of the array, it might be the following.
Imagine that for any number i in [1, N] range you can produce a unique prime number P(i) (for example, P(i) is the i-th prime number). Now all you need to do is calculate the product of all P(i) for all numbers in your array. The product will fully and unambiguously describe the composition of your array, disregarding the ordering of values in it. All you need to do is to precalculate the "perfect" value (for a permutation) and compare it with the result for a given input :)
Of course, the algorithm like this does not immediately satisfy the posted requirements. But at the same time it is intuitively too generic: it allows you to detect a permutation of absolutely any numerical combination in an array. In your case you need to detect a permutation of a specific combination 1, 2, ..., N. Maybe this can somehow be used to simplify things... Probably not.
Alright, this is different, but it appears to work!
I ran this test program (C#):
static void Main(string[] args) {
    for (int j = 3; j < 100; j++) {
        int x = 0;
        for (int i = 1; i <= j; i++) {
            x ^= i;
        }
        Console.WriteLine("j: " + j + "\tx: " + x + "\tj%4: " + (j % 4));
    }
}
Short explanation: x is the result of all the XORs for a single list, i is an element of the list, and j is the size of the list. Since all I'm doing is XOR, the order of the elements doesn't matter. But I'm looking at what correct permutations look like when this is applied.
If you look at j % 4, you can do a switch on that value and get something like this:
bool IsPermutation = false;
switch (j % 4) {
    case 0:
        IsPermutation = (x == j);
        break;
    case 1:
        IsPermutation = (x == 1);
        break;
    case 2:
        IsPermutation = (x == j + 1);
        break;
    case 3:
        IsPermutation = (x == 0);
        break;
}
Now I acknowledge that this probably requires some fine tuning. It's not 100%, but it's a good easy way to get started. Maybe with some small checks running throughout the XOR loop, this could be perfected. Try starting somewhere around there.
It looks like this is asking to find a duplicate in the array with a stack machine.
It sounds impossible to know the full history of the stack while you extract each number and have limited knowledge of the numbers that were taken out.
Here's proof it can't be done:
Suppose by some artifice you have detected no duplicates in all but the last cell. Then the problem reduces to checking if that last cell contains a duplicate.
If you have no structured representation of the problem state so far, then you are reduced to performing a linear search over the entire previous input, for EACH cell. It's easy to see how this leaves you with a quadratic-time algorithm.
Now, suppose through some clever data structure that you actually know which number you expect to see last. Then certainly that knowledge takes at least enough bits to store the number you seek -- perhaps one memory cell? But there is a second-to-last number and a second-to-last sub-problem: then you must similarly represent a set of two possible numbers yet-to-be-seen. This certainly requires more storage than encoding only for one remaining number. By a progression of similar arguments, the size of the state must grow with the size of the problem, unless you're willing to accept a quadratic-time worst-case.
This is the time-space trade-off. You can have quadratic time and constant space, or linear time and linear space. You cannot have linear time and constant space.
Check out the following solution. It uses O(1) additional space.
It alters the array during the checking process, but returns it back to its initial state at the end.
The idea is:
1. Check if any of the elements is out of the range [1, n] => O(n).
2. Go over the numbers in order (all of them are now assured to be in the range [1, n]), and for each number x (e.g. 3): go to the x'th cell (e.g. a[3]); if it's negative, then someone already visited it before you => not a permutation. Otherwise (a[3] is positive), multiply it by -1. => O(n).
3. Go over the array and negate all negative numbers.
This way, we know for sure that all elements are in the range [1, n] and that there are no duplicates => the array is a permutation.
int is_permutation_linear(int a[], int n) {
    int i, is_permutation = 1;
    // Step 1.
    for (i = 0; i < n; ++i) {
        if (a[i] < 1 || a[i] > n) {
            return 0;
        }
    }
    // Step 2.
    for (i = 0; i < n; ++i) {
        if (a[abs(a[i]) - 1] < 0) {
            is_permutation = 0;
            break;
        }
        a[abs(a[i]) - 1] *= -1;
    }
    // Step 3.
    for (i = 0; i < n; ++i) {
        if (a[i] < 0) {
            a[i] *= -1;
        }
    }
    return is_permutation;
}
Here is the complete program that tests it:
/*
 * is_permutation_linear.c
 *
 * Created on: Dec 27, 2011
 * Author: Anis
 */
#include <stdio.h>

int abs(int x) {
    return x >= 0 ? x : -x;
}

int is_permutation_linear(int a[], int n) {
    int i, is_permutation = 1;
    for (i = 0; i < n; ++i) {
        if (a[i] < 1 || a[i] > n) {
            return 0;
        }
    }
    for (i = 0; i < n; ++i) {
        if (a[abs(a[i]) - 1] < 0) {
            is_permutation = 0;
            break;
        }
        a[abs(a[i]) - 1] *= -1;
    }
    for (i = 0; i < n; ++i) {
        if (a[i] < 0) {
            a[i] *= -1;
        }
    }
    return is_permutation;
}

void print_array(int a[], int n) {
    int i;
    for (i = 0; i < n; i++) {
        printf("%2d ", a[i]);
    }
}

int main() {
    int arrays[9][8] = { { 1, 2, 3, 4, 5, 6, 7, 8 },
                         { 8, 6, 7, 2, 5, 4, 1, 3 },
                         { 0, 1, 2, 3, 4, 5, 6, 7 },
                         { 1, 1, 2, 3, 4, 5, 6, 7 },
                         { 8, 7, 6, 5, 4, 3, 2, 1 },
                         { 3, 5, 1, 6, 8, 4, 7, 2 },
                         { 8, 3, 2, 1, 4, 5, 6, 7 },
                         { 1, 1, 1, 1, 1, 1, 1, 1 },
                         { 1, 8, 4, 2, 1, 3, 5, 6 } };
    int i;
    for (i = 0; i < 9; i++) {
        printf("array: ");
        print_array(arrays[i], 8);
        printf("is %spermutation.\n",
               is_permutation_linear(arrays[i], 8) ? "" : "not ");
        printf("after: ");
        print_array(arrays[i], 8);
        printf("\n\n");
    }
    return 0;
}
And its output:
array: 1 2 3 4 5 6 7 8 is permutation.
after: 1 2 3 4 5 6 7 8
array: 8 6 7 2 5 4 1 3 is permutation.
after: 8 6 7 2 5 4 1 3
array: 0 1 2 3 4 5 6 7 is not permutation.
after: 0 1 2 3 4 5 6 7
array: 1 1 2 3 4 5 6 7 is not permutation.
after: 1 1 2 3 4 5 6 7
array: 8 7 6 5 4 3 2 1 is permutation.
after: 8 7 6 5 4 3 2 1
array: 3 5 1 6 8 4 7 2 is permutation.
after: 3 5 1 6 8 4 7 2
array: 8 3 2 1 4 5 6 7 is permutation.
after: 8 3 2 1 4 5 6 7
array: 1 1 1 1 1 1 1 1 is not permutation.
after: 1 1 1 1 1 1 1 1
array: 1 8 4 2 1 3 5 6 is not permutation.
after: 1 8 4 2 1 3 5 6
The Java solution below answers the question partly. The solution doesn't contain nested loops, but the Arrays.sort call inside removeDuplicates makes the time complexity O(n log n) rather than O(n). About memory I'm not sure. The question appears first in relevant Google searches, so it can probably be useful for somebody.
// Requires: import java.util.ArrayList; import java.util.Arrays; import java.util.List;

public static boolean isPermutation(int[] array) {
    boolean result = true;
    array = removeDuplicates(array);
    int startValue = 1;
    for (int i = 0; i < array.length; i++) {
        if (startValue + i != array[i]) {
            return false;
        }
    }
    return result;
}

public static int[] removeDuplicates(int[] input) {
    Arrays.sort(input);
    List<Integer> result = new ArrayList<Integer>();
    int current = input[0];
    boolean found = false;
    for (int i = 0; i < input.length; i++) {
        if (current == input[i] && !found) {
            found = true;
        } else if (current != input[i]) {
            result.add(current);
            current = input[i];
            found = false;
        }
    }
    result.add(current);
    int[] array = new int[result.size()];
    for (int i = 0; i < array.length; i++) {
        array[i] = result.get(i);
    }
    return array;
}

public static void main(String... args) {
    int[] input = new int[] { 4, 2, 3, 4, 1 };
    System.out.println(isPermutation(input));
    // output true
    input = new int[] { 4, 2, 4, 1 };
    System.out.println(isPermutation(input));
    // output false
}
int solution(int A[], int N) {
    int i, j, count = 0, d = 0, temp = 0, max;
    // Bubble-sort the array in ascending order: O(N^2).
    for (i = 0; i < N - 1; i++) {
        for (j = 0; j < N - i - 1; j++) {
            if (A[j] > A[j + 1]) {
                temp = A[j + 1];
                A[j + 1] = A[j];
                A[j] = temp;
            }
        }
    }
    // After sorting, a permutation must read max, max-1, ... from the end.
    max = A[N - 1];
    for (i = N - 1; i >= 0; i--) {
        if (A[i] == max) {
            count++;
        }
        else {
            d++;  // element out of sequence
        }
        max = max - 1;
    }
    if (d != 0) {
        return 0;
    }
    else {
        return 1;
    }
}
