Custom rowSums of a Matrix in Chapel - sparse-matrix

Following up on this question. I have a Matrix (yes, I do) which will be large and sparse.
A = [
[0, 0, 0, 1.2, 0]
[0, 0, 0, 0, 0]
[3.5, 0, 0, 0, 0]
[0, 7, 0, 0, 0]
]
And I want to create a vector v where v[j] is the sum of x * log(x) over the entries x of row j of A.
I believe there is an iterator like [x * log(x) for x in row] do... but I'm having a hard time finding the syntax. One particular bugaboo is to avoid taking log(0), so maybe an if statement in the iterator?

I believe there is an iterator like [x * log(x) for x in row] do... but I'm having a hard time finding the syntax.
Instead of creating an iterator, we can create a function to compute x * log(x) and just pass an array (or array slice) to it, allowing promotion to take care of the rest.
Instead of doing a + reduce over the array slices like we did before,
forall i in indices {
  rowsums[i] = + reduce(A[i, ..]);
}
We can do a + reduce over a promoted operation on the array slices, like this:
forall i in indices {
  rowsums[i] = + reduce(logProduct(A[i, ..]));
}
where logProduct(x) can include an if-statement handling the special case of 0, as you mentioned above.
Putting this all together looks something like this:
config const n = 10;

proc main() {
  const indices = 1..n;
  const Adom = {indices, indices};
  var A: [Adom] real;

  populate(A);

  var v = rowSums(A);

  writeln('matrix:');
  writeln(A);
  writeln('row sums:');
  writeln(v);
}

/* Populate A, leaving many zeros */
proc populate(A: [?Adom]) {
  forall i in Adom.dim(1) by 2 do   // every other row
    forall j in Adom.dim(2) by 3 do // every third column
      A[i, j] = i*j;
}

/* Compute row sums with custom function (logProduct) */
proc rowSums(A: [?Adom] ?t) {
  var v: [Adom.dim(1)] t;
  [i in v.domain] v[i] = + reduce(logProduct(A[i, ..]));
  return v;
}

/* Custom function to handle the log(0) case */
proc logProduct(x: real) {
  if x == 0 then return 0.0; // return a real, matching the other branch
  return x * log(x);
}

Related

Sum of distance between every pair of same element in an array

I have an array [a0,a1,...., an] I want to calculate the sum of the distance between every pair of the same element.
1)First element of array will always be zero.
2)Second element of array will be greater than zero.
3) No two consecutive elements can be same.
4) Size of array can be upto 10^5+1 and elements of array can be from 0 to 10^7
For example, if the array is [0,2,5,0,5,7,0], then the distance between the first 0 and the second 0 is 2, the distance between the first 0 and the third 0 is 5, and the distance between the second 0 and the third 0 is 2; the distance between the first 5 and the second 5 is 1. (Distance here counts the elements strictly between the pair.) Hence the sum of distances between same elements is 2 + 5 + 2 + 1 = 10.
For this I tried to build a formula: for every element occurring more than once (0-based indexing, first element always zero), sum = sum + (lastIndex - firstIndex - 1) * (NumberOfOccurrences - 1); if the number of occurrences of the element is odd, subtract 1 from sum, otherwise leave it as is. But this approach does not work in every case.
It does work if the array is [0,5,7,0] or [0,2,5,0,5,7,0,1,2,3,0].
Can you suggest another efficient approach or formula?
Edit :- This problem is not a part of any coding contest, it's just a little part of a bigger problem
My method requires space that scales with the number of possible values for elements, but has O(n) time complexity.
I've made no effort to check that the sum doesn't overflow an unsigned long, I just assume that it won't. Same for checking that any input values are in fact no more than max_val. These are details that would have to be addressed.
For each possible value, total_distance keeps track of how much would be added to the sum if an instance of that value were encountered right now, and instances_so_far keeps track of how many instances of the value have already been seen (which is how much total_distance grows per index of gap). To make this efficient, the last index at which each value was encountered is also tracked, so total_distance only needs to be updated when that particular value is encountered again, instead of nested loops adding to every value at every step.
#include <stdio.h>
#include <stddef.h>

// #define MAX_VAL 15
#define MAX_VAL 10000000 // a const variable can't size a file-scope array in C

unsigned long instances_so_far[MAX_VAL + 1] = {0};
unsigned long total_distance[MAX_VAL + 1] = {0};
unsigned long last_index_encountered[MAX_VAL + 1];

// void print_array(unsigned long *array, size_t len) {
//     printf("{");
//     for (size_t i = 0; i < len; ++i) {
//         printf("%lu,", array[i]);
//     }
//     printf("}\n");
// }

unsigned long get_sum(unsigned long *array, size_t len) {
    unsigned long sum = 0;
    for (size_t i = 0; i < len; ++i) {
        if (instances_so_far[array[i]] >= 1) {
            total_distance[array[i]] += (i - last_index_encountered[array[i]]) * instances_so_far[array[i]] - 1;
        }
        sum += total_distance[array[i]];
        instances_so_far[array[i]] += 1;
        last_index_encountered[array[i]] = i;
        // printf("inst ");
        // print_array(instances_so_far, MAX_VAL + 1);
        // printf("totd ");
        // print_array(total_distance, MAX_VAL + 1);
        // printf("encn ");
        // print_array(last_index_encountered, MAX_VAL + 1);
        // printf("sums %lu\n", sum);
        // printf("\n");
    }
    return sum;
}

unsigned long test[] = {0,1,0,2,0,3,0,4,5,6,7,8,9,10,0};

int main(void) {
    printf("%lu\n", get_sum(test, sizeof(test) / sizeof(test[0])));
    return 0;
}
I've tested it with a few of the examples here, and gotten the answers I expected.
I had to use static storage for the arrays because they overflowed the stack if put there.
I've left in the commented-out code I used for debugging, it's helpful to understand what's going on, if you reduce max_val to a smaller number.
Please let me know if you find a counter-example that fails.
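If you want to hunt for counter-examples, a direct O(n^2) brute force over all pairs makes a handy cross-check. A minimal Python sketch (my addition), using the question's definition of distance as the count of elements strictly between a pair:

def brute_force_sum(arr):
    total = 0
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                total += j - i - 1  # elements strictly between the pair
    return total

assert brute_force_sum([0, 2, 5, 0, 5, 7, 0]) == 10
assert brute_force_sum([0, 1, 0, 2, 0, 3, 0, 4, 5, 6, 7, 8, 9, 10, 0]) == 54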
Here is Python 3 code for your problem. This works on all the examples given in your question and in the comments--I included the test code.
This works by looking at how each consecutive pair of repeated elements adds to the overall sum of distances. If some value occurs at 6 locations in the list, the pair spans look like this:
x  x  x  x  x  x    The repeated element's locations in the array
 --                 First, consecutive pairs
    --
       --
          --
             --
 -----              Now, pairs that have one element inside
    -----
       -----
          -----
 --------           Now, pairs that have two elements inside
    --------
       --------
 -----------        Now, pairs that have three elements inside
    -----------
 --------------     Now, pairs that have four elements inside
If, for each gap between consecutive locations, we count how many of those pair spans cross it, we get:
5 8 9 8 5
And if we look at the differences between those values we get:
3 1 -1 -3
Now, if we use my preferred definition of "distance" for a pair, namely the difference of their indices, we can use those multiplicities for consecutive pairs to calculate the overall sum of distances for all pairs. But since your definition is not mine, we calculate the sum for my definition, then adjust it for yours.
This code makes one pass through the original array to get the occurrences for each element value in the array, then another pass through those distinct element values. (I used the pairwise routine to avoid another pass through the array.) That makes my algorithm O(n) in time complexity, where n is the length of the array. This is much better than the naive O(n^2). Since my code builds an array of the repeated elements, once per unique element value, this has space complexity of at worst O(n).
import collections
import itertools

def pairwise(iterable):
    """s -> (s0,s1), (s1,s2), (s2, s3), ..."""
    a, b = itertools.tee(iterable)
    next(b, None)
    return zip(a, b)

def sum_distances_of_pairs(alist):
    # Make a dictionary giving the indices for each element of the list.
    element_ndxs = collections.defaultdict(list)
    for ndx, element in enumerate(alist):
        element_ndxs[element].append(ndx)

    # Sum the distances of pairs for each element, using my def of distance
    sum_of_all_pair_distances = 0
    for element, ndx_list in element_ndxs.items():
        # Filter out elements not occurring more than once and count the rest
        if len(ndx_list) < 2:
            continue
        # Sum the distances of pairs for this element, using my def of distance
        sum_of_pair_distances = 0
        multiplicity = len(ndx_list) - 1
        delta_multiplicity = multiplicity - 2
        for ndx1, ndx2 in pairwise(ndx_list):
            # Update the contribution of this consecutive pair to the sum
            sum_of_pair_distances += multiplicity * (ndx2 - ndx1)
            # Prepare for the next consecutive pair
            multiplicity += delta_multiplicity
            delta_multiplicity -= 2
        # Adjust that sum of distances for the desired definition of distance
        cnt_all_pairs = len(ndx_list) * (len(ndx_list) - 1) // 2
        sum_of_pair_distances -= cnt_all_pairs
        # Add that sum for this element into the overall sum
        sum_of_all_pair_distances += sum_of_pair_distances

    return sum_of_all_pair_distances

assert sum_distances_of_pairs([0, 2, 5, 0, 5, 7, 0]) == 10
assert sum_distances_of_pairs([0, 5, 7, 0]) == 2
assert sum_distances_of_pairs([0, 2, 5, 0, 5, 7, 0, 1, 2, 3, 0]) == 34
assert sum_distances_of_pairs([0, 0, 0, 0, 1, 2, 0]) == 18
assert sum_distances_of_pairs([0, 1, 0, 2, 0, 3, 4, 5, 6, 7, 8, 9, 0, 10, 0]) == 66
assert sum_distances_of_pairs([0, 1, 0, 2, 0, 3, 0, 4, 5, 6, 7, 8, 9, 10, 0]) == 54

Insert a smallest possible positive integer into an array of unique integers [duplicate]

This question already has answers here:
Find the Smallest Integer Not in a List
(28 answers)
Closed 3 years ago.
I am trying to tackle this interview question: given an array of unique positive integers, find the smallest possible number to insert into it so that every integer is still unique. The algorithm should be in O(n) and the additional space complexity should be constant. Assigning values in the array to other integers is allowed.
For example, for an array [5, 3, 2, 7], output should be 1. However for [5, 3, 2, 7, 1], the answer should then be 4.
My first idea is to sort the array, then go through the array again to find where the continuous sequence breaks, but sorting needs more than O(n).
Any ideas would be appreciated!
My attempt:
The array A is assumed 1-indexed. We call a value active if it is nonzero and does not exceed n.
Scan the array until you find an active value, say A[i] = k (if you can't find one, stop).
While k is active: read A[k], clear A[k] (set it to zero), and let k become the value just read.
Continue scanning from i until you reach the end of the array.
After this pass, every array entry A[k] such that k appears somewhere in the array has been cleared.
Find the first nonzero entry, and report its index.
E.g.
[5, 3, 2, 7], clear A[3]
[5, 3, 0, 7], clear A[2]
[5, 0, 0, 7], done
The answer is 1.
E.g.
[5, 3, 2, 7, 1], clear A[5],
[5, 3, 2, 7, 0], clear A[1]
[0, 3, 2, 7, 0], clear A[3],
[0, 3, 0, 7, 0], clear A[2],
[0, 0, 0, 7, 0], done
The answer is 4.
The behavior of the first pass is linear because every number is looked at once (and immediately cleared), and i increases regularly.
The second pass is a linear search.
A = [5, 3, 2, 7, 1]
N = len(A)
print(A)
for i in range(N):
    k = A[i]
    while k > 0 and k <= N:
        A[k-1], k = 0, A[k-1]  # -1 for 0-based indexing
        print(A)
[5, 3, 2, 7, 1]
[5, 3, 2, 7, 0]
[0, 3, 2, 7, 0]
[0, 3, 2, 7, 0]
[0, 3, 0, 7, 0]
[0, 0, 0, 7, 0]
[0, 0, 0, 7, 0]
Update:
Based on גלעד ברקן's idea, we can mark the array elements in a way that does not destroy the values, then report the index of the first unmarked element.

print(A)
for a in A:
    a = abs(a)
    if a <= N:
        A[a-1] = -A[a-1]  # -1 for 0-based indexing
    print(A)
[5, 3, 2, 7, 1]
[5, 3, 2, 7, -1]
[5, 3, -2, 7, -1]
[5, -3, -2, 7, -1]
[5, -3, -2, 7, -1]
[-5, -3, -2, 7, -1]
From the question description: "Assigning values in the array to other integers is allowed." (Strictly speaking, commandeering n sign bits this way is O(n) extra bits, not constant space.)
Loop over the array and multiply A[ |A[i]| - 1 ] by -1 for each |A[i]| <= array length. Loop a second time and output (the index + 1) of the first cell that is not negative, or (array length + 1) if they are all marked. This takes advantage of the fact that the answer must lie in the range 1 to (array length + 1).
I will use 1-based indexing.
The idea is to reuse the input collection and arrange to swap integer i into the ith place if its current position is larger than i. This can be performed in O(n).
Then, on a second iteration, you find the first index i not containing i, which is again O(n).
In Smalltalk, implemented in Array (self is the array):
firstMissing
self size to: 1 by: -1 do: [:i |
[(self at: i) < i] whileTrue: [self swap: i with: (self at: i)]].
1 to: self size do: [:i |
(self at: i) = i ifFalse: [^i]].
^self size + 1
So we have two loops in O(n), but there is also another loop inside the first one (whileTrue:). So is the first loop really O(n)?
Yes, because each element is swapped at most once: every swap puts one value into its right place, where it then stays. The cumulative number of swaps is therefore bounded by the array size, the overall cost of the first loop is at most 2*n, and the total cost including the last search is at most 3*n, still O(n).
You also see that we don't bother to swap in the case (self at: i) > i and: [(self at: i) <= self size]. Why? Because in that case we can be sure there is a smaller missing element.
A small test case:
| trial |
trial := (1 to: 100100) asArray shuffled first: 100000.
self assert: trial copy firstMissing = trial sorted firstMissing.
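For those who don't read Smalltalk, here is a rough Python transliteration of firstMissing (my mapping of the 1-based logic onto a 0-based list, not from the original answer):

def first_missing(a):
    n = len(a)
    # first loop: move each value v (with v < current position) into slot v
    for i in range(n, 0, -1):
        while a[i - 1] < i:
            v = a[i - 1]
            a[i - 1], a[v - 1] = a[v - 1], a[i - 1]
    # second loop: report the first slot not holding its own (1-based) index
    for i in range(1, n + 1):
        if a[i - 1] != i:
            return i
    return n + 1

assert first_missing([5, 3, 2, 7]) == 1
assert first_missing([5, 3, 2, 7, 1]) == 4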
You could do the following.
Find the maximum (m), the sum of all elements (s), and the number of elements (n).
There are m-n elements missing, and their sum is q = sum(1..m) - s; there is a closed-form expression for sum(1..m), namely m(m+1)/2.
If you are missing only one integer, you're done: report q.
If you are missing more than one (m-n > 1), note that the sum of the missing integers is q, and at least one of them will be no larger than the average q/(m-n).
You start from the top, except you will only take into account integers smaller than q/(m-n) - this will be the new m, only elements below that maximum contribute to the new s and n. Do this until you are left with only one missing integer.
Still, this may not be linear time, I'm not sure.
EDIT: you should use the candidate plus half the input size as a pivot to reduce the constant factor here – see Daniel Schepler’s comment – but I haven’t had time to get it working in the example code yet.
This isn’t optimal – there’s a clever solution being looked for – but it’s enough to meet the criteria :)
Define the smallest possible candidate so far: 1.
If the size of the input is 0, the smallest possible candidate is a valid candidate, so return it.
Partition the input into < pivot and > pivot (with median of medians pivot, like in quicksort).
If the size of ≤ pivot is less than pivot itself, there’s a free value in there, so start over at step 2 considering only the < pivot partition.
Otherwise (when it’s = pivot), the new smallest possible candidate is the pivot + 1. Start over at step 2 considering only the > pivot partition.
I think that works…?
'use strict';

const swap = (arr, i, j) => {
    [arr[i], arr[j]] = [arr[j], arr[i]];
};

// dummy pivot selection, because this part isn’t important
const selectPivot = (arr, start, end) =>
    start + Math.floor(Math.random() * (end - start));

const partition = (arr, start, end) => {
    let mid = selectPivot(arr, start, end);
    const pivot = arr[mid];
    swap(arr, mid, start);
    mid = start;

    for (let i = start + 1; i < end; i++) {
        if (arr[i] < pivot) {
            mid++;
            swap(arr, i, mid);
        }
    }

    swap(arr, mid, start);
    return mid;
};

const findMissing = arr => {
    let candidate = 1;
    let start = 0;
    let end = arr.length;

    for (;;) {
        if (start === end) {
            return candidate;
        }

        const pivotIndex = partition(arr, start, end);
        const pivot = arr[pivotIndex];

        if (pivotIndex + 1 < pivot) {
            end = pivotIndex;
        } else {
            //assert(pivotIndex + 1 === pivot);
            candidate = pivot + 1;
            start = pivotIndex + 1;
        }
    }
};
const createTestCase = (size, max) => {
    if (max <= size) {
        throw new Error('size must be less than max');
    }

    const arr = Array.from({length: max}, (_, i) => i + 1);
    const expectedIndex = Math.floor(Math.random() * size);
    arr.splice(expectedIndex, 1 + Math.floor(Math.random() * (max - size - 1)));

    for (let i = 0; i < size; i++) {
        let j = i + Math.floor(Math.random() * (size - i));
        swap(arr, i, j);
    }

    return {
        input: arr.slice(0, size),
        expected: expectedIndex + 1,
    };
};

for (let i = 0; i < 5; i++) {
    const test = createTestCase(1000, 1024);
    console.log(findMissing(test.input), test.expected);
}
The correct method I almost got on my own, but I had to search for it, and I found it here: https://www.geeksforgeeks.org/find-the-smallest-positive-number-missing-from-an-unsorted-array/
Note: This method is destructive to the original data
Nothing in the original question said you could not be destructive.
I will explain what you need to do now.
The basic "aha" here is that the first missing number must come within the first N positive numbers, where N is the length of the array.
Once you understand this and realize you can use the values in the array itself as markers, you just have one problem you need to address: Does the array have numbers less than 1 in it? If so we need to deal with them.
Dealing with 0s or negative numbers can be done in O(n) time. Keep two indices, one for the current position and one for the end of the array. As we scan, whenever we find a 0 or negative number, we swap it (using a third integer as a temporary) with the value at the end pointer, then decrement the end-of-array pointer. We continue until the current pointer passes the end-of-array pointer.
Code example:
while (list[end] < 1) {
    end--;
}
while (cur < end) {
    if (list[cur] < 1) {
        swap(list[cur], list[end]);
        while (list[end] < 1) {
            end--;
        }
    }
    cur++;
}
Now we have the end of the array, and a truncated array. From here we need to see how we can use the array itself. Since all numbers that we care about are positive, and we have a pointer to the position of how many of them there are, we can simply multiply one by -1 to mark that place as present if there was a number in the array there.
e.g. [5, 3, 2, 7, 1] when we read 3, we change it to [5, 3, -2, 7, 1]
Code example:
for (cur = 0; cur <= end; cur++) {
    if (abs(list[cur]) <= end + 1) {
        list[abs(list[cur]) - 1] *= -1;
    }
}
Note that you need to read the absolute value of the integer at each position, because it might already have been negated. Also, if an integer is greater than end + 1, do not change anything, as that integer cannot be the answer.
Finally, once you have read all the positive values, iterate through them to find the first one that is currently positive. This place represents your first missing number.
Step 1: Segregate 0 and negative numbers from your list to the right. O(n)
Step 2: Using the end-of-list pointer, iterate through the list, marking relevant positions negative. O(n-k)
Step 3: Scan the numbers for the position of the first non-negative number. O(n-k)
Space complexity: the original list aside, I used 3 integers, so it is O(1).
One thing I should mention: the list [5, 4, 2, 1, 3] would end up [-5, -4, -2, -1, -3], so in this case you would choose the first number after the end position of the list, 6, as your result.
Code example for step 3:
for (cur = 0; cur <= end; cur++) {
    if (list[cur] > 0) {
        break;
    }
}
print(cur + 1);
Use this short and sweet algorithm:
A is [5, 3, 2, 7]
1- Define B with Length = A.Length (O(1))
2- Initialize B's cells with 1 (O(n))
3- For each item in A:
   if (item <= B.Length) then B[item] = -1 (O(n))
4- The answer is the smallest index in B such that B[index] != -1 (O(n))
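That is easy to express in Python; note that the auxiliary list B costs O(n) extra space, so unlike the in-place marking answers above it does not meet the constant-space requirement. A minimal sketch (my code, with B indexed 1..n+1):

def smallest_missing(A):
    n = len(A)
    B = [1] * (n + 2)    # slots 1..n+1; slot 0 unused
    for item in A:
        if item <= n:
            B[item] = -1 # mark value as present
    for index in range(1, n + 2):
        if B[index] != -1:
            return index

assert smallest_missing([5, 3, 2, 7]) == 1
assert smallest_missing([5, 3, 2, 7, 1]) == 4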

Is it possible to invert an array with constant extra space?

Let's say I have an array A with n unique elements on the range [0, n). In other words, I have a permutation of the integers [0, n).
Is possible to transform A into B using O(1) extra space (AKA in-place) such that B[A[i]] = i?
For example:
       A                     B
[3, 1, 0, 2, 4]  ->  [2, 1, 3, 0, 4]
Yes, it is possible, with an O(n^2)-time algorithm:
Take the element at index 0, then write 0 to the cell indexed by that element. Then use the just-overwritten element to get the next index, and write the previous index there. Continue until you get back to index 0. This is the cycle-leader algorithm.
Then do the same starting from index 1, 2, ... But before making any changes, perform the cycle walk without any modifications starting from that index: if the cycle contains any index below the starting index, just skip it.
Or this O(n^3)-time algorithm:
Take the element at index 0, then write 0 to the cell indexed by that element. Then use the just-overwritten element to get the next index, and write the previous index there. Continue until you get back to index 0.
Then do the same starting from index 1, 2, ... But before making any changes, perform the cycle-leader walk without modifications starting from all preceding indexes: if the current index is present in any preceding cycle, just skip it.
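The author's C++ isn't shown below, but the O(n^2) variant is short. A minimal Python sketch of it (my code, following the description above):

def invert(a):
    n = len(a)
    for start in range(n):
        # Dry run: walk the cycle without modifications; if it reaches an
        # index below `start`, this cycle was already inverted earlier.
        c = a[start]
        while c != start:
            if c < start:
                break
            c = a[c]
        else:
            # Cycle leader: write each index into the cell its value points to.
            prev, c = start, a[start]
            while c != start:
                a[c], prev, c = prev, c, a[c]
            a[start] = prev
    return a

print(invert([3, 1, 0, 2, 4]))  # [2, 1, 3, 0, 4]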
I have written a (slightly optimized) implementation of the O(n^2) algorithm in C++11 to determine how many additional accesses per element are needed on average when a random permutation is inverted. Here are the results:
size accesses
2^10 2.76172
2^12 4.77271
2^14 6.36212
2^16 7.10641
2^18 9.05811
2^20 10.3053
2^22 11.6851
2^24 12.6975
2^26 14.6125
2^28 16.0617
While size grows exponentially, number of element accesses grows almost linearly, so expected time complexity for random permutations is something like O(n log n).
Inverting an array A requires us to find a permutation B which fulfills the requirement A[B[i]] == i for all i.
To build the inverse in place, we have to swap elements and indices by setting A[A[i]] = i for each element A[i]. Obviously, if we simply iterated through A and performed this replacement, we might overwrite upcoming elements of A and our computation would fail.
Therefore, we have to swap elements and indices along cycles of A by following c = A[c] until we reach our cycle's starting index c = i.
Every element of A belongs to one such cycle. Since we have no space to store whether or not an element A[i] has already been processed and needs to be skipped, we have to follow its cycle: If we reach an index c < i we would know that this element is part of a previously processed cycle.
This algorithm has a worst-case run-time complexity of O(n²), an average run-time complexity of O(n log n) and a best-case run-time complexity of O(n).
function invert(array) {
    main:
    for (var i = 0, length = array.length; i < length; ++i) {
        // check if this cycle has already been traversed before:
        for (var c = array[i]; c != i; c = array[c]) {
            if (c <= i) continue main;
        }
        // Replace each cycle element with its predecessor's index:
        var c_index = i,
            c = array[i];
        do {
            var tmp = array[c];
            array[c] = c_index; // replace
            c_index = c;        // move forward
            c = tmp;
        } while (i != c_index)
    }
    return array;
}

console.log(invert([3, 1, 0, 2, 4])); // [2, 1, 3, 0, 4]
Example for A = [1, 2, 3, 0] :
The first element 1 at index 0 belongs to the cycle of elements 1 - 2 - 3 - 0. Once we shift indices 0, 1, 2 and 3 along this cycle, we have completed the first step.
The next element 0 at index 1 belongs to the same cycle and our check tells us so in only one step (since it is a backwards step).
The same holds for the remaining elements 1 and 2.
In total, we perform 4 + 1 + 1 + 1 'operations'. This is the best-case scenario.
Implementation of this explanation in Python:
def inverse_permutation_zero_based(A):
    """
    Swap elements and indices along cycles of A by following `c = A[c]` until we reach
    our cycle's starting index `c = i`.

    Every element of A belongs to one such cycle. Since we have no space to store
    whether or not an element A[i] has already been processed and needs to be skipped,
    we have to follow its cycle: if we reach an index c < i, we know that this
    element is part of a previously processed cycle.

    Time Complexity: O(n*n), Space Complexity: O(1)
    """

    def cycle(i, A):
        """Replace each cycle element with its predecessor's index."""
        c_index = i
        c = A[i]
        while True:
            temp = A[c]
            A[c] = c_index  # replace
            c_index = c     # move forward
            c = temp
            if i == c_index:
                break

    for i in range(len(A)):
        # check if this cycle has already been traversed before
        j = A[i]
        while j != i:
            if j <= i:
                break
            j = A[j]
        else:
            cycle(i, A)

    return A

>>> inverse_permutation_zero_based([3, 1, 0, 2, 4])
[2, 1, 3, 0, 4]
This can be done in O(n) time complexity and O(1) space if we try to store 2 numbers at a single position.
First, let's see how we can get 2 values from a single variable. Suppose we have a variable x and we want to get two values from it, 2 and 1. So,
x = n*1 + 2 , suppose n = 5 here.
x = 5*1 + 2 = 7
Now for 2, we can take the remainder of x, i.e., x % 5. And for 1, we can take the quotient of x, i.e., x / 5 (integer division).
and if we take n = 3
x = 3*1 + 2 = 5
x%3 = 5%3 = 2
x/3 = 5/3 = 1
We know here that the array contains values in range [0, n-1], so we can take the divisor as n, size of array. So, we will use the above concept to store 2 numbers at every index, one will represent old value and other will represent the new value.
       A                     B
 0  1  2  3  4         0  1  2  3  4
[3, 1, 0, 2, 4]  ->  [2, 1, 3, 0, 4]

a[0] = 3, that means a[3] = 0 in our answer.
a[a[0]] = 2  // old
a[a[0]] = 0  // new
a[a[0]] = n*new + old = 5*0 + 2 = 2
a[a[i]] = n*i + a[a[i]]
And during the array traversal, a[i] can be greater than n because we are modifying it, so we use a[i] % n to get the old value.
So the logic should be
a[a[i]%n] = n*i + a[a[i]%n]
Array -> 13 6 15 2 24
Now, to get the older values, take the remainder on dividing each value by n, and to get the new values, just divide each value by n, in this case, n=5.
Array -> 2 1 3 0 4
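A minimal Python sketch of this encoding approach (my code; it assumes A is a permutation of 0..n-1, as in the question):

def invert_inplace(a):
    n = len(a)
    for i in range(n):
        old = a[i] % n   # recover the old value even if a[i] was modified
        a[old] += n * i  # store the new value i on top of the old value
    for i in range(n):
        a[i] //= n       # keep only the new values
    return a

print(invert_inplace([3, 1, 0, 2, 4]))  # [2, 1, 3, 0, 4]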
The following approach optimizes the cycle walk when a cycle has already been handled. Elements here are 1-based, so indices are converted accordingly when accessing the given array.
#include <stdio.h>
#include <iostream>
#include <vector>
#include <bits/stdc++.h>

using namespace std;

// helper function to traverse cycles
void cycle(int i, vector<int>& A) {
    int cur_index = i + 1, next_index = A[i];
    while (next_index > 0) {
        int temp = A[next_index - 1];
        A[next_index - 1] = -(cur_index);
        cur_index = next_index;
        next_index = temp;
        if (i + 1 == abs(cur_index)) {
            break;
        }
    }
}

void inverse_permutation(vector<int>& A) {
    for (int i = 0; i < A.size(); i++) {
        cycle(i, A);
    }
    for (int i = 0; i < A.size(); i++) {
        A[i] = abs(A[i]);
    }
    for (int i = 0; i < A.size(); i++) {
        cout << A[i] << " ";
    }
}

int main() {
    // vector<int> perm = {4,0,3,1,2,5,6,7,8};
    vector<int> perm = {5,1,4,2,3,6,7,9,8};
    // vector<int> perm = {17,2,15,19,3,7,12,4,18,20,5,14,13,6,11,10,1,9,8,16};
    // vector<int> perm = {4, 1, 2, 3};
    // {6,17,9,23,2,10,20,7,11,5,14,13,4,1,25,22,8,24,21,18,19,12,15,16,3} inverts to
    // {14,5,25,13,10,1,8,17,3,6,9,22,12,11,23,24,2,20,21,7,19,16,4,18,15}
    // vector<int> perm = {6, 17, 9, 23, 2, 10, 20, 7, 11, 5, 14, 13, 4, 1, 25, 22, 8, 24, 21, 18, 19, 12, 15, 16, 3};
    inverse_permutation(perm);
    return 0;
}

Algorithm to apply permutation in constant memory space

I saw this question is a programming interview book, here I'm simplifying the question.
Assume you have an array A of length n, and you have a permutation array P of length n as well. Your method should return an array in which the elements of A are rearranged according to the indices specified in P.
Quick example: Your method takes A = [a, b, c, d, e] and P = [4, 3, 2, 0, 1]. then it will return [e, d, c, a, b]. You are allowed to use only constant space (i.e. you can't allocate another array, which takes O(n) space).
Ideas?
There is a trivial O(n^2) algorithm, but you can do this in O(n). E.g.:
A = [a, b, c, d, e]
P = [4, 3, 2, 0, 1]
We can swap each element in A with the right element required by P; after each swap, one more element will be in the right position. We do this in a circular fashion for each of the positions (swapping the elements marked with ^s):
[a, b, c, d, e] <- P[0] = 4 != 0 (where a initially was), swap 0 (where a is) with 4
^ ^
[e, b, c, d, a] <- P[4] = 1 != 0 (where a initially was), swap 4 (where a is) with 1
^ ^
[e, a, c, d, b] <- P[1] = 3 != 0 (where a initially was), swap 1 (where a is) with 3
^ ^
[e, d, c, a, b] <- P[3] = 0 == 0 (where a initially was), finish step
After one cycle, we find the next element in the array that is not in the right position, and do this again. So in the end you will get the result you want, and since each position is touched a constant number of times (for each position, at most one swap is performed), it is O(n) time.
You can store the information about which positions are already in the right place by either of the following:
set the corresponding entry in P to -1, which is unrecoverable: after the operations above, P will become [-1, -1, 2, -1, -1], which denotes that only the entry at index 2 might not be in the right position, and one further step will make sure it is and terminate the algorithm;
set the corresponding entry p in P to -p - 1: P becomes [-5, -4, 2, -1, -2], which can trivially be recovered in O(n).
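A minimal Python sketch of the second (recoverable) variant, marking each finished entry p of P as -p - 1 and restoring P at the end (my code, following the swap walk described above):

def apply_permutation(A, P):
    for i in range(len(A)):
        if P[i] < 0:
            continue                     # position already handled
        j = i
        while P[j] != i:                 # walk the cycle
            nxt = P[j]
            A[j], A[nxt] = A[nxt], A[j]  # swap the required element into place
            P[j] = -nxt - 1              # mark j as done, recoverably
            j = nxt
        P[j] = -i - 1                    # close the cycle
    for i in range(len(P)):
        P[i] = -P[i] - 1                 # restore P
    return A

A, P = list("abcde"), [4, 3, 2, 0, 1]
print(apply_permutation(A, P))  # ['e', 'd', 'c', 'a', 'b']
print(P)                        # [4, 3, 2, 0, 1]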
Yet another unnecessary answer! This one preserves the permutation array P explicitly, which was necessary for my situation, but sacrifices in cost. Also this does not require tracking the correctly placed elements. I understand that a previous answer provides the O(N) solution, so I guess this one is just for amusement!
We get best case complexity O(N), worst case O(N^2), and average case O(NlogN). For large arrays (N~10000 or greater), the average case is essentially O(N).
Here is the core algorithm in Java (I mean pseudo-code *cough cough*)
int ind = 0;
float temp = 0;

for (int i = 0; i < (n - 1); i++) {
    // get next index
    ind = P[i];
    while (ind < i)
        ind = P[ind];

    // swap elements in array
    temp = A[i];
    A[i] = A[ind];
    A[ind] = temp;
}
Here is an example of the algorithm running (similar to previous answers):
let A = [a, b, c, d, e]
and P = [2, 4, 3, 0, 1]
then expected = [c, e, d, a, b]
i=0: [a, b, c, d, e] // (ind=P[0]=2)>=0 no while loop, swap A[0]<->A[2]
^ ^
i=1: [c, b, a, d, e] // (ind=P[1]=4)>=1 no while loop, swap A[1]<->A[4]
^ ^
i=2: [c, e, a, d, b] // (ind=P[2]=3)>=2 no while loop, swap A[2]<->A[3]
^ ^
i=3a: [c, e, d, a, b] // (ind=P[3]=0)<3 uh-oh! enter while loop...
^
i=3b: [c, e, d, a, b] // loop iteration: ind<-P[0]. now have (ind=2)<3
? ^
i=3c: [c, e, d, a, b] // loop iteration: ind<-P[2]. now have (ind=3)>=3
? ^
i=3d: [c, e, d, a, b] // good index found. Swap A[3]<->A[3]
^
done.
This algorithm can bounce around in that while loop for any indices j < i, up to at most i times during the ith iteration. In the worst case (I think!) each iteration of the outer for loop would result in i extra assignments from the while loop, so we'd have an arithmetic series going on, which adds an N^2 factor to the complexity. Running this for a range of N and averaging the number of 'extra' assignments needed by the while loop (averaged over many permutations for each N), though, strongly suggests to me that the average case is O(N log N).
Thanks!
The simplest case is when a single swap takes an element to its destination index. For example: array = abcd, perm = 1032: you just need two direct swaps, the ab swap and the cd swap.
For other cases, we need to keep swapping until an element reaches its final destination. For example: abcd, 3021: starting with the first element, we swap a and d. We check whether a's destination is 0 by looking at perm[perm[0]]. It is not, so we swap a with the element at array[perm[perm[0]]], which is b. We check again whether a has reached its destination at perm[perm[perm[0]]], and this time it has, so we stop.
We repeat this for each array index.
Each cycle is applied exactly once, so every item is swapped into place with O(1) extra storage; the read-only walk that locates each cycle's smallest index adds extra steps (worst case quadratic, as in the other cycle-walk answers here).

def permute(array, perm):
    for i in range(len(array)):
        # Walk the cycle read-only first: only start work at the cycle's
        # smallest index, so each cycle is applied exactly once.
        p = perm[i]
        while p > i:
            p = perm[p]
        if p < i:
            continue
        # Swap each element into its final destination along the cycle.
        prev, p = i, perm[i]
        while p != i:
            array[prev], array[p] = array[p], array[prev]
            prev, p = p, perm[p]
    return array
#RinRisson has given the only completely correct answer so far! Every other answer has been something that required extra storage — O(n) stack space, or assuming that the permutation P was conveniently stored adjacent to O(n) unused-but-mutable sign bits, or whatever.
Here's RinRisson's correct answer written out in C++. This passes every test I have thrown at it, including an exhaustive test of every possible permutation of length 0 through 11.
Notice that you don't even need the permutation to be materialized; we can treat it as a completely black-box function OldIndex -> NewIndex:
template<class RandomIt, class F>
void permute(RandomIt first, RandomIt last, const F& p)
{
    using IndexType = std::decay_t<decltype(p(0))>;
    IndexType n = last - first;
    for (IndexType i = 0; i + 1 < n; ++i) {
        IndexType ind = p(i);
        while (ind < i) {
            ind = p(ind);
        }
        using std::swap;
        swap(*(first + i), *(first + ind));
    }
}
Or slap a more STL-ish interface on top:
template<class RandomIt, class ForwardIt>
void permute(RandomIt first, RandomIt last, ForwardIt pfirst, ForwardIt plast)
{
    assert(std::distance(first, last) == std::distance(pfirst, plast));
    permute(first, last, [&](auto i) { return *std::next(pfirst, i); });
}
You can successively put the desired element at the front of the array, then work with the remaining array of size (n-1) in the next iteration step.
The permutation array needs to be adjusted accordingly to reflect the decreasing size of the array: namely, if the element you placed at the front was found at position X, you need to decrease by one all indexes greater than or equal to X in the permutation table.
In the case of your example:
array permutation -> adjusted permutation
A = {[a b c d e]} [4 3 2 0 1]
A1 = { e [a b c d]} [3 2 0 1] -> [3 2 0 1] (decrease all indexes >= 4)
A2 = { e d [a b c]} [2 0 1] -> [2 0 1] (decrease all indexes >= 3)
A3 = { e d c [a b]} [0 1] -> [0 1] (decrease all indexes >= 2)
A4 = { e d c a [b]} [1] -> [0] (decrease all indexes >= 0)
Another example:
A0 = {[a b c d e]} [0 2 4 3 1]
A1 = { a [b c d e]} [2 4 3 1] -> [1 3 2 0] (decrease all indexes >= 0)
A2 = { a c [b d e]} [3 2 0] -> [2 1 0] (decrease all indexes >= 1)
A3 = { a c e [b d]} [1 0] -> [1 0] (decrease all indexes >= 2)
A4 = { a c e d [b]} [0] -> [0] (decrease all indexes >= 1)
The algorithm, though not the fastest, avoids extra memory allocation while still keeping track of the initial order of the elements.
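A rough Python sketch of this shrinking-window idea (my code and helper name; the original answer gives no code). The pop/insert calls model the shifting, so this costs O(n^2) element moves while allocating nothing beyond the two lists, and it consumes P as it goes:

def permute_shrinking(A, P):
    for start in range(len(A)):
        x = P.pop(0)                       # window-relative index of the desired element
        A.insert(start, A.pop(start + x))  # move it to the front of the window
        for k in range(len(P)):            # adjust the remaining indexes
            if P[k] >= x:
                P[k] -= 1
    return A

print(permute_shrinking(list("abcde"), [4, 3, 2, 0, 1]))  # ['e', 'd', 'c', 'a', 'b']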
Here is a clearer version, which takes a swapElements function that accepts indices, e.g. std::swap(Item[cycle], Item[P[cycle]]).
Essentially it runs through all elements and follows their cycles if they haven't been visited yet. Instead of the second check !visited[P[cycle]], we could also compare with the first element in the cycle, as done elsewhere above.
bool visited[n] = {0};
for (int i = 0; i < n; i++) {
    int cycle = i;
    while (!visited[cycle] && !visited[P[cycle]]) {
        swapElements(cycle, P[cycle]);
        visited[cycle] = true;
        cycle = P[cycle];
    }
}
Just a simple example of C/C++ code in addition to Ziyao Wei's answer. Code is not allowed in comments, so here it is as an answer, sorry:
for (int i = 0; i < count; ++i)
{
    // Skip to the next non-processed item
    if (destinations[i] < 0)
        continue;

    int currentPosition = i;

    // destinations[X] = Y means "an item on position Y should be at position X".
    // So we should move an item that is now at position X somewhere
    // else - swap it with the item on position Y. Then we have the right
    // item at position X, but the original X-item is now at position Y,
    // which maybe should be occupied by someone else (an item Z). So we
    // check destinations[Y] = Z and move the X-item further until we get
    // destinations[?] = X, which means that position ? should hold the item
    // from position X - which is exactly the X-item we've been kicking
    // around all this time. Loop closed.
    //
    // Each permutation has one or more such loops; they obviously
    // don't intersect, so we may mark each processed position as such
    // and, once the loop is over, go further down the array from
    // position X searching for a non-marked item to start a new loop.
    while (destinations[currentPosition] != i)
    {
        const int target = destinations[currentPosition];
        std::swap(items[currentPosition], items[target]);
        destinations[currentPosition] = -1 - target;
        currentPosition = target;
    }

    // Mark last current position as swapped before moving on
    destinations[currentPosition] = -1 - destinations[currentPosition];
}

for (int i = 0; i < count; ++i)
    destinations[i] = -1 - destinations[i];
(for C - replace std::swap with something else)
Trace back what we have already swapped by checking the index.
Java, O(N) swaps, O(1) space:
static void swap(char[] arr, int x, int y) {
    char tmp = arr[x];
    arr[x] = arr[y];
    arr[y] = tmp;
}

public static void main(String[] args) {
    int[] intArray = new int[]{4, 2, 3, 0, 1};
    char[] charArray = new char[]{'A', 'B', 'C', 'D', 'E'};

    for (int i = 0; i < intArray.length; i++) {
        int index_to_swap = intArray[i];
        // Check whether this index has already been swapped before
        while (index_to_swap < i) {
            // trace back the index
            index_to_swap = intArray[index_to_swap];
        }
        swap(charArray, index_to_swap, i);
    }
}
I agree with many solutions here, but below is a very short code snippet that applies a permutation along a single permutation cycle (the one containing index 0):
def _swap(a, i, j):
    a[i], a[j] = a[j], a[i]

def apply_permutation(a, p):
    idx = 0
    while p[idx] != 0:
        _swap(a, idx, p[idx])
        idx = p[idx]
So the code snippet below

a = list(range(1, 5))
p = [1, 3, 2, 0]
apply_permutation(a, p)
print(a)

outputs [2, 4, 3, 1].

Remove duplicates from Array without using Hash Table

I have an array which might contain duplicate elements (possibly more than two duplicates of an element). I wonder if it's possible to find and remove the duplicates in the array:
without using a hash table (strict requirement), and
without using a temporary secondary array. There are no restrictions on complexity.
P.S.: This is not a homework question; it was asked of my friend in a Yahoo technical interview.
Sort the source array. Find consecutive elements that are equal (i.e., what std::unique does in C++ land). The total complexity is N lg N, or merely N if the input is already sorted.
To remove the duplicates, you can copy elements from later in the array over elements earlier in the array, also in linear time. Simply keep a pointer to the new logical end of the container, and copy the next distinct element to that new logical end at each step. (Again, exactly like std::unique does. In fact, why not just download an implementation of std::unique and do exactly what it does? :P)
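A minimal Python sketch of that sort-then-compact idea, with the std::unique-style copy done by a write pointer to the new logical end (my code, not from the original answer):

def remove_duplicates(arr):
    arr.sort()                      # O(n log n)
    write = 0                       # new logical end of the array
    for read in range(len(arr)):
        if write == 0 or arr[read] != arr[write - 1]:
            arr[write] = arr[read]  # copy the next distinct element forward
            write += 1
    del arr[write:]                 # drop everything past the logical end
    return arr

print(remove_duplicates([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]))  # [1, 2, 3, 4, 5, 6, 9]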
O(N log N): sort the array and replace each run of consecutive equal elements with one copy.
O(N^2): run a nested loop to compare each element with the remaining elements of the array; if a duplicate is found, swap the duplicate with the element at the end of the array and decrease the array size by 1.
No restrictions on complexity.
So this is a piece of cake.

// A[1], A[2], A[3], ... A[i], ... A[n]
// O(n^2)
for(i=2; i<=n; i++)
{
    duplicate = false;
    for(j=1; j<i; j++)
        if(A[i] == A[j])
            { duplicate = true; break; }
    if(duplicate)
    {
        // "remove" A[i] by moving all elements from its right over it
        for(j=i; j<n; j++)
            A[j] = A[j+1];
        n--;
        i--; // re-check this position: a new element was just shifted into A[i]
    }
}
In-place duplicate removal that preserves the existing order of the list, in quadratic time:
for (var i = 0; i < list.length; i++) {
    for (var j = i + 1; j < list.length;) {
        if (list[i] == list[j]) {
            list.splice(j, 1);
        } else {
            j++;
        }
    }
}
The trick is to start the inner loop on i + 1 and not increment the inner counter when you remove an element.
The code is JavaScript, splice(x, 1) removes the element at x.
If order preservation isn't an issue, then you can do it quicker:
list.sort();
for (var i = 1; i < list.length;) {
if (list[i] == list[i - 1]) {
list.splice(i, 1);
} else {
i++;
}
}
Which is linear, unless you count the sort, which you should, so it's of the order of the sort -- in most cases n × log(n).
In functional languages you can combine sorting and uniquification (is that a real word?) in one pass.
Let's take the standard quick sort algorithm:
- Take the first element of the input (x) and the remaining elements (xs)
- Make two new lists
- left: all elements in xs smaller than or equal to x
- right: all elements in xs larger than x
- apply quick sort on the left and right lists
- return the concatenation of the left list, x, and the right list
- P.S. quick sort on an empty list is an empty list (don't forget base case!)
If you want only unique entries, replace
left: all elements in xs smaller than or equal to x
with
left: all elements in xs smaller than x
This is a one-pass algorithm with O(n log n) expected time.
Example implementation in F#:
let rec qsort = function
    | [] -> []
    | x::xs -> let left, right = List.partition (fun el -> el <= x) xs
               qsort left @ [x] @ qsort right

let rec qsortu = function
    | [] -> []
    | x::xs -> let left = List.filter (fun el -> el < x) xs
               let right = List.filter (fun el -> el > x) xs
               qsortu left @ [x] @ qsortu right
And a test in interactive mode:
> qsortu [42;42;42;42;42];;
val it : int list = [42]
> qsortu [5;4;4;3;3;3;2;2;2;2;1];;
val it : int list = [1; 2; 3; 4; 5]
> qsortu [3;1;4;1;5;9;2;6;5;3;5;8;9];;
val it : int list = [1; 2; 3; 4; 5; 6; 8; 9]
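The same trick reads naturally in Python, for comparison (my transliteration of qsortu, not from the original answer):

def qsortu(lst):
    if not lst:
        return []
    x, xs = lst[0], lst[1:]
    left = [el for el in xs if el < x]   # strictly smaller: drops duplicates of x
    right = [el for el in xs if el > x]
    return qsortu(left) + [x] + qsortu(right)

assert qsortu([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9]) == [1, 2, 3, 4, 5, 6, 8, 9]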
Since it's an interview question, the interviewer usually expects you to ask for clarification about the problem.
With no alternative storage allowed (that is, O(1) storage, in the sense that you'll probably use a few counters/pointers), it seems obvious that a destructive operation is expected; it might be worth pointing this out to the interviewer.
Now the real question is: do you want to preserve the relative order of the elements ? ie is this operation supposed to be stable ?
Stability hugely impact the available algorithms (and thus the complexity).
The most obvious choice is to list Sorting Algorithms, after all, once the data is sorted, it's pretty easy to get unique elements.
But if you want stability, you cannot actually sort the data (since you could not get the "right" order back), and thus I wonder whether it is solvable in less than O(N^2) if stability is required.
This doesn't use a hash table per se, though I know behind the scenes an associative array is implemented as one. Nevertheless, I thought I'd post it in case it can help. This is in JavaScript and uses an associative array to record duplicates to pass over:
function removeDuplicates(arr) {
    var results = [], dups = [];

    for (var i = 0; i < arr.length; i++) {
        // check if not a duplicate
        if (dups[arr[i]] === undefined) {
            // save for next check to indicate duplicate
            dups[arr[i]] = 1;
            // is unique. append to output array
            results.push(arr[i]);
        }
    }

    return results;
}
Let me do this in Python.
array1 = [1,2,2,3,3,3,4,5,6,4,4,5,5,5,5,10,10,8,7,7,9,10]
array1.sort()
print(array1)

current = None
count = 0
# overwrite the numbers at the front of the array
for item in array1:
    if item != current:
        array1[count] = item
        count += 1
        current = item

print(array1)          # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 5, 5, 5, 5, 6, 7, 7, 8, 9, 10, 10, 10]
print(array1[:count])  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
The most efficient method is the following (though note that dict is hash-based under the hood, so it bends the question's no-hash-table constraint):
array1 = [1,2,2,3,3,3,4,5,6,4,4,5,5,5,5,10,10,8,7,7,9,10]
array1.sort()
print(array1)
print([*dict.fromkeys(array1)])  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# OR

aa = list(dict.fromkeys(array1))
print(aa)                        # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
