Algorithm for array permutation

We have an integer array A[] of size N (1 ≤ N ≤ 10^4), which is initially the sorted array with entries 1...N. For any permutation P of size N, the array is shuffled so that the i-th entry from the left before the shuffle ends up at the Pi-th position after the shuffle. You keep repeating this shuffle until the array is sorted again.
For example, for A[] = {1, 2, 3, 4}, if P = {1, 2, 3, 4}, it would only take one move for the array to be sorted (the entries would move to their original positions). If P = {4, 3, 1, 2}, then it would take 4 moves for the array to be sorted again:
Move 0 | [1, 2, 3, 4]
Move 1 | [3, 4, 2, 1]
Move 2 | [2, 1, 4, 3]
Move 3 | [4, 3, 1, 2]
Move 4 | [1, 2, 3, 4]
The problem is to find the sum of all positive integers J for which you can generate a permutation that requires J moves to get the array sorted again.
Example:
For A[] = {1, 2, 3, 4}, you can generate permutations that require 1, 2, 3, and 4 steps:
Requires 1 move: P = {1, 2, 3, 4}
Requires 2 moves: P = {1, 3, 2, 4}
Requires 3 moves: P = {1, 4, 2, 3}
Requires 4 moves: P = {4, 3, 1, 2}
So you would output 1 + 2 + 3 + 4 = 10.
One observation I have made is that you can always generate a permutation that requires J moves for 1 ≤ J < N: in the permutation, simply shift all the entries in a range of size J by 1, which creates a single cycle of length J (see the sketch below). However, for permutations that require J moves where J ≥ N, you would need another approach.
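A tiny sketch (my own illustration, not part of the original post) of that block-shift construction, with a direct simulation to count moves:

def block_shift_permutation(N, J):
    P = list(range(1, N + 1))            # identity: entry at position i+1 stays put
    for i in range(J):
        P[i] = (i + 1) % J + 1           # positions 1..J form a single J-cycle
    return P

def moves_until_sorted(P):
    N = len(P)
    arr = list(range(1, N + 1))
    moves = 0
    while True:
        shuffled = [0] * N
        for i in range(N):
            shuffled[P[i] - 1] = arr[i]  # entry at position i+1 moves to position P[i]
        arr, moves = shuffled, moves + 1
        if arr == list(range(1, N + 1)):
            return moves

moves_until_sorted([4, 3, 1, 2])                    #=> 4, as in the example above
moves_until_sorted(block_shift_permutation(4, 3))   #=> 3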
The brute-force solution would be to check every permutation, i.e. N! permutations, which definitely wouldn't fit in the run time. I'm looking for an algorithm with run time of at most O(N^2).
EDIT 1: A permutation that requires exactly N moves is always achievable as well, for example a single N-cycle, where every entry is misplaced and not just swapped with another entry. The question becomes how to find the achievable values J > N.
EDIT 2: @ljeabmreosn made the observation that there exists a permutation that takes J steps if and only if there are natural numbers a_1, ..., a_k with a_1 + ... + a_k = N and LCM(a_1, ..., a_k) = J. So using that observation, the problem comes down to examining the partitions of the integer N. For instance, for N = 4 the partitions are {4}, {3,1}, {2,2}, {2,1,1} and {1,1,1,1}, with LCMs 4, 3, 2, 2 and 1, giving exactly the achievable values 1, 2, 3, 4 from the example above. However, enumerating all partitions won't give a quadratic algorithm - how can I compute the sum efficiently?

Sum of distinct orders of degree-n permutations.
https://oeis.org/A060179
This is the sequence you are looking for; the OEIS entry comes with a formula and some Maple code.
As is often helpful when computing an integer sequence, work out the first few values (here 1, 1, 3, 6, 10, 21) and look them up in the great "On-Line Encyclopedia of Integer Sequences".
Here is some Python code inspired by it; I think it fits your complexity goal.
def primes_upto(limit):
    # straightforward sieve of Eratosthenes
    is_prime = [False] * 2 + [True] * (limit - 1)
    for n in range(int(limit**0.5 + 1.5)):
        if is_prime[n]:
            for i in range(n*n, limit+1, n):
                is_prime[i] = False
    return [i for i, prime in enumerate(is_prime) if prime]

def sum_of_distinct_order_of_Sn(N):
    # Knapsack-style DP over prime powers: each distinct order of S_n corresponds
    # to a unique choice of prime powers (at most one power per prime) whose sum
    # is at most n, so res[n] ends up holding the sum of the distinct orders of S_n.
    primes = primes_upto(N)
    res = [1] * (N + 1)
    for p in primes:
        for n in range(N, p - 1, -1):
            pj = p
            while pj <= n:
                res[n] += res[n - pj] * pj
                pj *= p
    return res[N]
on my machine:
>%time sum_of_distinct_order_of_Sn(10000)
CPU times: user 2.2 s, sys: 7.54 ms, total: 2.21 s
Wall time: 2.21 s
51341741532026057701809813988399192987996798390239678614311608467285998981748581403905219380703280665170264840434783302693471342230109536512960230
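As a quick sanity check (my addition, not part of the original answer), the function reproduces the small values quoted above and the example from the question:

print(sum_of_distinct_order_of_Sn(4))   # 10 = 1 + 2 + 3 + 4, the question's example
print(sum_of_distinct_order_of_Sn(5))   # 21, matching the sequence values listed above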

Related

circularArrayRotation algorithm ruby

I am using HackerRank and I do not understand why my Ruby code only works for one test case out of about 20. Here is the question:
John Watson knows of an operation called a right circular rotation on
an array of integers. One rotation operation moves the last array
element to the first position and shifts all remaining elements right
one. To test Sherlock's abilities, Watson provides Sherlock with an
array of integers. Sherlock is to perform the rotation operation a
number of times then determine the value of the element at a given
position.
For each array, perform a number of right circular rotations and
return the values of the elements at the given indices.
Function Description
Complete the circularArrayRotation function in the editor below.
circularArrayRotation has the following parameter(s):
int a[n]: the array to rotate
int k: the rotation count
int queries[1]: the indices to report
Returns
int[q]: the values in the rotated array a as requested by queries
Input Format
The first line contains 3 space-separated integers, n, k, and q, the number of elements in the integer array, the rotation count and the number of queries. The second line contains n space-separated integers,
where each integer i describes array element a[i] (where 0 <= i < n). Each of the q subsequent lines contains a single integer, queries[i], an index of an element
in a to return.
Constraints
Sample Input 0
3 2 3
1 2 3
0
1
2
Sample Output 0
2
3
1
Here is my code:
def circularArrayRotation(a, k, queries)
  q = []
  while k >= 1
    m = a.pop()
    a.unshift m
    k = k - 1
  end
  for i in queries do
    v = a[queries[i]]
    q.push v
  end
  return q
end
It only works for the sample test case but I can't figure out why. Thanks for any help you can provide.
Haven't run any benchmarks, but this seems like a job for the aptly named Array.rotate() method:
def index_at_rotation(array, num_rotations, queries)
  array = array.rotate(-num_rotations)
  queries.map { |q| array[q] }
end

a = [1, 2, 3]
k = 2
q = [0, 1, 2]

index_at_rotation(a, k, q)
#=> [2, 3, 1]
Handles negative rotation values and nil results as well:
a = [1, 6, 9, 11]
k = -1
q = (1..4).to_a
index_at_rotation(a, k, q)
#=> [9, 11, 1, nil]
I don't see any errors in your code, but I would like to suggest a more efficient way of making the calculation.
First observe that after q rotations the element at index i will be at index (i+q) % n.
For example, suppose
n = 3
a = [1,2,3]
q = 5
Then after q rotations the array will be as follows.
arr = Array.new(3)
arr[(0+5) % 3] = a[0] #=> arr[2] = 1
arr[(1+5) % 3] = a[1] #=> arr[0] = 2
arr[(2+5) % 3] = a[2] #=> arr[1] = 3
arr #=> [2,3,1]
We therefore can write
def doit(n, a, q, queries)
  n.times.with_object(Array.new(n)) do |i, arr|
    arr[(i+q) % n] = a[i]
  end.values_at(*queries)
end
doit(3,[1,2,3],5,[0,1,2])
#=> [2,3,1]
doit(3,[1,2,3],5,[2,1])
#=> [1, 3]
doit(3,[1,2,3],2,[0,1,2])
#=> [2, 3, 1]
p doit(3,[1,2,3],0,[0,1,2])
#=> [1,2,3]
doit(20,(0..19).to_a,25,(0..19).to_a.reverse)
#=> [14, 13, 12, 11, 10, 9, 8, 7, 6, 5,
# 4, 3, 2, 1, 0, 19, 18, 17, 16, 15]
Alternatively, we may observe that after q rotations the element at index j was initially at index (j-q) % n.
For the earlier example, after q rotations the array will be
[a[(0-5) % 3], a[(1-5) % 3], a[(2-5) % 3]]
#=> [a[1], a[2], a[0]]
#=> [2,3,1]
We therefore could instead write
def doit(n, a, q, queries)
  n.times.map { |j| a[(j-q) % n] }.values_at(*queries)
end

Rearrange an array A so that A wins maximum number of comparisons with array B when comparison is done one-on-one

Let's say I have an array A = [3, 6, 7, 5, 3, 5, 6, 2, 9, 1] and B = [2, 7, 0, 9, 3, 6, 0, 6, 2, 6]
Rearrange the elements of array A so that when we compare element-wise, 3 with 2, 6 with 7, and so on, we get the maximum number of wins, i.e. the number of positions i (0 <= i < len(A)) with A[i] > B[i] is as large as possible.
I tried the approach below:
def optimal_reorder(A, B, N):
    tagged_A = [('d', i) for i in A]
    tagged_B = [('a', i) for i in B]
    merged = tagged_A + tagged_B
    merged = sorted(merged, key=lambda x: x[1])
    max_wins = 0
    for i in range(len(merged) - 1):
        print(i)
        if set((merged[i][0], merged[i+1][0])) == {'a', 'd'}:
            if (merged[i][0] == 'a') and (merged[i+1][0] == 'd'):
                if merged[i][1] < merged[i+1][1]:
                    print(merged[i][1], merged[i+1][1])
                    max_wins += 1
    return max_wins
as referenced from
here,
but this approach doesn't seem to give the correct answer for the given A and B, i.e. if A = [3, 6, 7, 5, 3, 5, 6, 2, 9, 1] and B = [2, 7, 0, 9, 3, 6, 0, 6, 2, 6] then the maximum number of wins is 7 but my algorithm gives 5.
Is there something I am missing here?
Revised solution, as suggested by @chqrlie:
def optimal_reorder2(A, B):
    arrA = A.copy()
    C = [None] * len(B)
    for i in range(len(B)):
        k = i + 1
        all_ele = []
        while k < len(arrA):
            if arrA[k] > B[i]:
                all_ele.append(arrA[k])
            k += 1
        if all_ele:
            e = min(all_ele)
        else:
            e = min(arrA)
        C[i] = e
        arrA.remove(e)
    return C
How about this algorithm:
start with an empty array C.
for each index i in range(len(B)):
if at least one of the remaining elements of A is larger than B[i], choose e as the smallest of these elements, otherwise choose e as the smallest element of A.
set C[i] = e and remove e from A.
C should be a reordering of A that maximises the number of true comparisons C[i] > B[i].
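A minimal sketch of this greedy (my own illustration; the function and variable names are mine): keep the remaining elements of A sorted and use bisect to find the smallest one that beats B[i]; if none exists, sacrifice the smallest remaining element.

from bisect import bisect_right

def optimal_reorder_greedy(A, B):
    remaining = sorted(A)
    C = []
    for b in B:
        j = bisect_right(remaining, b)    # first remaining element strictly greater than b
        if j < len(remaining):
            C.append(remaining.pop(j))    # smallest element that still wins against b
        else:
            C.append(remaining.pop(0))    # no winner exists, give up the smallest element
    return C

C = optimal_reorder_greedy([3, 6, 7, 5, 3, 5, 6, 2, 9, 1],
                           [2, 7, 0, 9, 3, 6, 0, 6, 2, 6])
sum(c > b for c, b in zip(C, [2, 7, 0, 9, 3, 6, 0, 6, 2, 6]))   #=> 7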
There’s probably a much better algorithm than this, but you can think of this as a maximum bipartite matching problem. Think of the arrays as the two groups of nodes in the bipartite graph, then add an edge from A[i] to B[j] if A[i] > B[j]. Then any matching tells you how to pair elements of A with elements of B such that the A element “wins” against the B element, and a maximum matching tells you how to do this to maximize the number of wins.
I’m sure there’s a better way to do this, and I’m excited to see what other folks come up with. But this at least shows you can solve this in polynomial time.
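For illustration (my own sketch, not part of the answer), the matching formulation can be checked with a simple augmenting-path (Kuhn's algorithm) matching; it is slower than the greedy above, but it confirms the same maximum number of wins.

def max_wins_by_matching(A, B):
    n = len(A)
    adj = [[j for j in range(n) if A[i] > B[j]] for i in range(n)]   # edge A[i] -> B[j] iff A[i] wins
    match_of_b = [-1] * n                                            # index in A matched to B[j], or -1

    def try_augment(i, seen):
        for j in adj[i]:
            if not seen[j]:
                seen[j] = True
                if match_of_b[j] == -1 or try_augment(match_of_b[j], seen):
                    match_of_b[j] = i
                    return True
        return False

    return sum(try_augment(i, [False] * n) for i in range(n))

max_wins_by_matching([3, 6, 7, 5, 3, 5, 6, 2, 9, 1],
                     [2, 7, 0, 9, 3, 6, 0, 6, 2, 6])   #=> 7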

Find K numbers whose product is N, keeping the maximum of the K numbers as small as possible

Basically, we are given numbers N and K, and we need to find an array of size K such that the product of the array elements is N, with the maximum of the elements being minimized.
For example:
420 3
ans: 6 7 10
explanation: 420 can be written as the product of 6, 10 and 7. It can also be written as 5 * 7 * 12, but 10 (the maximum of 6, 10 and 7) is smaller than 12 (the maximum of 5, 7 and 12).
Constraints: numbers > 0; 0 <= N < 10^6; 1 <= K <= 100
What I did so far was to first find the prime factors, but after that I can't think of an efficient way to build the sequence.
Basically, amritanshu had a pretty good idea: you take the list of prime factors and split it into a list containing the K biggest factors and another containing the remaining prime factors:
[2, 2], [3, 5, 7]
Then you multiply the biggest element of the first list with the smallest element of the second list and overwrite the element of the second list with the result. Remove the biggest element of the first list. Repeat these steps until your first list is empty:
[2, 2], [3, 5, 7]
[2], [6, 5, 7] // 5 is now the smallest element
[], [6, 10, 7]
here another example:
N = 2310 = 2 * 3 * 5 * 7 * 11
K = 3
[2, 3], [5, 7, 11]
[2], [15, 7, 11]
[], [15, 14, 11]
However, this algorithm is still not perfect for some cases, like N = 2310, K = 2:
[2, 3, 5], [7, 11]
[2, 3], [35, 11]
[2], [35, 33]
[], [35, 66] // better: [], [42, 55]
So I thought you actually want to split the factors such that they are as close as possible to the Kth root of N, and I came up with this algorithm:
calculate R, the smallest integer bigger than or equal to the Kth root of N
calculate the gcd of R and N
if the gcd is equal to R, add R to the list, call your algorithm recursively with N / R, K-1, add the result to the list and return the list
if the gcd is not equal to R, add it to R and go to step 2
Here is a little bit of Python code:
import math

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def root(N, K):
    R = int(math.exp(math.log(N) / K))
    if R ** K < N:
        R += 1
    return R

def find_factors(N, K):
    if K == 1:
        return [N]
    R = root(N, K)
    while True:
        GCD = gcd(N, R)
        if GCD == R:
            return [R] + find_factors(N // R, K-1)
        R += GCD
EDIT:
I just noticed that this algorithm still gives incorrect results in many cases. The correct way is to increment R until it divides N:
def find_factors(N, K):
    if K == 1:
        return [N]
    R = root(N, K)
    while True:
        if N % R == 0:
            return [R] + find_factors(N // R, K-1)
        R += 1
This way you don't need gcd.
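As a quick check (my addition, not from the original answer), the corrected version reproduces the examples discussed above:

print(find_factors(420, 3))    # [10, 7, 6]: maximum 10, matching the expected answer 6 7 10
print(find_factors(2310, 2))   # [55, 42]: the better split mentioned for the N = 2310, K = 2 case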
Overall, I guess you need to factorize N and then essentially use some brute-force approach to combine the prime factors into combined factors of roughly equal size. Generally, that should not be too bad, because factorizing is already the most expensive part in many cases.
Original answer (wrong) (see the comment by @gus):
Without proof of correctness, assuming N>0, K>0, in pseudo code:
Factorize N into prime factors, store into array F
find smallest integer m>=0 such that length(F) <= 2^m*K
Fill F by 1s to get size 2^m*K.
For i=m down to 1
sort F
for j=1 to 2^(i-1)*K
F[j] = F[j] * F[2^i*K+1-j] (multiply smallest with largest, and so on)
F=F[1:2^(i-1)*K] (delete upper half of F)
F contains result.
Example 420 3:
F={2,2,3,5,7}
m=1
F={1,2,2,3,5,7}
F={7,10,6} DONE
Example 2310 2:
F={2,3,5,7,11}
m=2
F={1,1,1,2,3,5,7,11} (fill to 2^m*K and sort)
F={11,7,5,6} (reduce to half)
F={5,6,7,11} (sort)
F={55, 42} DONE
Example N=17^3*72, K=3
F={2,2,2,3,3,17,17,17}
m=2
F={1,1,1,1,2,2,2,3,3,17,17,17}
F={17,17,17,3,6,4}
F={3,4,6,17,17,17}
F={51,68,102}

Arranging array into all possible pairs

I am working on a problem (in C) that requires me to list all possible connections between an even number of points, so that every point is connected to exactly one other point. For example, say I have points 1, 2, 3, and 4:
1 - 2, 3 - 4
1 - 3, 2 - 4
1 - 4, 2 - 3
The order of the points doesn't matter (1 - 2 is same as 2 - 1), and the order of connections doesn't too (1 - 2, 3 - 4 same as 3 - 4, 1 - 2).
I am currently trying to simply arrange the array, such as {1, 2, 3, 4}, into all possible orderings and check whether each one has already been generated. However, this can be very expensive, and the ordering of the points and of the pairs also needs to be disregarded.
What would be a better way to arrange an array into all possible pairs? A basic outline of an algorithm would be appreciated!
Edit: in the example above with {1, 2, 3, 4}, if pairings are represented as two adjacent elements in the array, all possible outcomes would be:
{1, 2, 3, 4}: 1 - 2, 3 - 4
{1, 3, 2, 4}: 1 - 3, 2 - 4
{1, 4, 2, 3}: 1 - 4, 2 - 3
I would need the entire arranged array to perform calculations based on all the connections.
This can be accomplished by nondeterministically pairing the rightmost unpaired element and recursing. In C:
void enum_matchings(int n, int a[static n]) {
    if (n < 2) {
        // do something with the matching: the pairs now occupy adjacent positions of the array
        return;
    }
    for (int i = 0; i < n-1; i++) {
        // pair the rightmost unpaired element a[n-1] with a[i] by swapping a[i] into position n-2
        int t = a[i];
        a[i] = a[n-2];
        a[n-2] = t;
        enum_matchings(n-2, a);
        // undo the swap before trying the next partner
        a[n-2] = a[i];
        a[i] = t;
    }
}
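A quick Python rendering of the same backtracking idea (my own sketch, not part of the original C answer): the last element is paired in turn with each earlier element, and the remaining prefix is matched recursively. For {1, 2, 3, 4} it prints the three pairings from the question.

def enum_matchings_py(a, pairs=()):
    if len(a) < 2:
        print(", ".join(f"{x} - {y}" for x, y in pairs))
        return
    last = a[-1]
    for i in range(len(a) - 1):
        rest = a[:i] + a[i+1:-1]                          # drop a[i] and the last element
        enum_matchings_py(rest, pairs + ((a[i], last),))

enum_matchings_py([1, 2, 3, 4])
# 1 - 4, 2 - 3
# 2 - 4, 1 - 3
# 3 - 4, 1 - 2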

Find the length of the longest contiguous sub-array in a sorted array in which the difference between the end and start values is at most k

I have a sorted array, for example
[0, 0, 3, 6, 7, 8, 8, 8, 10, 11, 13]
Here, let's say k = 1 so the longest sub-array is [7, 8, 8, 8] with length = 4.
As another example, consider [0, 0, 0, 3, 6, 9, 12, 12, 12, 12] with k = 3. Here the longest sub-array is [9, 12, 12, 12, 12] with length = 5.
So far, I have used an O(n log n) binary-search approach which iterates over indices 0 .. n - 1 and, for each starting index, finds the rightmost index that satisfies the condition.
Is there a linear time algorithm to do this?
Yes, there is a linear-time algorithm. You can use the two-pointers technique. Here is pseudo code:
R = 0
res = 0
for L = 0 .. N - 1:
    while R < N and a[R] - a[L] <= k:
        R += 1
    res = max(res, R - L)
It has O(n) time complexity because L and R only move forward and each of them can be incremented at most n times.
Why is this algorithm correct? For a fixed L, after the inner loop R is the index of the first element such that a[R] - a[L] > k (or N if no such element exists). That's why R - 1 is the index of the last element that fits, and the length of the subarray [L, R - 1] is exactly R - L. By iterating over all possible values of L, every candidate subarray is checked, so the algorithm always finds the correct answer.
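A direct Python version of this pseudo code (my own sketch; the function name is mine), checked against the two examples from the question:

def longest_within_k(a, k):
    res = 0
    R = 0
    n = len(a)
    for L in range(n):
        while R < n and a[R] - a[L] <= k:
            R += 1                   # stop at the first index with a[R] - a[L] > k
        res = max(res, R - L)
    return res

print(longest_within_k([0, 0, 3, 6, 7, 8, 8, 8, 10, 11, 13], 1))   # 4  ([7, 8, 8, 8])
print(longest_within_k([0, 0, 0, 3, 6, 9, 12, 12, 12, 12], 3))     # 5  ([9, 12, 12, 12, 12])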
