Suppose you have a sequence of numbers, e.g., {1, 2, 3, 4, 5}, from which you want all the permutations without repetition, choosing two elements.
Thus, {1,2}, {1,3}, {1,4}, {1,5}, {2,1}, {2,3}, {2,4} ...
Given the index of a permutation, e.g., the 6th permutation, is there some easy approach to calculate what that permutation (here {2,4}, using zero-based indexing) looks like?
I see approaches for combinations like: https://math.stackexchange.com/questions/1368526/fast-way-to-get-a-combination-given-its-position-in-reverse-lexicographic-or
I am looking for something similar for my problem. I am using C++.
I have looked at Gray codes, Lehmer codes, combinadics, factoradics, etc., but none of these seem quite right.
Thanks for any help.
Imagine the pairs filling a grid, wrapping to a new row each time the first number changes. There will be sequence.length - 1 columns.
{1,2}, {1,3}, {1,4}, {1,5},
{2,1}, {2,3}, {2,4}, {2,5},
{3,1}, {3,2}, {3,4}, {3,5},
{4,1}, {4,2}, {4,3}, {4,5},
{5,1}, {5,2}, {5,3}, {5,4},
Find the row and column of the permutation number and then look up the values from the sequence.
val s   // sequence
val p   // 0-based permutation number
val row = p / (s.length - 1)   // integer division (round down)
var col = p % (s.length - 1)   // remainder
if (col >= row) {
    col = col + 1   // skip the diagonal to prevent a repeated element
}
val pair = { s[row], s[col] }
Example:
val s = {1, 2, 3, 4, 5} //sequence
val p = 6
row = 1 (6 / 4 rounded down)
col = 2 (remainder of 6 / 4)
col -> 3 (incremented because col >= row, to prevent a repeat)
val pair = { 2, 4 }
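The question mentions C++, but the arithmetic above translates directly into any language. Here is a minimal Python sketch of the grid lookup, just to make the index math concrete (the function name pair_at is mine):

def pair_at(s, p):
    # return the p-th (0-based) ordered pair without repetition from sequence s
    n = len(s)
    row, col = divmod(p, n - 1)
    if col >= row:
        col += 1   # skip the diagonal so the two indices never coincide
    return (s[row], s[col])

# example from above: p = 6 over {1, 2, 3, 4, 5}
print(pair_at([1, 2, 3, 4, 5], 6))   # -> (2, 4)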
I am going through this program mentioned here.
Given an array arr[] of N integers. The task is to count the total number of subarrays of the given array such that the difference between the consecutive elements in the subarrays is one. That is, for any index i in the subarrays, arr[i+1] - arr[i] = 1.
Examples:
Input : arr[] = {1, 2, 3}
Output : 3
The subarrays are {1, 2}, {2, 3} and {1, 2, 3}.
Input : arr[] = {1, 2, 3, 5, 6, 7}
Output : 6
Efficient Approach: An efficient approach is to observe that in an array of length K, the total number of subarrays of size greater than 1 is K*(K-1)/2.
So, the idea is to traverse the array by using two pointers to calculate subarrays with consecutive elements in a window of maximum length and then calculate all subarrays in that window using the above formula.
Below is the step-by-step algorithm:
Take two pointers, say fast and slow, to maintain a window of consecutive elements.
Start traversing the array.
If consecutive elements differ by 1, increment only the fast pointer.
Else, calculate the length of the current window between the indexes fast and slow.
My question is about the statement "An efficient approach is to observe that in an array of length K, the total number of subarrays of size greater than 1 is K*(K-1)/2". How is this formula K*(K-1)/2 derived?
The number of subarrays of size 1 is K.
The number of subarrays of size 2 is K-1.
(We need to select subarrays of size 2, hence we can have pairs with the indices (0,1), (1,2), ..., (K-2,K-1). In total we can have K-1 such pairs.)
The number of subarrays of size 3 is K-2.
...
...
The number of subarrays of size K is 1
So, the number of subarrays of size greater than 1 is
= (K-1) + (K-2) + (K-3) + ... + 1
= (K + (K-1) + (K-2) + ... + 1) - K   // adding and removing K
= K(K+1)/2 - K
= K(K-1)/2
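To connect the formula back to the two-pointer algorithm quoted in the question, here is a minimal Python sketch of that approach (the function name is mine; it only illustrates the idea, it is not the exact program being discussed):

def count_consecutive_subarrays(arr):
    # count subarrays of size > 1 whose consecutive elements differ by exactly 1
    total = 0
    slow = 0
    for fast in range(len(arr)):
        # close the window when the run of consecutive elements ends
        if fast + 1 == len(arr) or arr[fast + 1] - arr[fast] != 1:
            k = fast - slow + 1           # length of the maximal window
            total += k * (k - 1) // 2     # subarrays of size > 1 inside it
            slow = fast + 1               # start a new window
    return total

print(count_consecutive_subarrays([1, 2, 3]))            # 3
print(count_consecutive_subarrays([1, 2, 3, 5, 6, 7]))   # 6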
I use rand to pick three random elements from a and add these values m to array, but I want them to be unique, so array can't be like this: array = [1, 1, 2]. How can I check when two elements are equal, and how can I prevent this, other than with the sample method? I was thinking about this: let's assume m = 1 the first time the times block runs. If m = 1 the second time, I want to skip this value and pick a different one. Is there a way to code this, or maybe a different approach altogether?
a = [1, 2, 3, 4]
array = []
3.times do
  m = a[rand(a.size)]
  array << m
end
Use shuffle and slice 3 elements:
a = [1, 2, 3, 4]
shuffled = a.shuffle[0..2]
As I understand it, you wish to write a method similar to Array#sample that returns a pseudo-random sample of a given size without replacement. I suggest the following, which I believe is relatively efficient, particularly when the sample size is small or large relative to the size of the array.
def sample(arr, sample_size)
  n = arr.size
  raise ArgumentError if n < sample_size
  a = arr.dup
  m = (sample_size < n/2) ? sample_size : n - sample_size
  m.times do
    i = rand(n)
    n -= 1
    a[i], a[n] = a[n], a[i]   # move a randomly chosen element to the tail
  end
  n = arr.size
  (sample_size < n/2) ? a[n-sample_size..] : a[0, sample_size]
end
a = [7, 5, 7, 1, 9, 6, 2, 0, 6, 7]
Notice that if sample_size >= arr.size/2 I sample arr.size - sample_size elements and return the unsampled elements.
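For readers more comfortable with Python, here is a rough transliteration of the same partial Fisher-Yates idea (the function name is mine; this is a sketch of the approach, not the original code):

import random

def sample_without_replacement(arr, sample_size):
    # partial Fisher-Yates: only min(sample_size, n - sample_size) swaps are made
    n = len(arr)
    if sample_size > n:
        raise ValueError("sample larger than population")
    a = list(arr)
    m = sample_size if sample_size < n // 2 else n - sample_size
    end = n
    for _ in range(m):
        i = random.randrange(end)
        end -= 1
        a[i], a[end] = a[end], a[i]   # move a randomly chosen element into the tail
    if sample_size < n // 2:
        return a[n - sample_size:]    # the tail holds the sampled elements
    return a[:sample_size]            # the tail holds the unsampled elements; return the rest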
We have an integer array A[] of size N (1 ≤ N ≤ 10^4), which is originally a sorted array with entries 1...N. For any permutation P of size N, the array is shuffled so that the i-th entry from the left before the shuffle ends up at the P_i-th position after the shuffle. You would keep repeating this shuffle until the array is sorted again.
For example, for A[] = {1, 2, 3, 4}, if P = {1, 2, 3, 4}, it would only take one move for the array to be sorted (the entries would move to their original positions). If P = {4, 3, 1, 2}, then it would take 4 moves for the array to be sorted again:
Move 0 | [1, 2, 3, 4]
Move 1 | [3, 4, 2, 1]
Move 2 | [2, 1, 4, 3]
Move 3 | [4, 3, 1, 2]
Move 4 | [1, 2, 3, 4]
The problem is to find the sum of all positive integers J for which you can generate a permutation that requires J moves to get the array sorted again.
Example:
For A[] = {1, 2, 3, 4}, you can generate permutations that require 1, 2, 3, and 4 steps:
Requires 1 move: P = {1, 2, 3, 4}
Requires 2 moves: P = {1, 3, 2, 4}
Requires 3 moves: P = {1, 4, 2, 3}
Requires 4 moves: P = {4, 3, 1, 2}
So you would output 1 + 2 + 3 + 4 = 10.
One observation I have made is that you can always generate a permutation that requires J moves for 1 ≤ J < N: in the permutation, you simply cyclically shift by 1 all the entries in a range of size J. However, for permutations that require J moves where J ≥ N, you would need another algorithm.
The brute-force solution would be to check all N! permutations, which definitely wouldn't fit in the run time. I'm looking for an algorithm with run time at most O(N^2).
EDIT 1: A permutation that requires N moves will always be guaranteed as well, as you can create a permutation where every entry is misplaced, and not just swapped with another entry. The question becomes how to find permutations where J > N.
EDIT 2: #ljeabmreosn made the observation that there exists a permutation that takes J steps if and only if there are natural numbers a_1, ..., a_k with a_1 + ... + a_k = N and LCM(a_1, ..., a_k) = J. Using that observation, the problem comes down to finding all partitions of the array, or partitions of the integer N. However, that won't be a quadratic algorithm - how can I find them efficiently?
Sum of distinct orders of degree-n permutations.
https://oeis.org/A060179
This is the number you are looking for, with a formula and some Maple code.
As often when trying to compute an integer sequence, compute the first few values (here 1, 1, 3, 6, 10, 21) and look for it in the great "On-line Encyclopedia of Integer Sequences".
Here is some Python code inspired by it; I think it fits your complexity goals.
def primes_upto(limit):
    # simple sieve of Eratosthenes: all primes <= limit
    is_prime = [False] * 2 + [True] * (limit - 1)
    for n in range(int(limit**0.5 + 1.5)):
        if is_prime[n]:
            for i in range(n*n, limit+1, n):
                is_prime[i] = False
    return [i for i, prime in enumerate(is_prime) if prime]

def sum_of_distinct_order_of_Sn(N):
    primes = primes_upto(N)
    res = [1]*(N+1)   # res[n] ends up as the sum of distinct orders of degree-n permutations
    for p in primes:
        for n in range(N, p-1, -1):
            pj = p
            while pj <= n:   # include the prime power p^j: it uses pj positions and multiplies the order by pj
                res[n] += res[n-pj] * pj
                pj *= p
    return res[N]
on my machine:
>%time sum_of_distinct_order_of_Sn(10000)
CPU times: user 2.2 s, sys: 7.54 ms, total: 2.21 s
Wall time: 2.21 s
51341741532026057701809813988399192987996798390239678614311608467285998981748581403905219380703280665170264840434783302693471342230109536512960230
I'm building a decision tree algorithm. Sorting is very expensive in this algorithm because for every split I need to sort each column. So at the beginning, even before tree construction, I'm presorting the variables: I create a matrix, and for each column in the matrix I save its ranking. Then when I want to sort the variable in some split I don't actually sort it but use the presorted ranking array. The problem is that I don't know how to do this in a space-efficient manner.
A naive solution is below. This is only for 1 variable (v) and 1 split (split_ind).
import numpy as np
v = np.array([60,70,50,10,20,0,90,80,30,40])
sortperm = v.argsort() #1 sortperm = array([5, 3, 4, 8, 9, 2, 0, 1, 7, 6])
rankperm = sortperm.argsort() #2 rankperm = array([6, 7, 5, 1, 2, 0, 9, 8, 3, 4])
split_ind = np.array([3,6,4,8,9]) # this is my split (random)
# split v and sortperm
v_split = v[split_ind] # v_split = array([10, 90, 20, 30, 40])
rankperm_split = rankperm[split_ind] # rankperm_split = array([1, 9, 2, 3, 4])
vsorted_dummy = np.ones(10)*-1 #3 allocate "empty" array[N]
vsorted_dummy[rankperm_split] = v_split
vsorted = vsorted_dummy[vsorted_dummy!=-1] # vsorted = array([ 10., 20., 30., 40., 90.])
Basically I have 2 questions:
Is double sorting necessary to create the ranking array? (#1 and #2)
In line #3 I'm allocating array[N]. This is very inefficient in terms of space, because even if the split size n << N I have to allocate the whole array. The problem here is how to calculate rankperm_split. In the example, the original rankperm_split = [1,9,2,3,4], while it should really be [1,5,2,3,4]. This problem can be reformulated: I want to create a "dense" integer array that has a maximum gap of 1 and keeps the ranking of the array intact.
UPDATE
I think the second point is the key here. The problem can be redefined as follows:
A[N] - array of size N
B[N] - array of size N
I want to transform array A to array B so that:
The ranking of the elements stays the same (for each pair i, j: if A[i] < A[j] then B[i] < B[j]).
Array B contains only elements from 1 to N, and each element is unique.
A few examples of this transformation:
[3,4,5] => [1,2,3]
[30,40,50] => [1,2,3]
[30,50,40] => [1,3,2]
[3,4,50] => [1,2,3]
A naive implementation (with sorting) can be defined like this (in Python):
def remap(a):
    a_ = sorted(a)
    b = [a_.index(e)+1 for e in a]
    return b
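For what it's worth, the same transformation can also be written with the double-argsort trick from the top of the question, applied to the split alone. This is only a vectorized sketch of the naive remap (it still sorts the split), not a solution to the space question:

import numpy as np

v = np.array([60, 70, 50, 10, 20, 0, 90, 80, 30, 40])
split_ind = np.array([3, 6, 4, 8, 9])

v_split = v[split_ind]                        # array([10, 90, 20, 30, 40])
dense_rank = v_split.argsort().argsort() + 1  # array([1, 5, 2, 3, 4]) -- ranks within the split only
vsorted = np.sort(v_split)                    # array([10, 20, 30, 40, 90])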
Let's say I have a fixed-size array. I want to fill the array with 1s and 2s so that all the elements sum up to X.
Example:
Required sum = 12
Array size = 7
Possible combinations:
array( 1, 2, 2, 2, 1, 2, 2 ) // sums to 12
array( 1, 1, 2, 2, 2, 2, 2 ) // sums to 12
Find the number of 2's in the array; this number is:
#2's = X - array_size
Choose any #2's elements (for example the first ones), give them the value 2, and give the rest of the elements the value 1.
Note: it is easy to see that if X < array_size or X > 2*array_size there is no solution to the problem (and obviously the above algorithm will fail).
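A minimal Python sketch of this procedure, assuming the positions of the 2s are chosen with random.sample (the function name fill_ones_and_twos is mine):

import random

def fill_ones_and_twos(array_size, x):
    # random array of 1s and 2s of length array_size that sums to x
    if not array_size <= x <= 2 * array_size:
        raise ValueError("no solution")
    num_twos = x - array_size   # each 2 contributes one extra unit above the all-1s sum
    result = [1] * array_size
    for i in random.sample(range(array_size), num_twos):
        result[i] = 2           # upgrade num_twos random positions to 2
    return result

print(fill_ones_and_twos(7, 12))   # e.g. [2, 1, 2, 1, 2, 2, 1], sums to 12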