Permutation of numbers by desired order - arrays

I want to write an algorithm that permutes a list of distinct numbers in a specific order.
Example:
The numbers are
1 2 3 4
Order for permutation is
3 1 4 2
i.e. after the permutation the first number goes to the third place, the second to the first place, the third to the fourth place, and the fourth to the second place.
Now the sequence for the numbers will be
2 4 1 3
Now if the algorithm keeps permuting by the same order, then after some iterations it will reproduce the original sequence of input numbers and stop. For this case the total number of iterations is 4.
2 4 1 3
4 3 2 1
3 1 4 2
1 2 3 4
I am doing this by taking another array tmp[] along with the two arrays number[] and order[]. I copy the elements of number[] into tmp[], placing each element at the position given for it by order[], and check for the same number sequence before the next iteration. If another iteration is needed, then
number[] = tmp[] and the algorithm repeats the previous steps.
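A rough sketch of this approach in Python (assuming order[] is 1-based as in the example; the function name count_iterations is just for illustration):

    # Sketch of the approach described above: repeatedly permute number[]
    # via tmp[] until the original sequence reappears (order[] is 1-based).
    def count_iterations(number, order):
        n = len(number)
        original = list(number)
        iterations = 0
        while True:
            tmp = [0] * n
            for i in range(n):
                tmp[order[i] - 1] = number[i]   # element i moves to position order[i]
            number = tmp
            iterations += 1
            if number == original:
                return iterations

    print(count_iterations([1, 2, 3, 4], [3, 1, 4, 2]))   # -> 4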
Now if the number of elements is large, e.g. 10^7 or higher, this method runs slowly.
Is there any better solution to find the number of iterations?

If you want to generate the permutations themselves, your solution is already optimal because its complexity equals the size of the output.
However, if you are just interested in the number of distinct permutations you can generate, you can do much better:
decompose your "order" into cycles: for instance 3 1 4 2 is one cycle 1 -> 3 -> 4 -> 2 -> 1, but 2 1 4 3 is two cycles 1 -> 2 -> 1 and 3 -> 4 -> 3.
The number of distinct permutations is lcm(n1, …, np), where n1, …, np are the lengths of the cycles and lcm is the least common multiple.
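A minimal sketch of this in Python (the function name iterations_until_identity is illustrative; order is the 1-based permutation from the question):

    # Decompose the order into cycles and return the lcm of their lengths,
    # i.e. the number of applications before the original sequence reappears.
    from math import gcd

    def iterations_until_identity(order):
        n = len(order)
        seen = [False] * n
        result = 1
        for start in range(n):
            if seen[start]:
                continue
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = order[i] - 1          # follow the cycle (1-based -> 0-based)
                length += 1
            result = result * length // gcd(result, length)   # lcm so far
        return result

    print(iterations_until_identity([3, 1, 4, 2]))   # one 4-cycle  -> 4
    print(iterations_until_identity([2, 1, 4, 3]))   # two 2-cycles -> 2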

Related

Correctness of fast small order statistic algorithm for odd-length array

Problem 9-3 of the textbook Intro to Algorithms (CLRS) describes a fast O(n) algorithm for finding the k-th order statistic (k-th element in the array when sorted) of a length-n array, for the particular case that k is much smaller than n. I am not certain about the correctness of this algorithm when n is odd, and want to see a way to prove that it is correct.
The basic idea is that we first split the array into two halves, the first with floor(n/2) elements, and the second with ceil(n/2) elements. Then, we "partner" each element in the first half with the corresponding element in the second half. When n is odd this leaves a remaining unpartnered element.
For each pair of partners, we make sure that the left partner is >= the right partner, swapping the two if not. Then, recursively find the k-th order statistic of the second half, mirroring any swaps made in the second half with corresponding swaps in the first half. After this, the k-th order statistic of the entire array must be either in the first k elements in the first half, or the first k elements in the second half.
My confusion comes from the case when the array length n is odd, and there is a lone element in the second half that has no partner. Since the recursion is performed on the second half, consisting of the last ceil(n/2) elements of the array, including the lone partnerless last element, and we are supposed to mirror all swaps made in the second half with swaps of the corresponding partners in the first half, it is unclear what to do when one of the swaps involves the final element, since it has no partner.
The textbook doesn't seem to take particular care on this issue, so I'm assuming that when a swap involves the final element, we just don't make any mirror move of the partner in the first half at all. As a result, the final element simply "steals" the partner of whoever it got swapped with. However, in this case, is there an easy way to see if the algorithm is still correct? What if, when the last element steals someone else's partner, that partner is actually the k-th order statistic, and gets swapped later on to an inaccessible location? The mechanics of the recursion and partitioning involved in order-statistic selection are sufficiently opaque to me that I cannot confidently rule out that scenario.
I don't think your description of the algorithm is entirely accurate (but then the explanation you linked to is far from clear). As I understand it, the reason why the algorithm is correct for an odd-length array is as follows:
Let's first look at a few examples of even-length arrays, with n=10 and k=3 (i.e. we're looking for the third-smallest element, which is 2):
a. 5 2 7 6 1 9 3 8 4 0
b. 5 1 7 6 2 9 3 8 4 0
c. 5 0 7 6 2 9 3 8 4 1
d. 5 0 7 6 2 9 3 8 1 4
If we split the arrays into two parts, we get:
a. 5 2 7 6 1 | 9 3 8 4 0
b. 5 1 7 6 2 | 9 3 8 4 0
c. 5 0 7 6 2 | 9 3 8 4 1
d. 5 0 7 6 2 | 9 3 8 1 4
and these couples:
a. (5,9) (2,3) (7,8) (6,4) (1,0) <- 0 coupled with 1
b. (5,9) (1,3) (7,8) (6,4) (2,0) <- 0 coupled with 2
c. (5,9) (0,3) (7,8) (6,4) (2,1) <- 1 coupled with 2
d. (5,9) (0,3) (7,8) (6,1) (2,4) <- 0, 1 and 2 not coupled with each other
After comparing and swapping the couples so that their smallest element is in the first group, we get:
a. 5 2 7 4 0 | 9 3 8 6 1
b. 5 1 7 4 0 | 9 3 8 6 2
c. 5 0 7 4 1 | 9 3 8 6 2
d. 5 0 7 1 2 | 9 3 8 6 4
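A small Python sketch of just this couple-and-swap step for an even-length array, reproducing row a. above (the function name is mine):

    def couple_and_swap(arr):
        # Couple element i of the first half with element i of the second half
        # and keep the smaller of each couple in the first half.
        half = len(arr) // 2
        first, second = arr[:half], arr[half:]
        for i in range(half):
            if first[i] > second[i]:
                first[i], second[i] = second[i], first[i]
        return first + second

    print(couple_and_swap([5, 2, 7, 6, 1, 9, 3, 8, 4, 0]))
    # -> [5, 2, 7, 4, 0, 9, 3, 8, 6, 1]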
You'll see that the smallest element 0 will always be in the first group. The second-smallest element 1 will be either in the first group, or in the second group if it was coupled with the smallest element 0. The third-smallest element 2 will be either in the first group, or in the second group if it was coupled with either the smallest element 0 or the second-smallest element 1.
So the smallest element is in the first group, and the second- and third-smallest elements can be in either group. That means that the third-smallest element is either one of the 3 smallest elements in the first group, or one of the 2 (!) smallest elements in the second group.
a. 5 2 7 4 0 9 3 8 6 1 -> 0 2 4 + 1 3
b. 5 1 7 4 0 9 3 8 6 2 -> 0 1 4 + 2 3
c. 5 0 7 4 1 9 3 8 6 2 -> 0 1 4 + 2 3
d. 5 0 7 1 2 9 3 8 6 4 -> 0 1 2 + 3 4
So if we say that the k-th smallest element of the whole array is now one of the k smallest elements in either of the groups, there is an available spot in the second group, and that's why, in an odd-length array, we'd add the uncoupled element to the second group. Whether or not the uncoupled element is the element we're looking for, it will certainly be one of the k smallest elements in either of the groups.
It is in fact more correct to say that the k-th smallest element is either one of the k smallest elements in the first group, or one of the k/2+1 smallest elements in the second group. I'm actually not sure that the algorithm is optimal, or even correct. There's a lot of repeated comparing and swapping going on, and the idea of keeping track of the couples and swapping elements in one group when their corresponding elements in the other group are swapped doesn't seem to make sense.

Help understanding Fibonacci Search

On the internet I only find code for the algorithm, but I need to understand it in text form first because I have trouble understanding things from code alone. And the other descriptions of the algorithm (on Wikipedia and other sites) are very complicated for me.
Here is what I understand so far:
Let's say we want to search the array for the element 10:
Index i: 0 1 2 3 4
Value:   2 3 4 10 40
Some fibonacci number here:
Index j: 0 1 2 3 4 5 6 7 8 9
F(j):    0 1 1 2 3 5 8 13 21 34
The first thing we do is find the Fibonacci number that is greater than or equal to the array length. The array length is 5, so we take the Fibonacci number 5, which is at index position j = 5.
But where do we divide the array now, and how do we continue? I really don't understand it. Please help me understand it for my exam...
The algorithm goes in the following way:
The length of the array is 5, so the smallest Fibonacci number greater than or equal to 5 is 5. The two preceding numbers in the Fibonacci sequence are 2 [n-2] and 3 [n-1] (2, 3, 5).
So arr[n-2], i.e. arr[2], is compared with the number to be searched, which is 10.
If the element is smaller than the number, the sequence is shifted one place to the left. Also, the current index is saved to provide an offset for the next iteration. In this case, since 4 is smaller, n-2 becomes 1 (1, 2, 3). arr[1 + 2 (prev)] = arr[3] = 10. So the index of the number is 3.
If the element is larger, the sequence is shifted two places to the left.
At each step, the element at index min(n-2 + offset, n) is compared with the number to find the matching result.
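For reference, here is a compact Python sketch of the whole procedure (a standard formulation of Fibonacci search; the variable names are mine):

    def fibonacci_search(arr, x):
        """Return the index of x in the sorted array arr, or -1 if not found."""
        n = len(arr)
        fib2, fib1 = 0, 1          # F(k-2), F(k-1)
        fib = fib1 + fib2          # F(k), grown to the smallest Fibonacci number >= n
        while fib < n:
            fib2, fib1 = fib1, fib
            fib = fib1 + fib2
        offset = -1                # end of the eliminated prefix of the array
        while fib > 1:
            i = min(offset + fib2, n - 1)   # probe position
            if arr[i] < x:                  # shift the Fibonacci window one step down
                fib, fib1 = fib1, fib2
                fib2 = fib - fib1
                offset = i
            elif arr[i] > x:                # shift the Fibonacci window two steps down
                fib, fib1 = fib2, fib1 - fib2
                fib2 = fib - fib1
            else:
                return i
        if fib1 and offset + 1 < n and arr[offset + 1] == x:
            return offset + 1               # last remaining candidate
        return -1

    print(fibonacci_search([2, 3, 4, 10, 40], 10))   # -> 3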

Algorithm to divide array of length n containing numbers from 1 to n (no repetition) into two equal-sum lists

You are given an array of length N, and the numbers in the array are 1 to N with no repetition. You need to check if the array can be divided into two lists of equal sum.
I know it can be solved as a subset sum problem, but its time complexity is too high.
Is there an algorithm that can reduce the time complexity?
As per your requirements, we conclude the array will always contain the numbers 1 to N.
So if Array.Sum() is even, the answer is YES; otherwise NO.
Since the sum of elements from 1 to n equals n*(n+1)/2, you have to check if n*(n+1) is a multiple of 4, which is equivalent to checking if n is a multiple of 4, or if n+1 is a multiple of 4. The complexity of it is O(1).
If this condition is met, the two subsets are:
if n is a multiple of 4: take the odd numbers of the first half together with the even numbers of the second half on one hand, and the even numbers of the first half with the odd numbers of the second half on the other.
For instance (n = 12): 1 3 5 8 10 12, and 2 4 6 7 9 11.
if n = 3 modulo 4: almost the same thing; just split the first 3 numbers, with 1 and 2 on one hand and 3 on the other, and you are left with a remaining series whose size is a multiple of 4.
For instance: 1 2 4 7, and 3 5 6; or if you prefer, 3 4 7, and 1 2 5 6.
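Here is a rough Python sketch combining the O(1) feasibility check with an explicit construction; for the n = 3 modulo 4 case it pairs the remaining numbers in blocks of four (which reproduces the example above), and the function name is just illustrative:

    def equal_sum_split(n):
        """Split 1..n into two equal-sum lists, or return None when impossible."""
        if n % 4 == 0:
            half = n // 2
            first = range(1, half + 1)
            second = range(half + 1, n + 1)
            a = [x for x in first if x % 2 == 1] + [x for x in second if x % 2 == 0]
            b = [x for x in first if x % 2 == 0] + [x for x in second if x % 2 == 1]
            return a, b
        if n % 4 == 3:
            a, b = [1, 2], [3]
            for start in range(4, n + 1, 4):   # blocks of four: {start, start+3} vs {start+1, start+2}
                a += [start, start + 3]
                b += [start + 1, start + 2]
            return a, b
        return None                            # n*(n+1)/2 is odd, no split exists

    print(equal_sum_split(12))   # ([1, 3, 5, 8, 10, 12], [2, 4, 6, 7, 9, 11])
    print(equal_sum_split(7))    # ([1, 2, 4, 7], [3, 5, 6])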

Smallest "n" sums from n arrays

I was trying to do my friend's problem set from a few years ago to sharpen up my knowledge about data structures etc. I came across this problem, and I'm not really sure where to start. Hopefully someone could help me out!
We are given n unsorted arrays, each of which has n elements. Example:
3 1 2
7 6 9
4 9 12
Now, say we take one element from each array and add them up. Let's just call the sum of these elements an "n-sum".
I need to devise an algorithm that gives us the n smallest "n-sums" (duplicates are allowed).
In the example above, the answer would be:
11, 12, 12
# 11 comes from: 1 (first array) + 6 (second array) + 4 (third array)
# 12 comes from: 2 (first array) + 6 (second array) + 4 (third array)
# 12 comes from: 1 (first array) + 7 (second array) + 4 (third array)
One of the suggestions given was to use a priority queue.
Thanks!
The time is at least O(n^2): you must visit all array elements, because if all elements were equal to 1000 except one in each row being 0, you would have to look at the n elements equal to 0, or you couldn't find the smallest sum.
Sort each row, taking O(n^2 log n) steps. In each row, subtract the first element from all elements in the row, so the first element in each row is 0; after you have found the smallest sums you can compensate for that. Your example turns into
3 1 2 -> 1 2 3 -> 0 1 2
7 6 9 -> 6 7 9 -> 0 1 3
4 9 12 -> 4 9 12 -> 0 5 8
Now finding all sums ≤ K can be done in m steps if there are m sums: In the first row, pick all values in turn as long as they are ≤ K. In the second row, pick all values in turn as long as the sum from two rows is ≤ K and so on. Since each row starts with 0, no time is wasted.
For example, sums ≤ 5 are: 0+0+0, 0+0+5, 0+1+0, 0+3+0, 1+0+0, 1+1+0, 1+3+0, 2+0+0, 2+1+0, 2+3+0. Many more than the three that we needed. If we stop after finding 3 sums ≤ 5, we know very quickly "there are at least 3 sums ≤ 5". We need to have an early stop, because in the general case there could be n^n possible sums.
If you pick K = "largest element in the second column", then you know there are at least n+1 sums with a value ≤ K, because you can pick all 0's, or all 0's except one value from the second column. In your example, K = 5 (we know that worked). Let X be the value such that there are at least n sums ≤ X but fewer than n sums ≤ X - 1. We find X with binary search between 0 and K, and then we find the sums. Example:
K = 5 is known to be big enough. We try K = 2, and find 5 sums (actually we stop after finding 3 sums). Too many. We try K = 1, and there are three solutions: 0+0+0, 0+1+0 and 1+0+0. We try K = 0, and there is only one solution.
This part goes very quickly, so we'd try to reduce the time used for sorting. We notice that in this case looking at the first two columns was enough. We can find the two smallest items in each row, and in this case that would be enough. If the two smallest items are not enough to determine the n smallest sums, find the third-smallest item etc. where needed. For example, since the 2nd-smallest item of the last row is 5, we wouldn't need the third item of that row, because even the 5 is not part of any sum if K ≤ 4.
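A rough Python sketch of the whole idea (it sorts every row fully for simplicity and assumes integer inputs; the helper names are mine):

    def sums_up_to(rows, bound, limit):
        """Collect up to `limit` shifted sums that are <= bound, row by row."""
        found = []
        def dfs(row, partial):
            if row == len(rows):
                found.append(partial)
                return
            for v in rows[row]:                       # each row is sorted and starts with 0
                if len(found) >= limit or partial + v > bound:
                    break
                dfs(row + 1, partial + v)
        dfs(0, 0)
        return found

    def n_smallest_sums(arrays):
        n = len(arrays)
        rows = [sorted(a) for a in arrays]
        base = sum(r[0] for r in rows)                # smallest possible n-sum
        rows = [[v - r[0] for v in r] for r in rows]  # now every row starts with 0
        lo, hi = 0, max(r[1] for r in rows)           # hi admits at least n+1 sums (assumes n >= 2)
        while lo < hi:                                # binary search for X as described above
            mid = (lo + hi) // 2
            if len(sums_up_to(rows, mid, n)) >= n:
                hi = mid
            else:
                lo = mid + 1
        smaller = sums_up_to(rows, lo - 1, n)         # fewer than n sums are <= X-1
        return [base + s for s in sorted(smaller) + [lo] * (n - len(smaller))]

    print(n_smallest_sums([[3, 1, 2], [7, 6, 9], [4, 9, 12]]))   # -> [11, 12, 12]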

2sum with duplicate values

The classic 2sum question is simple and well-known:
You have an unsorted array, and you are given a value S. Find all pairs of elements in the array that add up to value S.
And it's always been said that this can be solved with the use of a hash table in O(N) time & space complexity, or in O(N log N) time and O(1) space complexity by first sorting and then moving in from the left and right.
Well, these two solutions are obviously correct, BUT I guess not for the following array:
{1,1,1,1,1,1,1,1}
Is it possible to print ALL pairs which add up to 2 in this array in O(N) or O(N log N) time complexity?
No, printing out all pairs (including duplicates) takes O(N^2). The reason is that the output size is O(N^2), so the running time cannot be less than that (it takes some constant amount of time to print each element in the output, thus simply printing the output would take cN^2 = O(N^2) time).
If all the elements are the same, e.g. {1,1,1,1,1}, every possible pair would be in the output:
1. 1 1
2. 1 1
3. 1 1
4. 1 1
5. 1 1
6. 1 1
7. 1 1
8. 1 1
9. 1 1
10. 1 1
This is N-1 + N-2 + ... + 2 + 1 (by taking each element with all elements to the right), which is
N(N-1)/2 = O(N^2), which is more than O(N) or O(N log N).
However, you should be able to simply count the pairs in expected O(N) by:
Creating a hash-map map mapping each element to the count of how often it appears.
Looping through the hash-map and summing, for each element x up to S/2 (if we go up to S we'll include the pair x and S-x twice, let map[x] == 0 if x doesn't exist in the map):
map[x]*map[S-x] if x != S-x (which is the number of ways to pick x and S-x)
map[x]*(map[x]-1)/2 if x == S-x (from N(N-1)/2 above).
Of course you can also print the distinct pairs in O(N) by creating a hash-map similar to the above, looping through it, and only outputting the pair x and S-x if map[S-x] exists.
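A short Python sketch of both the counting and the distinct-pair printing (Counter plays the role of the hash-map; the function names are mine):

    from collections import Counter

    def pair_count(arr, s):
        """Count unordered index pairs (i < j) with arr[i] + arr[j] == s, expected O(N)."""
        counts = Counter(arr)
        total = 0
        for x in counts:
            y = s - x
            if x < y:                              # each value pair counted once
                total += counts[x] * counts.get(y, 0)
            elif x == y:                           # same value: N(N-1)/2 pairs within it
                total += counts[x] * (counts[x] - 1) // 2
        return total

    def print_distinct_pairs(arr, s):
        """Print each distinct value pair summing to s exactly once."""
        counts = Counter(arr)
        for x in sorted(counts):
            y = s - x
            if (x < y and y in counts) or (x == y and counts[x] > 1):
                print(x, y)

    print(pair_count([1, 1, 1, 1, 1], 2))              # -> 10, i.e. N(N-1)/2 for N = 5
    print(pair_count([1, 1, 1, 1, 1, 1, 1, 1], 2))     # -> 28 for the array in the question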
Displaying or storing the results is O(N^2) only. The worst case, as highlighted by you, clearly has N^2 pairs, and writing them onto the screen or storing them into a result array would clearly require at least that much time. In short, you are right!
No
You can pre-compute them in O(n log n) using sorting, but to print them you may need more than O(n log n). In the worst case it can be O(N^2).
Let's modify the algorithm to find all duplicate pairs.
As an example:
a[] = {2, 4, 3, 2, 9, 3, 3} and sum = 6
After sorting:
a[] = {2, 2, 3, 3, 3, 4, 9}
Suppose you found the pair {2,4}; now you have to find the counts of 2 and 4 and multiply them to get the number of duplicate pairs. Here 2 occurs 2 times and 4 occurs 1 time, hence {2,4} will appear 2*1 = 2 times in the output. Now consider the special case when both numbers are the same: count the number of occurrences c and take c*(c-1)/2. Here {3,3} sums to 6, and 3 occurs 3 times in the array, hence {3,3} will appear 3*2/2 = 3 times in the output.
In an array like {1,1,1,1,1}, only the pair {1,1} sums to 2 and the count of 1 is 5, hence there are going to be 5*4/2 = 10 pairs of {1,1} in the output.
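A rough Python sketch of this sorted two-pointer variant with the occurrence counting described above (using c*(c-1)/2 when the two numbers are equal; the function name is mine):

    def count_pairs_sorted(arr, s):
        """Count unordered index pairs summing to s, by sorting and counting runs."""
        a = sorted(arr)
        i, j = 0, len(a) - 1
        total = 0
        while i < j:
            cur = a[i] + a[j]
            if cur < s:
                i += 1
            elif cur > s:
                j -= 1
            elif a[i] == a[j]:                 # everything between i and j is equal
                c = j - i + 1
                total += c * (c - 1) // 2      # choose any 2 of the c copies
                break
            else:
                ci = cj = 1                    # run lengths of a[i] and a[j]
                while a[i + ci] == a[i]:
                    ci += 1
                while a[j - cj] == a[j]:
                    cj += 1
                total += ci * cj
                i += ci
                j -= cj
        return total

    print(count_pairs_sorted([2, 4, 3, 2, 9, 3, 3], 6))   # -> 2 + 3 = 5
    print(count_pairs_sorted([1, 1, 1, 1, 1], 2))         # -> 10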
