Permute array to make it alternate between increasing and decreasing

An array X[1..n] of distinct integers is wobbly if it alternates between increasing and decreasing: X[i] < X[i+1] for every odd index i, and X[i] > X[i+1] for every even index i. For example, the following 16-element array is wobbly:
12, 13, 0, 16, 13, 31, 5, 7, -1, 23, 8, 10, -4, 37, 17, 42
Describe and analyze an algorithm that permutes the elements of a given array to make the array wobbly.
My attempt:
The obvious solution that comes to mind would be to sort the original array, split it in half, and then alternate between the two halves, grabbing the first remaining element of each to build the wobbly array. This would take O(n log n). (Edit: just realized this only works if all of the integers are distinct.) I can't help but think there is a more efficient way to achieve this.
How could this be done?
(This is not a homework problem)

[After 3 years... :-)]
Your problem definition states that all the array elements are distinct. So, you can do better than sorting -- sorting does too much.
Consider that you have a wobbly sequence constructed out of the first k elements. There can be two cases for the last two elements in the sequence:
A[k-1] < A[k]
A[k-1] > A[k]
Case 1: if A[k+1] < A[k], you don't have to do anything because wobbliness is already maintained. However, if A[k+1] > A[k], swapping them will ensure wobbliness is restored.
Case 2: if A[k+1] > A[k], you don't have to do anything because wobbliness is already maintained. However, if A[k+1] < A[k], swapping them will ensure wobbliness is restored.
This gives you an O(n)-time, O(1)-space algorithm (because you are swapping in place). Your base case is k = 2: any two distinct elements are trivially wobbly.
Following is an implementation in Python 3:
def rearrange_wobbly(A):
    if len(A) < 3:
        return A
    for i in range(2, len(A)):
        if A[i - 2] < A[i - 1] < A[i] or A[i - 2] > A[i - 1] > A[i]:
            # Swap A[i] and A[i - 1] to break the monotone run
            A[i - 1], A[i] = A[i], A[i - 1]
>>> import random
>>> A = [x for x in range(10)]
>>> A
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> random.shuffle(A)
>>> A
[3, 2, 1, 0, 7, 6, 9, 8, 4, 5]
>>> rearrange_wobbly(A)
>>> A
[3, 1, 2, 0, 7, 6, 9, 4, 8, 5]
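To sanity-check the result, a small validator helps. This `is_wobbly` helper is my own addition; it accepts wobbliness in either orientation, since the in-place algorithm may produce a sequence that starts with a decrease, as in the output above:

```python
def is_wobbly(X):
    # With distinct elements, adjacent comparisons must strictly
    # alternate in direction (up/down or down/up).
    return all(
        (X[i] < X[i + 1]) != (X[i + 1] < X[i + 2])
        for i in range(len(X) - 2)
    )
```

For example, `is_wobbly([3, 1, 2, 0, 7, 6, 9, 4, 8, 5])` is `True`, while `is_wobbly([1, 2, 3])` is `False`.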

The most straightforward approach I can think of is to sort the array and then alternately take the lowest and the highest remaining element.
E.g. with your example list, sorted:
-4 -1 0 5 7 8 10 12 13 13 16 17 23 31 37 42
The result then becomes
-4 42 -1 37 0 31 5 23 7 17 8 16 10 13 12 13
However, I think this breaks down if there are identical elements toward the middle, so in that scenario you might have to do a bit of manual swapping toward the end of the sequence to restore the "wobbly" constraint.
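For distinct elements, the sort-and-interleave idea above can be sketched as follows (a minimal version of my own, not hardened against duplicates):

```python
def wobbly_by_sorting(X):
    # Alternately take the smallest and largest remaining element of
    # the sorted array: low, high, low, high, ...
    s = sorted(X)
    result = []
    lo, hi = 0, len(s) - 1
    while lo <= hi:
        result.append(s[lo])
        lo += 1
        if lo <= hi:
            result.append(s[hi])
            hi -= 1
    return result
```

For instance, `wobbly_by_sorting([3, 1, 4, 2])` gives `[1, 4, 2, 3]`, which satisfies X[i] < X[i+1] for odd i and X[i] > X[i+1] for even i.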

Rearrange an array A so that A wins maximum number of comparisons with array B when comparison is done one-on-one

Let's say I have an array A = [3, 6, 7, 5, 3, 5, 6, 2, 9, 1] and B = [2, 7, 0, 9, 3, 6, 0, 6, 2, 6]
Rearrange the elements of array A so that when we compare element-wise (3 with 2, 6 with 7, and so on), the number of wins is maximized (i.e., the number of indices i with A[i] > B[i], 0 <= i < len(A), is maximum).
I tried below approach:
def optimal_reorder(A, B, N):
    tagged_A = [('d', i) for i in A]
    tagged_B = [('a', i) for i in B]
    merged = tagged_A + tagged_B
    merged = sorted(merged, key=lambda x: x[1])
    max_wins = 0
    for i in range(len(merged) - 1):
        print(i)
        if set((merged[i][0], merged[i + 1][0])) == {'a', 'd'}:
            if (merged[i][0] == 'a') and (merged[i + 1][0] == 'd'):
                if merged[i][1] < merged[i + 1][1]:
                    print(merged[i][1], merged[i + 1][1])
                    max_wins += 1
    return max_wins
as referenced from
here
but this approach doesn't seem to give the correct answer for the given A and B: if A = [3, 6, 7, 5, 3, 5, 6, 2, 9, 1] and B = [2, 7, 0, 9, 3, 6, 0, 6, 2, 6], the maximum number of wins is 7, but my algorithm gives 5.
Is there something I am missing here?
revised solution as suggested by @chqrlie:
def optimal_reorder2(A, B):
    arrA = A.copy()
    C = [None] * len(B)
    for i in range(len(B)):
        # collect the remaining elements of A that beat B[i]
        # (the scan must cover all of arrA, not start at i + 1)
        all_ele = [a for a in arrA if a > B[i]]
        if all_ele:
            e = min(all_ele)
        else:
            e = min(arrA)
        C[i] = e
        arrA.remove(e)
    return C
How about this algorithm:
start with an empty array C.
for each index i in range(len(B)).
if at least one of the remaining elements of A is larger than B[i], choose e as the smallest of these elements, otherwise choose e as the smallest element of A.
set C[i] = e and remove e from A.
C should be a reordering of A that maximises the number of true comparisons C[i] > B[i].
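The greedy choice above can also be implemented with a sorted pool and `bisect` (a sketch of my own; `list.pop` from the middle keeps it O(n²) in the worst case, but a balanced structure such as `sortedcontainers.SortedList` would remove that bottleneck):

```python
from bisect import bisect_right

def optimal_reorder3(A, B):
    # For each B[i], spend the smallest remaining element of A that
    # beats it; if none exists, sacrifice the smallest element of A.
    pool = sorted(A)
    C = []
    for b in B:
        idx = bisect_right(pool, b)
        if idx == len(pool):
            idx = 0  # nothing beats b: give up the cheapest element
        C.append(pool.pop(idx))
    return C
```

For the arrays in the question, this yields the expected 7 wins.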
There’s probably a much better algorithm than this, but you can think of this as a maximum bipartite matching problem. Think of the arrays as the two groups of nodes in the bipartite graph, then add an edge from A[i] to B[j] if A[i] > B[j]. Then any matching tells you how to pair elements of A with elements of B such that the A element “wins” against the B element, and a maximum matching tells you how to do this to maximize the number of wins.
I’m sure there’s a better way to do this, and I’m excited to see what other folks come up with. But this at least shows you can solve this in polynomial time.

Algorithm for array permutation

We have an integer array A[] of size N (1 ≤ N ≤ 10^4), which originally is a sorted array with entries 1...N. For any permutation P of size N, the array is shuffled so that the i-th entry from the left before the shuffle moves to the P_i-th position after the shuffle. You keep repeating this shuffle until the array is sorted again.
For example, for A[] = {1, 2, 3, 4}, if P = {1, 2, 3, 4}, it would only take one move for the array to be sorted (the entries would move to their original positions). If P = {4, 3, 1, 2}, then it would take 4 moves for the array to be sorted again:
Move 0 | [1, 2, 3, 4]
Move 1 | [3, 4, 2, 1]
Move 2 | [2, 1, 4, 3]
Move 3 | [4, 3, 1, 2]
Move 4 | [1, 2, 3, 4]
The problem is to find the sum of all positive integers J for which you can generate a permutation that requires J moves to get the array sorted again.
Example:
For A[] = {1, 2, 3, 4}, you can generate permutations that require 1, 2, 3, and 4 steps:
Requires 1 move: P = {1, 2, 3, 4}
Requires 2 moves: P = {1, 3, 2, 4}
Requires 3 moves: P = {1, 4, 2, 3}
Requires 4 moves: P = {4, 3, 1, 2}
So you would output 1 + 2 + 3 + 4 = 10.
One observation I have made is that you can always generate a permutation that requires J moves for 1 ≤ J < N: in the permutation, you simply shift by 1 all the entries in a range of size J. However, for permutations that require J moves where J ≥ N, you would need another approach.
The brute-force solution would be checking every permutation, or N! permutations which definitely wouldn't fit in run time. I'm looking for an algorithm with run time at most O(N^2).
EDIT 1: A permutation that requires exactly N moves is always guaranteed as well, since you can create a permutation where every entry is misplaced, not just swapped with another entry. The question becomes how to find permutations where J > N.
EDIT 2: @ljeabmreosn made the observation that there exists a permutation that takes J steps if and only if there are natural numbers a_1 + ... + a_k = N with LCM(a_1, ..., a_k) = J. Using that observation, the problem comes down to finding all partitions of the integer N. However, this won't be a quadratic algorithm - how can I find them efficiently?
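The partition/LCM observation can be checked for small N by enumerating integer partitions directly (a brute-force validator of my own, exponential in N, so only useful for small cases):

```python
from math import lcm

def partitions(n, max_part=None):
    # Generate all partitions of n into parts of size <= max_part.
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def achievable_moves(n):
    # J is achievable iff J is the LCM of some partition of n.
    return {lcm(*p) for p in partitions(n)}

print(sorted(achievable_moves(4)), sum(achievable_moves(4)))  # [1, 2, 3, 4] 10
```

For N = 4 this reproduces the example: achievable move counts {1, 2, 3, 4}, summing to 10.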
Sum of distinct orders of degree-n permutations:
https://oeis.org/A060179
This is the number you are looking for, with a formula and some Maple code.
As is often the case when trying to compute an integer sequence, compute the first few values (here 1, 1, 3, 6, 10, 21) and look for them in the great "On-Line Encyclopedia of Integer Sequences".
Here is some Python code inspired by it; I think it fits your complexity goals.
def primes_upto(limit):
    is_prime = [False] * 2 + [True] * (limit - 1)
    for n in range(int(limit**0.5 + 1.5)):
        if is_prime[n]:
            for i in range(n * n, limit + 1, n):
                is_prime[i] = False
    return [i for i, prime in enumerate(is_prime) if prime]
def sum_of_distinct_order_of_Sn(N):
    primes = primes_upto(N)
    res = [1] * (N + 1)
    for p in primes:
        for n in range(N, p - 1, -1):
            pj = p
            while pj <= n:
                res[n] += res[n - pj] * pj
                pj *= p
    return res[N]
on my machine:
%time sum_of_distinct_order_of_Sn(10000)
CPU times: user 2.2 s, sys: 7.54 ms, total: 2.21 s
Wall time: 2.21 s
51341741532026057701809813988399192987996798390239678614311608467285998981748581403905219380703280665170264840434783302693471342230109536512960230

Finding a median of 2 arrays of the same size - the O(log n) algorithm doesn't yield a correct result

I'm trying to solve the problem of calculating the median of two merged sorted arrays of the same size, with distinct elements.
Source of algorithm: https://www.geeksforgeeks.org/median-of-two-sorted-arrays/
This algorithm appears in several sources on the internet as the O(log n) solution. However, I don't think it works for the example I made up.
My counter-example:
We have 2 sorted arrays with no duplicates:
[2,3,12,14] & [1,5,8,9]
Merged sorted array is: a = [1, 2, 3, 5, 8, 9, 12, 14]. Median: (5 + 8)/2 = 6.5
Following the algorithm:
Median of [2,3,12,14] is (3+12)/2= 7.5 = m1
Median of [1,5,8,9] is (5+8)/2 = 6.5 = m2
We see m1>m2. So following the algorithm, we consider the first half of first array, and second half of second array. We have a1 = [2,3] and a2 = [8,9].
Now we have reached a base case, and the result is (max(a1[0], a2[0]) + min(a1[1], a2[1]))/2 = (max(2, 8) + min(3, 9))/2 = (8 + 3)/2 = 5.5, which is clearly not 6.5.
This is the only algorithm I see that has O(log n) solution but it seems flawed. Is there something I'm missing here?
To always give the same result as the first method, the second one must end up with the same numbers in the final iteration.
For instance, the example provided should lead to 6.5:
[2, 3, 12, 14], [1, 5, 8, 9] → [1, 2, 3, 5, 8, 9, 12, 14] → (5 + 8)/2 → 6.5
To ensure that, when dividing ranges with an even number of elements, you must keep the elements just below and beyond the middle:
[2, 3, 12, 14], [1, 5, 8, 9] → [2, 3, 12], [5, 8, 9] → [3, 12], [5, 8] → 6.5
As a matter of fact, the relevant part of the code in the page you linked is this
int getMedian(int ar1[], int ar2[], int n)
{
    // ...
    if (m1 < m2)
    {
        if (n % 2 == 0)
            return getMedian(ar1 + n / 2 - 1, // <- Note the difference
                             ar2, n - n / 2 + 1);
        return getMedian(ar1 + n / 2,         // <-
                         ar2, n - n / 2);
    }
    if (n % 2 == 0)
        return getMedian(ar2 + n / 2 - 1,     // The same here
                         ar1, n - n / 2 + 1);
    return getMedian(ar2 + n / 2,
                     ar1, n - n / 2);
}
Don't trace by hand, run the code.
The Python versions of both algorithms produce the correct answer for your attempted counter-example.
I cannot promise that all implementations work correctly. But remember that it is always much more likely that you made a mistake than that something reviewed by many people is wrong. (Not always, which is why I ran actual code on your example.) And the odds of error go up by a lot when you try to trace code by hand.
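For reference, here is my own Python sketch of the corrected recursion, mirroring the C code above with slices standing in for the pointer arithmetic; it assumes two sorted arrays of equal length with distinct elements:

```python
def median_of(x):
    # Median of a single sorted array.
    half = len(x) // 2
    return (x[half - 1] + x[half]) / 2 if len(x) % 2 == 0 else x[half]

def get_median(a, b):
    # a and b are sorted and of equal length n >= 1.
    n = len(a)
    if n == 1:
        return (a[0] + b[0]) / 2
    if n == 2:
        return (max(a[0], b[0]) + min(a[1], b[1])) / 2
    m1, m2 = median_of(a), median_of(b)
    if m1 == m2:
        return m1
    if m1 > m2:
        a, b = b, a  # mirror case: keep a's upper part, b's lower part
    if n % 2 == 0:
        size = n - n // 2 + 1  # keep one extra element around the middle
        return get_median(a[n // 2 - 1:n // 2 - 1 + size], b[:size])
    size = n - n // 2
    return get_median(a[n // 2:n // 2 + size], b[:size])

print(get_median([2, 3, 12, 14], [1, 5, 8, 9]))  # 6.5
```

On the attempted counter-example this returns 6.5, the correct median.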

Find All Numbers in Array which Sum upto Zero

Given an array, output the consecutive elements (a contiguous subarray) whose total sum is 0.
Eg:
For input [2, 3, -3, 4, -4, 5, 6, -6, -5, 10],
Output is [3, -3, 4, -4, 5, 6, -6, -5]
I just can't find an optimal solution.
Clarification 1: For any element in the output subarray, there should be a subset of the subarray that sums with that element to zero.
Eg: For 5, at least one of the subsets {[-2, -3], [-1, -4], [-5], ....} (each summing to -5) should be present in the output subarray.
Clarification 2: Output subarray should be all consecutive elements.
Here is a Python solution that runs in O(n³):
def conSumZero(input):
    take = [False] * len(input)
    for i in range(len(input)):
        for j in range(i + 1, len(input) + 1):
            if sum(input[i:j]) == 0:
                for k in range(i, j):
                    take[k] = True
    return [x for x, t in zip(input, take) if t]
EDIT: Now more efficient! (Not sure if it's quite O(n²); will update once I finish calculating the complexity.)
def conSumZero(input):
    take = [False] * len(input)
    cs = [0]                      # prefix sums: cs[j] = sum(input[:j])
    for x in input:
        cs.append(cs[-1] + x)
    for i in range(len(input)):
        for j in range(i + 1, len(input) + 1):
            if cs[j] - cs[i] == 0:
                for k in range(i, j):
                    take[k] = True
    return [x for x, t in zip(input, take) if t]
The difference here is that I precompute the partial sums of the sequence, and use them to calculate subsequence sums - since sum(a[i:j]) = sum(a[0:j]) - sum(a[0:i]) - rather than iterating each time.
Why not just hash the incremental sum totals and update their indexes as you traverse the array, the winner being the sum value with the largest index range? O(n) time complexity (assuming average hash-table complexity).
[2, 3, -3, 4, -4, 5, 6, -6, -5, 10]
sum 0 2 5 2 6 2 7 13 7 2 12
The winner is 2, indexed 1 to 8!
To also guarantee an exact counterpart contiguous-subarray for each number in the output array, I don't yet see a way around checking/hashing all the sum subsequences in the candidate subarrays, which would raise the time complexity to O(n^2).
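A sketch of the hashing idea (my own code): store the first index at which each prefix sum occurs; a repeated sum at indices i and j means input[i:j] sums to zero, and the widest such pair wins.

```python
def longest_zero_sum_subarray(arr):
    first_seen = {0: 0}   # prefix sum -> first index where it occurs
    best = (0, 0)         # (start, end) of the best window so far
    total = 0
    for j, x in enumerate(arr, start=1):
        total += x
        if total in first_seen:
            i = first_seen[total]
            if j - i > best[1] - best[0]:
                best = (i, j)
        else:
            first_seen[total] = j
    return arr[best[0]:best[1]]
```

On the example input this returns [3, -3, 4, -4, 5, 6, -6, -5], the "winner" window indexed 1 to 8.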
Based on the example, I assumed that you only want to find pairs of values that add up to 0. If you also want to include combinations of more than two values that sum to 0 (like 5 + (-2) + (-3)), you would need to clarify your requirements a bit more.
The implementation is different based on language, but here is a javascript example that shows the algorithm, which you can implement in any language:
var inputArray = [2, 3, -3, 4, -4, 5, 6, -6, -5, 10];
var outputArray = [];
for (var i = 0; i < inputArray.length; i++) {
    var num1 = inputArray[i];
    for (var x = 0; x < inputArray.length; x++) {
        var num2 = inputArray[x];
        var sumVal = num1 + num2;
        if (sumVal == 0) {
            outputArray.push(num1);
            outputArray.push(num2);
        }
    }
}
Is this the problem you are trying to solve?
Given a sequence $a_1, \dots, a_n$, find the longest contiguous range $S = [i, j)$ such that some subset of $\{a_i, \dots, a_{j-1}\}$ sums to $0$?
If so, here is a brute-force algorithm for solving it:
let $U$ be an empty contiguous range
for each contiguous range $S = [i, j) \subseteq [1, n]$
    for each $T \in \wp([i, j))$
        if $\sum_{t \in T} a_t = 0$ and $|U| < |S|$
            $U \gets S$
return $U$
(Will update with full LaTeX once I get the chance.)

Find the length of the longest contiguous sub-array in a sorted array in which the difference between the end and start values is at most k

I have a sorted array, for example
[0, 0, 3, 6, 7, 8, 8, 8, 10, 11, 13]
Here, let's say k = 1 so the longest sub-array is [7, 8, 8, 8] with length = 4.
As another example, consider [0, 0, 0, 3, 6, 9, 12, 12, 12, 12] with k = 3. Here the longest sub-array is [9, 12, 12, 12, 12] with length = 5.
So far, I have used a binary-search-based O(n log n) algorithm, which iterates over each index 0 .. n - 1 and finds the rightmost index that satisfies our condition.
Is there a linear time algorithm to do this?
Yes, there is a linear time algorithm. You can use the two-pointer technique. Here is pseudocode:
R = 0
res = 0
for L = 0 .. N - 1:
    while R < N and a[R] - a[L] <= k:
        R += 1
    res = max(res, R - L)
It has O(n) time complexity because L and R only move forward, and each of them can be incremented at most n times.
Why is this algorithm correct? For a fixed L, the inner loop leaves R at the index of the first element such that a[R] - a[L] > k, so R - 1 is the index of the last element that fits, and the [L, R - 1] subarray has length exactly R - L. The result is obtained by iterating over all possible values of L, so every candidate window is checked; that is why it always finds the correct answer.
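The pseudocode translates almost line for line into Python (my transcription):

```python
def longest_within_k(a, k):
    # a is sorted; R never moves backwards, so the sweep is O(n).
    res, R = 0, 0
    for L in range(len(a)):
        while R < len(a) and a[R] - a[L] <= k:
            R += 1
        res = max(res, R - L)
    return res
```

On the two examples from the question this returns 4 and 5 respectively.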
