Array Partition I (How to prove this in math) - arrays

This is a question from leetcode.
Given an integer array nums of 2n integers, group these integers into n pairs (a1, b1), (a2, b2), ..., (an, bn) such that the sum of min(ai, bi) for all i is maximized. Return the maximized sum.
Input: nums = [1, 4, 3, 2]
Output: 4
Explanation: All possible pairings (ignoring the ordering of elements) are:
1. (1, 4), (2, 3) -> min(1, 4) + min(2, 3) = 1 + 2 = 3
2. (1, 3), (2, 4) -> min(1, 3) + min(2, 4) = 1 + 2 = 3
3. (1, 2), (3, 4) -> min(1, 2) + min(3, 4) = 1 + 3 = 4
So the maximum possible sum is 4.
I solved this by trying multiple examples and found that if I sort the array and form the pairs like 1 2 | 3 4, the min value of each pair is what I want. Since the array is sorted, the positions of the min values are fixed; hence I can collect them by stepping the index by 2. Although it works, it's more like a guess. Does anyone know how to prove this mathematically, to make the logic more rigorous?
from typing import List

def arrayPairSum(nums: List[int]) -> int:
    nums_sort = sorted(nums)
    res = 0
    i = 0
    while i < len(nums):
        res += nums_sort[i]
        i += 2
    return res

You can use induction. Let's say you have a sorted array a. The smallest number, a[0], will always be the smallest of whatever pair it occurs in. The maximum sum will occur if you select its partner to be a[1], the next smallest number. You can show this by supposing some other number a[m] (m > 1) is its partner instead. In that case a[1] becomes the minimum of some other pair, so that pair contributes only a[1] to the sum, while a[0]'s pair still contributes a[0]; swapping partners so that a[0] takes a[1] frees a[m] and a[1]'s old partner to form a pair whose minimum is at least a[1], so the total sum can only grow or stay the same. You can proceed to apply the same argument to the remaining elements a[2:].
Alternatively, you can start from the other end. a[-1] is guaranteed to never figure in the sum because it is the maximum of whatever pair it will occur in. If you pair it with anything other than a[-2], the total sum will not be maximized: some smaller a[m] will represent the pair containing a[-1] in the sum, while a[-2] will be larger than any a[n] it is paired with, and therefore will not appear in the sum.
Both arguments yield the same result: the maximum sum is the sum of the even-indexed elements of the sorted array.
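To spell the exchange argument out in symbols (my own formalization of the argument above): suppose the sorted array satisfies a[0] <= a[1] <= ... <= a[2n-1], that a[0] is paired with some a[m] with m > 1, and that a[1] is paired with some a[k]. Swapping partners cannot decrease the sum:

min(a[0], a[m]) + min(a[1], a[k]) = a[0] + a[1]
                                 <= a[0] + min(a[m], a[k])
                                  = min(a[0], a[1]) + min(a[m], a[k]),

since a[m] >= a[1] and a[k] >= a[1]. Repeating the argument on a[2:] shows the pairing (a[0], a[1]), (a[2], a[3]), ... is optimal, and its value is exactly the sum over even indices.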
As mentioned in the comments, the following two implementations will be more efficient than a raw for loop:
def arrayPairSum(nums: List[int]) -> int:
    return sum(sorted(nums)[::2])
OR
def arrayPairSum(nums: List[int]) -> int:
    nums.sort()
    return sum(nums[::2])
The latter does the sorting in-place, and is probably faster if allowed.

Related

Special Pairs in N natural number sequence

You are given a natural number N which represents the sequence [1,2,...,N]. We have to determine the number of pairs (x,y) from this sequence that satisfy the given conditions.
1 <= x <= y <= N
sum of first x-1 numbers (i.e sum of [1,2,3..x-1]) = sum of numbers from x+1 to y (i.e sum of [x+1...y])
Example:
If N = 3 there is only 1 pair, (x=1, y=1), for which (sum of the first x-1 numbers) = 0 = (sum from x+1 to y).
Any other pair like (1,2), (1,3) or (2,3) does not satisfy the property, so the answer is 1 as there is only one pair.
Another example:
If N = 10, for the pair (6,8) the sum of the first x-1 numbers, i.e. [1,2,3,4,5], is 15, which equals the sum of the numbers from x+1 to y, i.e. [7,8]. Another such pair is (1,1). No other such pair exists, so the answer in this case is 2.
How can we approach and solve such problems to find the number of pairs in such a sequence?
Other things I have been able to deduce so far:-
Condition              Answer   Pairs
If 1 <= N <= 7           1      {(1,1)}
If 8 <= N <= 48          2      {(1,1),(6,8)}
If 49 <= N <= 287        3      {(1,1),(6,8),(35,49)}
If 288 <= N <= 1680      4      -
I tried but am unable to find any pattern or any such thing in these numbers.
Also, 1<=N<=10^16
--edit--
Courtesy of OEIS (link in comments): you can find the k'th value of y using this formula: ( (0.25) * (3.0+2.0*(2**0.5))**k ).floor
This gives us the k'th value in O(log k). First few results:
1
8
49
288
1681
9800
57121
332928
1940449
11309768
65918161
384199200
2239277041
13051463048
76069501249
443365544448
2584123765441
15061377048200
87784138523761
511643454094368
2982076586042447
17380816062160312
101302819786919424
590436102659356160
3441313796169217536
20057446674355949568
116903366249966469120
681362750825442836480
3971273138702690287616
23146276081390697054208
134906383349641499377664
786292024016458181771264
4582845760749107960348672
26710782540478185822224384
155681849482119992477483008
907380314352241764747706368
Notice that the ratio of successive numbers quickly approaches 5.828427124746. Given a value of n, take the log of n base 5.828427124746. The answer will be an integer close to this log.
E.g., say n = 1,000,000,000. Then log(n, 5.8284271247461) = 11.8. The answer is probably 12, but we can check the neighbors to be sure.
11: 65,918,161
12: 384,199,200
13: 2,239,277,041
Confirmed.
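If you prefer exact integer arithmetic (floating point loses precision well below N = 10^16), the values above appear to satisfy the recurrence y(k+1) = 6*y(k) - y(k-1) + 2, so a short loop also answers the question directly. A Python sketch based on that assumption:

def count_pairs(n):
    # y-values of the first two pairs, (1,1) and (6,8); the recurrence used
    # below is read off the values listed above, so treat it as an assumption.
    a, b = 1, 8
    count = 0
    while a <= n:
        count += 1
        a, b = b, 6 * b - a + 2
    return count

print(count_pairs(3))    # 1, matching the first example
print(count_pairs(10))   # 2, matching the second example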
-- end edit --
Here's some Ruby code to do this. The idea is to have two pointers and increment the pointer for x or y as appropriate. I'm using s(n) to calculate the sums, though this could be done without multiplication by just keeping running totals.
def s(n)
  return n * (n + 1) / 2
end

def f(n)
  count = 0
  x = 1
  y = 1
  while y <= n do
    if s(x - 1) == s(y) - s(x)
      count += 1
      puts "(#{x}, #{y})"
    end
    if s(x - 1) <= s(y) - s(x)
      x += 1
    else
      y += 1
    end
  end
  count
end
Here are the first few pairs:
(1, 1)
(6, 8)
(35, 49)
(204, 288)
(1189, 1681)
(6930, 9800)
(40391, 57121)
(235416, 332928)
(1372105, 1940449)
(7997214, 11309768)
(46611179, 65918161)
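For the record, here is the algebra behind the pattern and the floor formula in the edit above. With s(n) = n*(n+1)/2, the defining condition rearranges into a Pell equation:

s(x-1) = s(y) - s(x)
<=> (x-1)*x/2 = y*(y+1)/2 - x*(x+1)/2
<=> 2*x^2 = y*(y+1)
<=> (2y+1)^2 - 2*(2x)^2 = 1

So every valid pair corresponds to a solution (u, v) = (2y+1, 2x) of the Pell equation u^2 - 2*v^2 = 1, whose solutions are generated by powers of 3 + 2*sqrt(2). That is where the ratio 5.828427... = (3 + 2*sqrt(2)) and the closed-form floor formula come from; for example (u, v) = (17, 12) gives the pair (x, y) = (6, 8).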

Number of ways to fill an array of size n, such that the mex is greater than every element of the array?

We are given an empty array of size n, and we need to fill it with natural numbers (repeats are allowed).
The condition is that the mex of the array must be greater than all the elements we put in the array.
Can someone please help me count the number of ways to do so?
(Different arrangements of the same set of numbers are also considered distinct.)
PS: by the mex of a sequence I mean the smallest non-negative number that doesn't occur in the sequence.
The number of such arrays is equal to the number of ordered distributions of the N positions into a sequence of non-empty buckets (so [A],[B,C] and [B,C],[A] are distinct), where bucket k collects the positions that receive value k. The number of such distributions is given by the ordered Bell (Fubini) numbers 1, 3, 13, 75, ...
Example for N=3
1 1 1 //1 permutation
1 1 2 //3 permutations
1 2 2 //3 permutations
1 2 3 //6 permutations
//13 variants
Generation of the distributions themselves is left for reference. Note that for N values, every value falls into some part 1..K, where K is in the range 1..N, so the part numbers corresponding to all the values form a continuous sequence without holes (cf. your mex).
To calculate the number of such distributions, we can use the recurrence from the Wikipedia article, a(m) = sum over i = 1..m of C(m, i) * a(m - i), with a(0) = 1. Python code:
def cnk(n, k):
    k = min(k, n - k)
    if k <= 0:
        return 1 if k == 0 else 0
    res = 1
    for i in range(k):
        res = res * (n - i) // (i + 1)
    return res

def orderedbell(n):
    a = [0] * (n + 1)
    a[0] = 1
    for m in range(1, n + 1):
        for i in range(1, m + 1):
            a[m] += cnk(m, i) * a[m - i]
    return a[n]

for i in range(1, 10):
    print(orderedbell(i))
1
3
13
75
541
4683
47293
545835
7087261
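A quick brute-force check of this correspondence for small n (my addition), under the interpretation used above that the filled values must be exactly {1, ..., K} for some K with no gaps:

from itertools import product

def brute_force(n):
    # Count arrays of length n over the values 1..n whose set of values is
    # exactly {1, ..., max(arr)}, i.e. arrays whose mex exceeds every element.
    count = 0
    for arr in product(range(1, n + 1), repeat=n):
        if set(arr) == set(range(1, max(arr) + 1)):
            count += 1
    return count

print([brute_force(n) for n in range(1, 6)])  # [1, 3, 13, 75, 541]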

Number of ways of partitioning an array

Given an array of n elements, a k-partitioning of the array would be to split the array in k contiguous subarrays such that the maximums of the subarrays are non-increasing. Namely max(subarray1) >= max(subarray2) >= ... >= max(subarrayK).
In how many ways can an array be partitioned into valid partitions like the ones mentioned before?
Note: k isn't given as input or anything, I merely used it to illustrate the general case. A partition could have any number of parts from 1 to n; we just need to find all the valid ones.
Example, the array [3, 2, 1] can be partitioned in 4 ways, you can see them below:
The valid partitions :[3, 2, 1]; [3, [2, 1]]; [[3, 2], 1]; [[3], [2], [1]].
I've found a similar problem related to linear partitioning, but I couldn't find a way to adapt the thinking to this problem. I'm pretty sure this is dynamic programming, but I haven't been able to properly identify
how to model the problem using a recurrence relation.
How would you solve this?
Call an element of the input a tail-max if it is at least as great as all elements that follow. For example, in the following input:
5 9 3 3 1 2
the following elements are tail-maxes:
5 9 3 3 1 2
  ^ ^ ^   ^
In a valid partition, every subarray must contain the next tail-max at or after the subarray's starting position; otherwise, the next tail-max will be the max of some later subarray, and the condition of non-increasing subarray maximums will be violated.
On the other hand, if every subarray contains the next tail-max at or after the subarray's starting position, then the partition must be valid, as the definition of a tail-max ensures that the maximum of a later subarray cannot be greater.
If we identify the tail-maxes of an array, for example
1 1 9 2 1 6 5 1
. . X . . X X X
where X means tail-max and . means not, then we can't place any subarray boundaries before the first tail-max, because if we do, the first subarray won't contain a tail-max. We can place at most one subarray boundary between a tail-max and the next; if we place more, we get a subarray that doesn't contain a tail-max. The last tail-max must be the last element of the input, so we can't place a subarray boundary after the last tail-max.
If there are m non-tail-max elements between a tail-max and the next, that gives us m+2 options: m+1 places to put an array boundary, or we can choose not to place a boundary between these elements. These factors are multiplicative.
We can make one pass from the end of the input to the start, identifying the lengths of the gaps between tail-maxes and multiplying together the appropriate factors to solve the problem in O(n) time:
def partitions(array):
    tailmax = None
    factor = 1
    result = 1
    for i in reversed(array):
        if tailmax is None:
            tailmax = i
            continue
        factor += 1
        if i >= tailmax:
            # i is a new tail-max.
            # Multiply the result by a factor indicating how many options we
            # have for placing a boundary between i and the old tail-max.
            tailmax = i
            result *= factor
            factor = 1
    return result
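For instance (my addition), the two examples discussed above:

print(partitions([3, 2, 1]))                 # 4, matching the question
print(partitions([1, 1, 9, 2, 1, 6, 5, 1]))  # 4 * 2 * 2 = 16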
Update: Sorry, I misunderstood the problem. In this case, the first sub-array must extend at least to the position of the array's maximum element; after that we are free to choose where (if anywhere) to cut. E.g. in [2 4 5 9 6 8 3 1] the first sub-array must contain the prefix [2 4 5 9], and every later position is a possible cut point. You can use an array res to record the DP results, where res[i] is the number of valid partitions of the suffix starting at index i; our goal is res[0]. In the example above, res[0] = 1 + res[4] + res[5] + res[6] + res[7], where the leading 1 counts taking the whole array as a single sub-array.
def getnum(array):
    res = [-1 for x in range(len(array))]
    res[0] = valueAt(array, res, 0)
    return res[0]

def valueAt(array, res, i):
    m = array[i]
    idx = i
    for index in range(i, len(array)):
        if array[index] > m:
            idx = index
            m = array[index]
    value = 1
    for index in range(idx + 1, len(array)):
        if res[index] == -1:
            res[index] = valueAt(array, res, index)
        value = value + res[index]
    return value
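A quick check against the question's example (my addition):

print(getnum([3, 2, 1]))  # 4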
This is slower than the answer above in terms of time; the DP costs a lot more.
Old answer: If no duplicate elements are allowed in the array, the following way would work:
Notice that the number of partitions does not depend on the values of the elements if there are no duplicates. Call this number N(n) when the array has n elements.
The largest element must be in the first sub-array; every other element can be in or not in the first sub-array. Depending on which of them are in the first sub-array, the number of partitions of the remaining elements varies.
So,
N(n) = C(n-1, 1)N(n-1) + C(n-1, 2)N(n-2) + ... + C(n-1, n-1)N(0)
where C(n, k) is the binomial coefficient "n choose k".
Then it can be solved by DP.
Hope this helps

Counting appropriate number of subarrays in an array excluding some specific pairs?

Let's say, I have an array like this:
1 2 3 4 5
And the given pair is (2,3); then the possible subarrays that don't have (2,3) present in them will be:
1. 1
2. 2
3. 3
4. 4
5. 5
6. 1 2
7. 3 4
8. 4 5
9. 3 4 5
So, the answer will be 9.
Obviously, there can be more of such pairs.
Now, one method that I thought of is O(n^2), which involves enumerating all such subarrays (of length up to n). Can I do better? Thanks!
Let's see, this ad-hoc pseudocode should be O(n):
array = [1 2 3 4 5]
pair = [2 3]
length = array.length

n = 0
start = 0
while (start < length)
{
    # Find the next occurrence of the pair at or after start
    pair_pos = start
    while (pair_pos + 1 < length) and (array[pair_pos, pair_pos+1] != pair)   # (**1)
    {
        pair_pos++
    }
    if (pair_pos + 1 < length)
    {
        # Pair found: the pair-free segment runs from start up to and including
        # the pair's first element, so it has pair_pos - start + 1 elements
        n += calc_number_of_subarrays(pair_pos - start + 1)   # (**2)
        # Continue with the segment that begins at the pair's second element
        start = pair_pos + 1
    }
    else
    {
        # No further pair: the rest of the array forms the last pair-free segment
        n += calc_number_of_subarrays(length - start)
        start = length
    }
}
print n
Note **1: This seems to involve a loop inside the outer loop, but since every element of the array is visited only a constant number of times, both loops together are O(n). In fact, it is probably easy to refactor this to use only one while loop.
Note **2: Given a segment of length l, there are l+(l-1)+(l-2)+...+1 = l(l+1)/2 subarrays (including the segment itself), which is easy to calculate in O(1); there is no loop involved. Cf. Gauss. :)
You don't need to find which subarrays are in an array to know how many of them there are. Finding where the pair is in the array is at most 2(n-1) array operations. Then you only need to do a simple calculation with the two lengths you extract from that. The amount of subarrays in an array of length 3 is, for example, 3 + 2 + 1 = 6 = (n(n+1))/2.
The solution uses that in a given array [a, ..., p1, p2, ..., b], the amount of subarrays without the pair is the amount of subarrays for [a, ..., p1] + the amount of subarrays for [p2, ..., b]. If multiple of such pairs exist, we repeat the same trick on [p2, ..., b] as if it was the whole array.
function amount_of_subarrays ::
    index := 1
    amount := 0
    lastmatch := 0
    while length( array ) > index do
        if array[index] == pair[1] then
            if array[index+1] == pair[2] then
                length2 := index - lastmatch
                amount := amount + ((length2 * (length2 + 1)) / 2)
                lastmatch := index
            fi
        fi
        index := index + 1
    od
    //index is now equal to the length
    length2 := index - lastmatch
    amount := amount + ((length2 * (length2 + 1)) / 2)
    return amount
For an array [1, 2, 3, 4, 5] with pair [2, 3], index will be 2 when the two if-statements are true. amount will be updated to 3 and lastmatch will be updated to 2. No more matches will be found, so lastmatch is 2 and index is 5. amount will be 3 + 6 = 9.
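Here is a Python sketch of the same splitting idea, generalized to a set of forbidden pairs since the question mentions there can be more than one (this is my addition, not part of either answer above):

def count_subarrays(array, pairs):
    forbidden = set(pairs)
    total = 0
    seg_len = 0
    for i, x in enumerate(array):
        seg_len += 1
        # A forbidden pair starting at position i closes the current segment after x.
        if i + 1 < len(array) and (x, array[i + 1]) in forbidden:
            total += seg_len * (seg_len + 1) // 2
            seg_len = 0
    total += seg_len * (seg_len + 1) // 2
    return total

print(count_subarrays([1, 2, 3, 4, 5], [(2, 3)]))  # 9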

Counting according to query

Given an array A of N positive elements, let's suppose we list all N × (N+1) / 2 non-empty contiguous subarrays of A and then replace each subarray with the maximum element present in it. So now we have N × (N+1) / 2 elements, where each element is the maximum of its subarray.
Now we have Q queries, where each query is one of 3 types:
1 K : count the numbers strictly greater than K among those N × (N+1) / 2 elements.
2 K : count the numbers strictly less than K among those N × (N+1) / 2 elements.
3 K : count the numbers equal to K among those N × (N+1) / 2 elements.
The main problem I am facing is that N can be up to 10^6, so I can't generate all those N × (N+1) / 2 elements. Please help me solve this problem.
Example : Let N=3 and we have Q=2. Let array A be [1,2,3] then all sub arrays are :
[1] -> [1]
[2] -> [2]
[3] -> [3]
[1,2] -> [2]
[2,3] -> [3]
[1,2,3] -> [3]
So now we have [1,2,3,2,3,3]. As Q=2:
Query 1 : 3 3
This means we need to report the count of numbers equal to 3. The answer is 3, as there are 3 numbers equal to 3 in the generated array.
Query 2 : 1 4
This means we need to report the count of numbers greater than 4. The answer is 0, as nothing in the generated array is greater than 4.
Both N and Q can be up to 10^6. How can this problem be solved, and which data structure is suitable for it?
I believe I have a solution in O(N + Q*log N) (More about time complexity). The trick is to do a lot of preparation with your array before even the first query arrives.
For each number, figure out where the first number to its left / right that is strictly bigger is.
Example: for the array 1, 8, 2, 3, 3, 5, 1, both 3's left block would be the position of the 8, and their right block would be the position of the 5.
This can be determined in linear time. This is how: keep the previous maximums in a stack. When a new number appears, remove elements from the stack until you get to an element bigger than or equal to the current one. Illustration:
In this example the stack is [15, 13, 11, 10, 7, 3] (you would of course keep the indexes, not the values; I just use values for readability).
Now we read 8. 8 >= 3, so we remove 3 from the stack and repeat. 8 >= 7, remove 7. 8 < 10, so we stop removing. We set 10 as 8's left block, and add 8 to the maximums stack.
Also, whenever you remove something from the stack (3 and 7 in this example), set the right block of the removed number to the current number. One problem though: this sets the right block to the next number bigger or equal, not strictly bigger. You can fix this by simply checking and relinking the right blocks afterwards.
Next, compute for each number how many subarrays it is the maximum of.
Since for each number you now know where the next bigger number is on its left and right, the formula follows directly: if those positions are left and right, the number at position i is the maximum of (i - left) * (right - i) subarrays (with equal values, make one side non-strict so that each subarray is counted exactly once).
Then store the results in a hashmap: the key is the number's value and the value is how many subarrays that number is the maximum of. For example, the record [4 -> 12] would mean that the number 4 is the maximum in 12 subarrays.
Lastly, extract all key-value pairs from the hashmap into an array, and sort that array by the keys. Finally, create a prefix sum for the values of that sorted array.
Handle a request
For request "exactly k", just binary search in your array, for more/less thank``, binary search for key k and then use the prefix array.
This answer is an adaptation of another answer I wrote earlier. The first part is exactly the same, but the rest is specific to this question.
Here's an implementation of an O(n log n + q log n) version using a simplified segment tree.
Creating the segment tree: O(n)
In practice, what it does is to take an array, let's say:
A = [5,1,7,2,3,7,3,1]
And construct an array-backed tree in which each node holds a pair: the first number is the value and the second is the index where it appears in the array. Each node is the maximum of its two children. This tree is backed by an array (pretty much like a heap tree) where the children of the index i are in the indexes i*2+1 and i*2+2.
Then, for each element, it becomes easy to find the nearest greater elements (before and after each element).
To find the nearest greater element to the left, we go up in the tree searching for the first parent where the left node has value greater and the index lesser than the argument. The answer must be a child of this parent, then we go down in the tree looking for the rightmost node that satisfies the same condition.
Similarly, to find the nearest greater element to the right, we do the same, but looking for a right node with an index greater than the argument. And when going down, we look for the leftmost node that satisfies the condition.
Creating the cumulative frequency array: O(n log n)
From this structure, we can compute the frequency array, that tells how many times each element appears as maximum in the subarray list. We just have to count how many lesser elements are on the left and on the right of each element and multiply those values. For the example array ([1, 2, 3]), this would be:
[(1, 1), (2, 2), (3, 3)]
This means that 1 appears only once as maximum, 2 appears twice, etc.
But we need to answer range queries, so it's better to have a cumulative version of this array, that would look like:
[(1, 1), (2, 3), (3, 6)]
The (3, 6) means, for example, that there are 6 subarrays with maxima less than or equal to 3.
Answering q queries: O(q log n)
Then, to answer each query, you just have to make binary searches to find the value you want. For example. If you need to find the exact number of 3, you may want to do: query(F, 3) - query(F, 2). If you want to find those lesser than 3, you do: query(F, 2). If you want to find those greater than 3: query(F, float('inf')) - query(F, 3).
Implementation
I've implemented it in Python and it seems to work well.
import bisect
from collections import defaultdict
from math import log, ceil

def make_tree(A):
    n = 2**(int(ceil(log(len(A), 2))))
    # Unused leaves get (-inf, -1) so they never win a max() comparison.
    T = [(float('-inf'), -1)]*(2*n-1)
    for i, x in enumerate(A):
        T[n-1+i] = (x, i)
    for i in reversed(range(n-1)):
        T[i] = max(T[i*2+1], T[i*2+2])
    return T

def print_tree(T):
    print('digraph {')
    for i, x in enumerate(T):
        print(' ' + str(i) + '[label="' + str(x) + '"]')
        if i*2+2 < len(T):
            print(' ' + str(i) + '->' + str(i*2+1))
            print(' ' + str(i) + '->' + str(i*2+2))
    print('}')

def find_generic(T, i, fallback, check, first, second):
    j = len(T)//2 + i
    original = T[j]
    j = (j-1)//2
    # go up in the tree searching for a value that satisfies check
    while j > 0 and not check(T[second(j)], original):
        j = (j-1)//2
    # go down in the tree searching for the left/rightmost node that satisfies check
    while j*2+1 < len(T):
        if check(T[first(j)], original):
            j = first(j)
        elif check(T[second(j)], original):
            j = second(j)
        else:
            return fallback
    return j - len(T)//2

def find_left(T, i, fallback):
    return find_generic(T, i, fallback,
                        lambda a, b: a[0] > b[0] and a[1] < b[1],  # value greater, index before
                        lambda j: j*2+2,  # rightmost first
                        lambda j: j*2+1)  # leftmost second

def find_right(T, i, fallback):
    return find_generic(T, i, fallback,
                        lambda a, b: a[0] >= b[0] and a[1] > b[1],  # value greater or equal, index after
                        lambda j: j*2+1,  # leftmost first
                        lambda j: j*2+2)  # rightmost second

def make_frequency_array(A):
    T = make_tree(A)
    D = defaultdict(int)
    for i, x in enumerate(A):
        left = find_left(T, i, -1)
        right = find_right(T, i, len(A))
        D[x] += (i-left) * (right-i)
    F = sorted(D.items())
    for i in range(1, len(F)):
        F[i] = (F[i][0], F[i-1][1] + F[i][1])
    return F

def query(F, n):
    # number of generated elements whose value is <= n
    idx = bisect.bisect(F, (n, float('inf')))
    return F[idx-1][1] if idx > 0 else 0

F = make_frequency_array([1, 2, 3])
print(query(F, 3) - query(F, 2))             # 3 3
print(query(F, float('inf')) - query(F, 4))  # 1 4
print(query(F, float('inf')) - query(F, 1))  # 1 1
print(query(F, 2))                           # 2 3
Your problem can be divided into several steps:
For each element of the initial array, calculate the number of subarrays where the current element is the maximum. This involves a bit of combinatorics: first you need to know, for each element, the index of the previous and of the next element that is bigger than the current element; then the number of subarrays is (i - iprev) * (inext - i). Finding iprev and inext requires two traversals of the initial array, in forward and in backward order.
For iprev you traverse the array left to right. During the traversal, maintain a BST that contains the biggest of the previous elements along with their indexes. For each element of the original array, find the minimal element in the BST that is bigger than the current one; its index, stored as the value, will be iprev. Then remove from the BST all elements that are smaller than the current one. This operation should be O(log N), as you are removing whole subtrees; it is required because the current element you are about to add "overrides" all elements that are less than it. Then add the current element to the BST with its index as the value. At any point in time the BST stores a descending sequence of previous elements, each bigger than all the elements that came after it (for the previous elements {1,2,44,5,2,6,26,6} the BST will store {44,26,6}). The backward traversal to find inext is similar.
After the previous step you'll have pairs K→P, where K is the value of some element from the initial array and P is the number of subarrays where this element is the maximum. Now you need to group these pairs by K, i.e. calculate the sum of the P values over equal keys K. Be careful about the corner case where two equal elements could share the same subarrays: each subarray must be counted exactly once.
As Ritesh suggested: put all the grouped K→P pairs into an array, sort it by K, and calculate the cumulative sum of P in one pass. Then each query becomes a binary search in this sorted array and takes O(log N) time.
Create a sorted value-to-index map. For example,
[34,5,67,10,100] => {5:1, 10:3, 34:0, 67:2, 100:4}
Precalculate the queries in two passes over the value-to-index map:
Top to bottom - maintain an augmented tree of intervals. Each time an index is added,
split the appropriate interval and subtract the relevant segments from the total:
indexes     intervals      total sub-arrays with maximum greater than
4           (0,3)           67 => 15 - (4*5/2) = 5
2,4         (0,1)(3,3)      34 => 5 + (4*5/2) - 2*3/2 - 1 = 11
0,2,4       (1,1)(3,3)      10 => 11 + 2*3/2 - 1 = 13
3,0,2,4     (1,1)            5 => 13 + 1 = 14
Bottom to top - maintain an augmented tree of intervals. Each time an index is added,
adjust the appropriate interval and add the relevant segments to the total:
indexes     intervals      total sub-arrays with maximum less than
1           (1,1)           10 => 1*2/2 = 1
1,3         (1,1)(3,3)      34 => 1 + 1*2/2 = 2
0,1,3       (0,1)(3,3)      67 => 2 - 1 + 2*3/2 = 4
0,1,3,2     (0,3)          100 => 4 - 4 + 4*5/2 = 10
The third query can be pre-calculated along with the second:
indexes     intervals      total sub-arrays with maximum exactly
1           (1,1)            5 => 1
1,3         (3,3)           10 => 1
0,1,3       (0,1)           34 => 2
0,1,3,2     (0,3)           67 => 3 + 3 = 6
Insertion and deletion in augmented trees are of O(log n) time-complexity. Total precalculation time-complexity is O(n log n). Each query after that ought to be O(log n) time-complexity.
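A brute-force cross-check of the "maximum greater than" column above for the example array [34, 5, 67, 10, 100] (my addition; the real solution would of course use the augmented interval tree rather than enumerate all subarrays):

A = [34, 5, 67, 10, 100]
maxima = [max(A[l:r + 1]) for l in range(len(A)) for r in range(l, len(A))]
for k in (67, 34, 10, 5):
    print(k, sum(1 for m in maxima if m > k))   # prints 5, 11, 13, 14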
