Longest K Sequential Increasing Subsequences - arrays

Why I created a duplicate thread
I created this thread after reading Longest increasing subsequence with K exceptions allowed. I realised that the person asking the question hadn't really understood the problem, because he was referring to a link that solves the "Longest Increasing sub-array with one change allowed" problem. So the answers he got were actually irrelevant to the LIS problem.
Description of the problem
Suppose that an array A is given with length N.
Find the longest increasing sub-sequence with K exceptions allowed.
Example
1)
N=9 , K=1
A=[3,9,4,5,8,6,1,3,7]
Answer: 7
Explanation:
Longest increasing subsequence (with one exception) is: 3,4,5,8 (or 6),1 (exception),3,7 -> total = 7
2)
N=11 , K=2
A=[5,6,4,7,3,9,2,5,1,8,7]
Answer: 8
What I have done so far...
If K=1 then only one exception is allowed. Using the known algorithm for computing the Longest Increasing Subsequence in O(NlogN), we compute, for every index i, the length of the LIS of the prefix A[0..i]. We save the results in a new array L of size N. Looking at example n.1, the L array would be:
L=[1,2,2,3,4,4,4,4,5].
Using the reverse logic, we compute array R, where R[i] holds the length of the LIS of the suffix A[i..N-1] (computed as the longest decreasing sequence when scanning from N-1 down to 0).
The LIS with one exception is just sol=max(sol,L[i]+R[i+1]),
where sol is initialized as sol=L[N-1].
So we compute LIS from 0 until an index i (exception), then stop and start a new LIS until N-1.
A=[3,9,4,5,8,6,1,3,7]
L=[1,2,2,3,4,4,4,4,5]
R=[5,4,4,3,3,3,3,2,1]
Sol = 7
-> step by step explanation:
init: sol = L[N-1] = 5
i=0 : sol = max(sol,1+4) = 5
i=1 : sol = max(sol,2+4) = 6
i=2 : sol = max(sol,2+3) = 6
i=3 : sol = max(sol,3+3) = 6
i=4 : sol = max(sol,4+3) = 7
i=5 : sol = max(sol,4+3) = 7
i=6 : sol = max(sol,4+2) = 7
i=7 : sol = max(sol,4+1) = 7
Complexity :
O( NlogN + NlogN + N ) = O(NlogN)
because arrays R, L need NlogN time to compute and we also need Θ(N) in order to find sol.
Code for k=1 problem
#include <stdio.h>
#include <vector>
#include <algorithm>
std::vector<int> ends;
int index_search(int value, int asc) {
int l = -1;
int r = ends.size() - 1;
while (r - l > 1) {
int m = (r + l) / 2;
if (asc && ends[m] >= value)
r = m;
else if (asc && ends[m] < value)
l = m;
else if (!asc && ends[m] <= value)
r = m;
else
l = m;
}
return r;
}
int main(void) {
int n, *S, *L, *R, i, length, idx, max;
scanf("%d",&n);
S = new int[n];
L = new int[n];
R = new int[n];
for (i=0; i<n; i++) {
scanf("%d",&S[i]);
}
ends.push_back(S[0]);
length = 1;
L[0] = length;
for (i=1; i<n; i++) {
if (S[i] < ends[0]) {
ends[0] = S[i];
}
else if (S[i] > ends[length-1]) {
length++;
ends.push_back(S[i]);
}
else {
idx = index_search(S[i],1);
ends[idx] = S[i];
}
L[i] = length;
}
ends.clear();
ends.push_back(S[n-1]);
length = 1;
R[n-1] = length;
for (i=n-2; i>=0; i--) {
if (S[i] > ends[0]) {
ends[0] = S[i];
}
else if (S[i] < ends[length-1]) {
length++;
ends.push_back(S[i]);
}
else {
idx = index_search(S[i],0);
ends[idx] = S[i];
}
R[i] = length;
}
max = L[n-1];
for (i=0; i<n-1; i++) {
max = std::max(max,(L[i]+R[i+1]));
}
printf("%d\n",max);
return 0;
}
Generalization to K exceptions
I have provided an algorithm for K=1. I have no clue how to change the above algorithm to work for K exceptions. I would be glad if someone could help me.

This answer is modified from my answer to a similar question at Computer Science Stackexchange.
The LIS problem with at most k exceptions admits an O(n log² n) algorithm using Lagrangian relaxation. When k is larger than log n, this improves asymptotically on the O(nk log n) DP, which we will also briefly explain.
Let DP[a][b] denote the length of the longest increasing subsequence with at most b exceptions (positions where the previous integer is larger than the next one) ending at element a. This DP is not involved in the algorithm, but defining it makes proving the algorithm easier.
For convenience we will assume that all elements are distinct and that the last element in the array is its maximum. Note that this does not limit us, as we can just add m / 2n to the mth appearance of every number, and append infinity to the array and subtract one from the answer. Let V be the permutation for which 1 <= V[i] <= n is the value of the ith element.
To solve the problem in O(nk log n), we maintain the invariant that DP[a][b] has been calculated for b < j. Loop j from 0 to k, at the jth iteration calculating DP[a][j] for all a. To do this, loop i from 1 to n. We maintain the maximum of DP[x][j-1] over x < i and a prefix maximum data structure that at index i will have DP[x][j] at position V[x] for x < i, and 0 at every other position.
We have DP[i][j] = 1 + max(DP[i'][j], DP[x][j-1]) where we go over i', x < i, V[i'] < V[i]. The prefix maximum of DP[x][j-1] gives us the maximum of terms of the second type, and querying the prefix maximum data structure for prefix [0, V[i]] gives us the maximum of terms of the first type. Then update the prefix maximum and prefix maximum data structure.
Here is a C++ implementation of the algorithm. Note that this implementation does not assume that the last element of the array is its maximum, or that the array contains no duplicates.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
// Fenwick tree for prefix maximum queries
class Fenwick {
private:
vector<int> val;
public:
Fenwick(int n) : val(n+1, 0) {}
// Sets value at position i to the maximum of its current value and v
void inc(int i, int v) {
for (++i; i < val.size(); i += i & -i) val[i] = max(val[i], v);
}
// Calculates prefix maximum up to index i
int get(int i) {
int res = 0;
for (++i; i > 0; i -= i & -i) res = max(res, val[i]);
return res;
}
};
// Binary searches index of v from sorted vector
int bins(const vector<int>& vec, int v) {
int low = 0;
int high = (int)vec.size() - 1;
while(low != high) {
int mid = (low + high) / 2;
if (vec[mid] < v) low = mid + 1;
else high = mid;
}
return low;
}
// Compresses the range of values to [0, m), and returns m
int compress(vector<int>& vec) {
vector<int> ord = vec;
sort(ord.begin(), ord.end());
ord.erase(unique(ord.begin(), ord.end()), ord.end());
for (int& v : vec) v = bins(ord, v);
return ord.size();
}
// Returns length of longest strictly increasing subsequence with at most k exceptions
int lisExc(int k, vector<int> vec) {
int n = vec.size();
int m = compress(vec);
vector<int> dp(n, 0);
for (int j = 0;; ++j) {
Fenwick fenw(m+1); // longest subsequence with at most j exceptions ending at this value
int max_exc = 0; // longest subsequence with at most j-1 exceptions ending before this
for (int i = 0; i < n; ++i) {
int off = 1 + max(max_exc, fenw.get(vec[i]));
max_exc = max(max_exc, dp[i]);
dp[i] = off;
fenw.inc(vec[i]+1, off);
}
if (j == k) return fenw.get(m);
}
}
int main() {
int n, k;
cin >> n >> k;
vector<int> vec(n);
for (int i = 0; i < n; ++i) cin >> vec[i];
int res = lisExc(k, vec);
cout << res << '\n';
}
Now we will return to the O(n log² n) algorithm. Select some integer 0 <= r <= n. Define DP'[a][r] = max(DP[a][b] - rb), where the maximum is taken over b, MAXB[a][r] as the maximum b such that DP'[a][r] = DP[a][b] - rb, and MINB[a][r] similarly as the minimum such b. We will show that DP[a][k] = DP'[a][r] + rk if and only if MINB[a][r] <= k <= MAXB[a][r]. Further, we will show that for any k exists an r for which this inequality holds.
Note that MINB[a][r] >= MINB[a][r'] and MAXB[a][r] >= MAXB[a][r'] if r < r', hence if we assume the two claimed results, we can do binary search for the r, trying O(log n) values. Hence we achieve complexity O(n log² n) if we can calculate DP', MINB and MAXB in O(n log n) time.
To do this, we will need a segment tree that stores tuples P[i] = (v_i, low_i, high_i), and supports the following operations:
Given a range [a, b], find the maximum value in that range (maximum v_i, a <= i <= b), and the minimum low and maximum high paired with that value in the range.
Set the value of the tuple P[i]
This is easy to implement with complexity O(log n) time per operation assuming some familiarity with segment trees. You can refer to the implementation of the algorithm below for details.
We will now show how to compute DP', MINB and MAXB in O(n log n). Fix r. Build the segment tree initially containing n+1 null values (-INF, INF, -INF). We maintain that P[V[j]] = (DP'[j], MINB[j], MAXB[j]) for j less than the current position i. Set DP'[0] = 0, MINB[0] = 0 and MAXB[0] to 0 if r > 0, otherwise to INF and P[0] = (DP'[0], MINB[0], MAXB[0]).
Loop i from 1 to n. There are two types of subsequences ending at i: those where the previous element is greater than V[i], and those where it is less than V[i]. To account for the second kind, query the segment tree in the range [0, V[i]]. Let the result be (v_1, low_1, high_1). Set off1 = (v_1 + 1, low_1, high_1). For the first kind, query the segment tree in the range [V[i], n]. Let the result be (v_2, low_2, high_2). Set off2 = (v_2 + 1 - r, low_2 + 1, high_2 + 1), where we incur the penalty of r for creating an exception.
Then we combine off1 and off2 into off. If off1.v > off2.v set off = off1, and if off2.v > off1.v set off = off2. Otherwise, set off = (off1.v, min(off1.low, off2.low), max(off1.high, off2.high)). Then set DP'[i] = off.v, MINB[i] = off.low, MAXB[i] = off.high and P[i] = off.
Since we make two segment tree queries at every i, this takes O(n log n) time in total. It is easy to prove by induction that we compute the correct values DP', MINB and MAXB.
So in short, the algorithm is:
Preprocess, modifying values so that they form a permutation, and the last value is the largest value.
Binary search for the correct r, with initial bounds 0 <= r <= n
Initialise the segment tree with null values, set DP'[0], MINB[0] and MAXB[0].
Loop from i = 1 to n, at step i
Querying ranges [0, V[i]] and [V[i], n] of the segment tree,
calculating DP'[i], MINB[i] and MAXB[i] based on those queries, and
setting the value at position V[i] in the segment tree to the tuple (DP'[i], MINB[i], MAXB[i]).
If MINB[n][r] <= k <= MAXB[n][r], return DP'[n][r] + kr - 1.
Otherwise, if MAXB[n][r] < k, the correct r is less than the current r. If MINB[n][r] > k, the correct r is greater than the current r. Update the bounds on r and repeat from the segment tree initialisation step.
Here is a C++ implementation for this algorithm. It also finds the optimal subsequence.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
using ll = long long;
const int INF = 2 * (int)1e9;
pair<ll, pair<int, int>> combine(pair<ll, pair<int, int>> le, pair<ll, pair<int, int>> ri) {
if (le.first < ri.first) swap(le, ri);
if (ri.first == le.first) {
le.second.first = min(le.second.first, ri.second.first);
le.second.second = max(le.second.second, ri.second.second);
}
return le;
}
// Specialised range maximum segment tree
class SegTree {
private:
vector<pair<ll, pair<int, int>>> seg;
int h = 1;
pair<ll, pair<int, int>> recGet(int a, int b, int i, int le, int ri) const {
if (ri <= a || b <= le) return {-INF, {INF, -INF}};
else if (a <= le && ri <= b) return seg[i];
else return combine(recGet(a, b, 2*i, le, (le+ri)/2), recGet(a, b, 2*i+1, (le+ri)/2, ri));
}
public:
SegTree(int n) {
while(h < n) h *= 2;
seg.resize(2*h, {-INF, {INF, -INF}});
}
void set(int i, pair<ll, pair<int, int>> off) {
seg[i+h] = combine(seg[i+h], off);
for (i += h; i > 1; i /= 2) seg[i/2] = combine(seg[i], seg[i^1]);
}
pair<ll, pair<int, int>> get(int a, int b) const {
return recGet(a, b+1, 1, 0, h);
}
};
// Binary searches index of v from sorted vector
int bins(const vector<int>& vec, int v) {
int low = 0;
int high = (int)vec.size() - 1;
while(low != high) {
int mid = (low + high) / 2;
if (vec[mid] < v) low = mid + 1;
else high = mid;
}
return low;
}
// Finds longest strictly increasing subsequence with at most k exceptions in O(n log^2 n)
vector<int> lisExc(int k, vector<int> vec) {
// Compress values
vector<int> ord = vec;
sort(ord.begin(), ord.end());
ord.erase(unique(ord.begin(), ord.end()), ord.end());
for (auto& v : vec) v = bins(ord, v) + 1;
// Binary search lambda
int n = vec.size();
int m = ord.size() + 1;
int lambda_0 = 0;
int lambda_1 = n;
while(true) {
int lambda = (lambda_0 + lambda_1) / 2;
SegTree seg(m);
if (lambda > 0) seg.set(0, {0, {0, 0}});
else seg.set(0, {0, {0, INF}});
// Calculate DP
vector<pair<ll, pair<int, int>>> dp(n);
for (int i = 0; i < n; ++i) {
auto off0 = seg.get(0, vec[i]-1); // previous < this
off0.first += 1;
auto off1 = seg.get(vec[i], m-1); // previous >= this
off1.first += 1 - lambda;
off1.second.first += 1;
off1.second.second += 1;
dp[i] = combine(off0, off1);
seg.set(vec[i], dp[i]);
}
// Is min_b <= k <= max_b?
auto off = seg.get(0, m-1);
if (off.second.second < k) {
lambda_1 = lambda - 1;
} else if (off.second.first > k) {
lambda_0 = lambda + 1;
} else {
// Construct solution
ll r = off.first + 1;
int v = m;
int b = k;
vector<int> res;
for (int i = n-1; i >= 0; --i) {
if (vec[i] < v) {
if (r == dp[i].first + 1 && dp[i].second.first <= b && b <= dp[i].second.second) {
res.push_back(i);
r -= 1;
v = vec[i];
}
} else {
if (r == dp[i].first + 1 - lambda && dp[i].second.first <= b-1 && b-1 <= dp[i].second.second) {
res.push_back(i);
r -= 1 - lambda;
v = vec[i];
--b;
}
}
}
reverse(res.begin(), res.end());
return res;
}
}
}
int main() {
int n, k;
cin >> n >> k;
vector<int> vec(n);
for (int i = 0; i < n; ++i) cin >> vec[i];
vector<int> ans = lisExc(k, vec);
for (auto i : ans) cout << i+1 << ' ';
cout << '\n';
}
We will now prove the two claims. We wish to prove that
DP'[a][r] = DP[a][b] - rb if and only if MINB[a][r] <= b <= MAXB[a][r]
For all a, k there exists an integer r, 0 <= r <= n, such that MINB[a][r] <= k <= MAXB[a][r]
Both of these follow from the concavity of the problem. Concavity means that DP[a][k+2] - DP[a][k+1] <= DP[a][k+1] - DP[a][k] for all a, k. This is intuitive: the more exceptions we are allowed to make, the less allowing one more helps us.
Fix a and r. Set f(b) = DP[a][b] - rb, and d(b) = f(b+1) - f(b). We have d(k+1) <= d(k) from the concavity of the problem. Assume x < y and f(x) = f(y) >= f(i) for all i. Hence d(x) <= 0, thus d(i) <= 0 for i in [x, y). But f(y) = f(x) + d(x) + d(x + 1) + ... + d(y - 1), hence d(i) = 0 for i in [x, y). Hence f(y) = f(x) = f(i) for i in [x, y]. This proves the first claim.
To prove the second, set r = DP[a][k+1] - DP[a][k] and define f, d as previously. Then d(k) = 0, hence d(i) >= 0 for i < k and d(i) <= 0 for i > k, hence f(k) is maximal as desired.
Proving concavity is more difficult. For a proof, see my answer at cs.stackexchange.

Related

Find the number of subarrays of odd lengths that have a median equal to k

Find the number of subarrays of odd lengths that have a median equal to k.
For example: array = [5,3,1,4,7,7], k=4 then there are 4 odd length subarrays with 4 as their median: [4], [1,4,7], [5,3,1,4,7], [3,1,4,7,7] therefore return 4 as the answer.
Can anyone please help me with this subarray problem? I'm not sure how to get the output.
I recently encountered this problem in an Online Assessment.
However, 'k' is an index, with 1 <= k <= n where n is the length of the array.
We have to find how many odd-length subarrays have arr[k] as their median. This is the only hint we need.
Since subarrays are of odd length, a candidate subarray is (arr[k] alone), or (1 element on the left and right), (2 elements on the left and right), and so on...
We can maintain smaller and bigger arrays of length n and populate them as follows:
if(arr[i] < arr[k])
smaller[i] = 1;
else
smaller[i] = 0;
for elements bigger than arr[k]:
if(arr[i] > arr[k])
bigger[i] = 1;
else
bigger[i] = 0;
Prefix sums over these arrays let us count, for any range i...j with i <= j, the smaller and bigger elements with respect to arr[k].
For arr[k] to be the median in the range [i, j], the following condition has to hold (with smaller[] and bigger[] taken as prefix counts here):
(smaller[j] - smaller[i - 1]) = (bigger[j] - bigger[i - 1])
In other words, the difference between the number of smaller and bigger elements in the range [i, j] is 0.
we create new array d of length n, such that
d[i] = smaller[i] - bigger[i]
Now the problem reduces to finding the number of subarrays having a sum of 0.
But not all subarrays having sum 0 are useful to us.
We don't care about the subarrays that do not include index k. So,
ans = subarray_sum_zero(1, n, d) - subarray_sum_zero(1, k - 1, d) - subarray_sum_zero(k + 1, n, d)
where subarray_sum_zero(lo, hi, d) counts the zero-sum subarrays of d lying within the range [lo, hi].
You can count the subarrays with a given sum using a hash map in linear time.
The overall runtime complexity is O(n) and the space complexity is O(n).
It should be able to pass tests with n = 1e5.
#adf_hater's logic is correct (because the median is the middle element, the number of smaller elements has to equal the number of bigger elements). Here is code using the same logic:
#include <iostream>
#include <vector>
#include <unordered_map>
using namespace std;

// Counts zero-sum subarrays of v in the index range [start, end)
int sum(int start, int end, vector<int>& v) {
    unordered_map<int, int> prevSum;
    int res = 0, currSum = 0;
    for (int i = start; i < end; i++) {
        currSum += v[i];
        if (currSum == 0)
            res++;
        if (prevSum.find(currSum) != prevSum.end())
            res += prevSum[currSum];
        prevSum[currSum]++;
    }
    return res;
}

void solve(int n, vector<int>& v, int k) {
    vector<int> smaller(n, 0), bigger(n, 0), d(n, 0);
    k -= 1; // convert 1-indexed k to 0-indexed
    for (int i = 0; i < n; i++)
        smaller[i] = v[i] < v[k];
    for (int i = 0; i < n; i++)
        bigger[i] = v[i] > v[k];
    for (int i = 0; i < n; i++)
        d[i] = smaller[i] - bigger[i];
    cout << sum(0, n, d) - sum(0, k, d) - sum(k + 1, n, d);
}
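For reference, a minimal driver (my addition) for the example from the question; with the functions above in scope it should print 4:

int main() {
    vector<int> v = {5, 3, 1, 4, 7, 7};
    solve(6, v, 4); // arr[4] = 4 (1-indexed k) is the median; prints 4
    return 0;
}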

Search unsorted array for 3 elements which sum to a value

I am trying to make an algorithm of Θ(n²).
It accepts an unsorted array of n elements, and an integer z,
and has to return 3 indices of 3 different elements a, b, c, so that a + b + c = z
(return NIL if no such integers were found).
(return NILL if no such integers were found)
I tried to sort the array first, in two ways, and then to search the sorted array,
but since I need a specific running time for the rest of the algorithm, I am getting lost.
Is there any way to do it without sorting? (I guess it does have to be sorted.) Either with or without sorting would be good.
example:
for this array : 1, 3, 4, 2, 6, 7, 9 and the integer 6
It has to return: 0, 1, 3
because ( 1+3+2 = 6)
Algorithm
Sort - O(nlogn)
for i = 0 ... n-1 - O(1) assigning value to i
new_z = z - array[i]; this value is updated each iteration. Now, search for new_z using two pointers, at the beginning (index 0) and the end (index n-1). If the sum (array[ptr_begin] + array[ptr_end]) is greater than new_z, decrement the end pointer. If smaller, increment the begin pointer. Otherwise return i and the current positions of begin and end. - O(n)
jump to step 2 - O(1)
Steps 2, 3 and 4 cost O(n²). Overall, O(n²).
C++ code
#include <iostream>
#include <vector>
#include <algorithm>
int main()
{
std::vector<int> vec = {3, 1, 4, 2, 9, 7, 6};
std::sort(vec.begin(), vec.end());
int z = 6;
int no_success = 1;
//std::for_each(vec.begin(), vec.end(), [](auto const &it) { std::cout << it << std::endl;});
for (int i = 0; i < vec.size() && no_success; i++)
{
int begin_ptr = 0;
int end_ptr = vec.size()-1;
int new_z = z-vec[i];
while (end_ptr > begin_ptr)
{
if(begin_ptr == i)
begin_ptr++;
if (end_ptr == i)
end_ptr--;
if (begin_ptr >= end_ptr) // pointers may have crossed after skipping index i
break;
if ((vec[begin_ptr] + vec[end_ptr]) > new_z)
end_ptr--;
else if ((vec[begin_ptr] + vec[end_ptr]) < new_z)
begin_ptr++;
else {
std::cout << "indices are: " << end_ptr << ", " << begin_ptr << ", " << i << std::endl;
no_success = 0;
break;
}
}
}
return 0;
}
Beware, the result is indices into the sorted array. You can keep the original array and then look up the positions of the chosen values in it (3 passes of O(n)).
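One way to do that recovery (a minimal sketch, my addition): sort (value, original index) pairs instead of bare values, run the same two-pointer search on the .first components, and report the stored .second indices.

#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    std::vector<int> orig = {3, 1, 4, 2, 9, 7, 6};
    // keep each value's original position alongside it while sorting
    std::vector<std::pair<int, int>> vec;
    for (int i = 0; i < (int)orig.size(); i++)
        vec.push_back({orig[i], i});
    std::sort(vec.begin(), vec.end());
    // the two-pointer search would compare vec[...].first and, on success,
    // report vec[...].second as the original indices
    for (const auto& p : vec)
        std::cout << p.first << " (original index " << p.second << ")\n";
    return 0;
}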
The solution for the 3 elements which sum to a value (say v) can be done in O(n^2), where n is the length of the array, as follows:
Sort the given array. [ O(nlogn) ]
Fix the first element , say e1. (iterating from i = 0 to n - 1)
Now we have to find the sum of 2 elements sum to a value (v - e1) in range from i + 1 to n - 1. We can solve this sub-problem in O(n) time complexity using two pointers where left pointer will be pointing at i + 1 and right pointer will be pointing at n - 1 at the beginning. Now we will move our pointers either from left or right depending upon the total current sum is greater than or less than required sum.
So, overall time complexity of the solution will be O(n ^ 2).
Update:
I attached a solution in C++ for reference (also added comments to explain the time complexity).
#include <vector>
#include <algorithm>
using namespace std;

vector<int> sumOfthreeElements(vector<int>& ar, int v) {
sort(ar.begin(), ar.end());
int n = ar.size();
for(int i = 0; i < n - 2 ; ++i){ //outer loop runs `n` times
//for every outer loop inner loops runs upto `n` times
//therefore, overall time complexity is O(n^2).
int lo = i + 1;
int hi = n - 1;
int required_sum = v - ar[i];
while(lo < hi) {
int current_sum = ar[lo] + ar[hi];
if(current_sum == required_sum) {
return {i, lo, hi};
} else if(current_sum > required_sum){
hi--;
}else lo++;
}
}
return {};
}
I guess this is similar to LeetCode 15 and 16:
LeetCode 16
Python
class Solution:
    def threeSumClosest(self, nums, target):
        nums.sort()
        closest = nums[0] + nums[1] + nums[2]
        for i in range(len(nums) - 2):
            j = i + 1
            k = len(nums) - 1
            while j < k:
                summation = nums[i] + nums[j] + nums[k]
                if summation == target:
                    return summation
                if abs(summation - target) < abs(closest - target):
                    closest = summation
                if summation < target:
                    j += 1
                elif summation > target:
                    k -= 1
        return closest
Java
class Solution {
public int threeSumClosest(int[] nums, int target) {
Arrays.sort(nums);
int closest = nums[0] + nums[nums.length >> 1] + nums[nums.length - 1];
for (int first = 0; first < nums.length - 2; first++) {
int second = first + 1;
int third = nums.length - 1;
while (second < third) {
int sum = nums[first] + nums[second] + nums[third];
if (sum > target)
third--;
else
second++;
if (Math.abs(sum - target) < Math.abs(closest - target))
closest = sum;
}
}
return closest;
}
}
LeetCode 15
Python
class Solution:
    def threeSum(self, nums):
        res = []
        nums.sort()
        for i in range(len(nums) - 2):
            if i > 0 and nums[i] == nums[i - 1]:
                continue
            lo, hi = i + 1, len(nums) - 1
            while lo < hi:
                tsum = nums[i] + nums[lo] + nums[hi]
                if tsum < 0:
                    lo += 1
                if tsum > 0:
                    hi -= 1
                if tsum == 0:
                    res.append((nums[i], nums[lo], nums[hi]))
                    while lo < hi and nums[lo] == nums[lo + 1]:
                        lo += 1
                    while lo < hi and nums[hi] == nums[hi - 1]:
                        hi -= 1
                    lo += 1
                    hi -= 1
        return res
Java
class Solution {
public List<List<Integer>> threeSum(int[] nums) {
Arrays.sort(nums);
List<List<Integer>> res = new LinkedList<>();
for (int i = 0; i < nums.length - 2; i++) {
if (i == 0 || (i > 0 && nums[i] != nums[i - 1])) {
int lo = i + 1, hi = nums.length - 1, sum = 0 - nums[i];
while (lo < hi) {
if (nums[lo] + nums[hi] == sum) {
res.add(Arrays.asList(nums[i], nums[lo], nums[hi]));
while (lo < hi && nums[lo] == nums[lo + 1])
lo++;
while (lo < hi && nums[hi] == nums[hi - 1])
hi--;
lo++;
hi--;
} else if (nums[lo] + nums[hi] < sum) {
lo++;
} else {
hi--;
}
}
}
}
return res;
}
}
Reference
You can see the explanations in the following links:
LeetCode 15 - Discussion Board
LeetCode 16 - Discussion Board
LeetCode 15 - Solution
You can use something like:
def find_3sum_restr(items, z):
    # : find possible items to consider -- O(n)
    candidates = []
    min_item = items[0]
    for i, item in enumerate(items):
        if item < z:
            candidates.append(i)
        if item < min_item:
            min_item = item
    # : find possible couples to consider -- O(n²)
    candidates2 = []
    for k, i in enumerate(candidates):
        for j in candidates[k + 1:]:
            if items[i] + items[j] <= z - min_item:
                candidates2.append([i, j])
    # : find the matching items (at distinct indices) -- O(n³)
    for i, j in candidates2:
        for k in candidates:
            if k != i and k != j and items[i] + items[j] + items[k] == z:
                return i, j, k
This is O(n + n² + n³), hence O(n³).
While this is reasonably fast for randomly distributed inputs (perhaps O(n²)?), unfortunately, in the worst case (e.g. for an array of all ones, with a z > 3), this is no better than the naive approach:
def find_3sum_naive(items, z):
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if items[i] + items[j] + items[k] == z:
                    return i, j, k

Finding a number that is sum of two other numbers in a sorted array

As it said in the topic, I have to check if there is a number that is the sum of two other numbers in a sorted array.
In the first part of the question (for an unsorted array) I wrote a solution: just doing 3 loops and checking all the combinations.
Now, I can't understand how to build the most efficient algorithm to do the same, but with a sorted array.
Numbers are of type int (negative or positive) and any number can appear more than once.
Can somebody give a clue about this logic problem?
None of the solutions given solve the question asked. The question asks to find a number inside the array that equals the sum of two other numbers in the same array. We aren't given a target sum beforehand; we're just given an array.
I've come up with a solution that runs in O(n²) time, with O(1) extra space in the best case and O(n) in the worst case (depending on the sort):
def hasSumOfTwoOthers(nums):
    nums.sort()
    for i in range(len(nums)):
        left, right = 0, len(nums) - 1
        while left < right:
            # skip the target element itself: we need two *other* elements
            if left == i:
                left += 1
                continue
            if right == i:
                right -= 1
                continue
            s = nums[left] + nums[right]
            if s == nums[i]:
                return True
            if s < nums[i]:
                left += 1
            else:
                right -= 1
    return False
This yields the following results:
ans = hasSumOfTwoOthers([1,3,2,5,3,6])
# Returns True
ans = hasSumOfTwoOthers([1,5,3,5,9,7])
# Returns False
Here I am doing it using C:
Given an array A[] of n numbers and another number x, determine whether or not there exist two elements in A whose sum is exactly x.
METHOD 1 (Use Sorting)
Algorithm:
hasArrayTwoCandidates (A[], ar_size, sum)
1) Sort the array in non-decreasing order.
2) Initialize two index variables to find the candidate elements in the sorted array.
(a) Initialize first to the leftmost index: l = 0
(b) Initialize the second to the rightmost index: r = ar_size-1
3) Loop while l < r.
(a) If (A[l] + A[r] == sum) then return 1
(b) Else if( A[l] + A[r] < sum ) then l++
(c) Else r--
4) No candidates in whole array - return 0
Example:
Let the array be {1, 4, 45, 6, 10, -8} and the sum to find be 16
Sort the array
A = {-8, 1, 4, 6, 10, 45}
Initialize l = 0, r = 5
A[l] + A[r] (-8 + 45) > 16 => decrement r. Now r = 4
A[l] + A[r] (-8 + 10) < 16 => increment l. Now l = 1
A[l] + A[r] (1 + 10) < 16 => increment l. Now l = 2
A[l] + A[r] (4 + 10) < 16 => increment l. Now l = 3
A[l] + A[r] (6 + 10) == 16 => Found candidates (return 1)
Implementation:
# include <stdio.h>
# define bool int
void quickSort(int *, int, int);
bool hasArrayTwoCandidates(int A[], int arr_size, int sum)
{
int l, r;
/* Sort the elements */
quickSort(A, 0, arr_size-1);
/* Now look for the two candidates in the sorted
array*/
l = 0;
r = arr_size-1;
while(l < r)
{
if(A[l] + A[r] == sum)
return 1;
else if(A[l] + A[r] < sum)
l++;
else // A[l] + A[r] > sum
r--;
}
return 0;
}
/* Driver program to test above function */
int main()
{
int A[] = {1, 4, 45, 6, 10, -8};
int n = 16;
int arr_size = 6;
if( hasArrayTwoCandidates(A, arr_size, n))
printf("Array has two elements with sum 16");
else
printf("Array doesn't have two elements with sum 16 ");
getchar();
return 0;
}
/* FOLLOWING FUNCTIONS ARE ONLY FOR SORTING
PURPOSE */
void exchange(int *a, int *b)
{
int temp;
temp = *a;
*a = *b;
*b = temp;
}
int partition(int A[], int si, int ei)
{
int x = A[ei];
int i = (si - 1);
int j;
for (j = si; j <= ei - 1; j++)
{
if(A[j] <= x)
{
i++;
exchange(&A[i], &A[j]);
}
}
exchange (&A[i + 1], &A[ei]);
return (i + 1);
}
/* Implementation of Quick Sort
A[] --> Array to be sorted
si --> Starting index
ei --> Ending index
*/
void quickSort(int A[], int si, int ei)
{
int pi; /* Partitioning index */
if(si < ei)
{
pi = partition(A, si, ei);
quickSort(A, si, pi - 1);
quickSort(A, pi + 1, ei);
}
}
This one is using a Hash Set in Java; it is O(n) complexity.
public static void findPair3ProPrint(int[] array, int sum) {
Set<Integer> hs = new HashSet<Integer>();
for (int i : array) {
if (hs.contains(sum-i)) {
System.out.print("(" + i + ", " + (sum-i) + ")" + " ");
}else{
hs.add(i);
}
}
}
An efficient way to do this would be using sorting and then a binary search.
Suppose the two numbers are x and y, x+y=SUM
For each x, search the array for the element SUM-x
Sort the array using mergesort.
Then for each element a[i] in the array a, do a binary search for the element (SUM - a[i])
This algorithm should work in O(nlgn).
Here, binarysearch returns the index of the search key if found, else it returns -1.
SIZE is the array size.
for(int i=0;i<SIZE;i++)
{
int ind=binarysearch(SUM-a[i]);
if(ind >= 0 && ind != i) /* found, and not the element a[i] itself */
printf("sum=%d + %d\n a[%d] + a[%d]\n"
,a[i],a[ind],i,ind);
}
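For completeness, here is a minimal sketch (my addition) of the binarysearch helper, assuming, as the snippet does, a sorted global array a[] of length SIZE:

/* Returns the index of key in the sorted global array a[] of length SIZE,
   or -1 if key is not present. */
int binarysearch(int key)
{
    int lo = 0, hi = SIZE - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2; /* avoids overflow of (lo + hi) */
        if (a[mid] == key)
            return mid;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}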

Range values in C

So I want to solve a problem in C.
We have 10 numbers {1,1,8,1,1,3,4,9,5,2} in an array. We break the array into 3 pieces A, B, C.
And we make the below procedure (I preferred to create a small diagram so you can understand me better). Diagram here
As you see this isn't all of the procedure, just the start of it.
I created the code below but I am getting wrong results. What have I missed?
#include <stdio.h>
#include <limits.h>
#define N 10
int sum_array(int* array, int first, int last) {
int res = 0;
for (int i = first ; i <= last ; i++) {
res += array[i];
}
return res;
}
int main(){
int array[N] = {1,1,8,1,1,3,4,9,5,2};
int Min = 0;
for (int A = 1; A < N - 2; A++) {
int ProfitA = sum_array(array, 0 , A-1);
int ProfitB = array[A];
int ProfitC = sum_array(array,A+1,N-1);
for (int B = 1; B < N - 1; B++) {
//here the values are "current" - valid
int temp = (ProfitA < ProfitB) ? ProfitA : ProfitB;
Min = (ProfitC < temp) ? ProfitC : temp;
//Min = std::min(std::min(ProfitA,ProfitB),ProfitC);
if (Min > INT_MAX){
Min = INT_MAX;
}
//and here they are being prepared for the next iteration
ProfitB = ProfitB + array[A+B-1];
ProfitC = ProfitC - array[A+B];
}
}
printf("%d", Min);
return 0;
}
The complexity of the program is O(n(n+n)) = O(n²).
The number of ways to split the array is 1 + 0.5*N*(N-3), where N is the number of elements in the array; this equals (N-1)(N-2)/2, the number of ways to place the two cut points.
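A quick check of this count (my addition): the loop bounds of the C reference implementation further below enumerate exactly these splits, and for N = 10 both expressions give 36:

#include <stdio.h>

int main(void)
{
    const int N = 10;
    int count = 0;
    for (int A = 0; A < N - 2; ++A)         /* piece A ends at index A */
        for (int B = A + 1; B < N - 1; ++B) /* piece B ends at index B, C is the rest */
            ++count;
    printf("%d == %d\n", count, 1 + N * (N - 3) / 2); /* prints "36 == 36" */
    return 0;
}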
Here is the first thought of the program in pseudocode. Complexity O(n³)
//initialization, fills salary array
n:= length of salary array
best_min_maximum:=infinity
current_min_maximum:=infinity
best_bound_pos1 :=0
best_bound_pos2 :=0
for i = 0 .. (n-2):
    for j = (i+1) .. (n-1):
        current_min_maximum = max_bros_profit(salary, i, j)
        if current_min_maximum < best_min_maximum:
            best_min_maximum := current_min_maximum
            best_bound_pos1 := i
            best_bound_pos2 := j
max_bros_profit(profit_array, position_of_bound_1, position_of_bound_2)
so max_bros_profit([8 5 7 9 6 2 1 5], 1 (== 1st space between days, counted from 0), 3) is interpreted as:
8 . 5 | 7 . 9 | 6 . 2 . 1 . 5 - which returns the max sum of [8 5], [7 9], [6 2 1 5] => 16
    ^ - ^ - ^ - ^ - ^ - ^ - ^
    0 , 1 , 2 , 3 , 4 , 5 , 6
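A minimal C++ sketch (my addition) of the max_bros_profit helper the pseudocode assumes, using the bound convention from the example above (bounds are spaces between elements, counted from 0):

#include <algorithm>
#include <vector>

int max_bros_profit(const std::vector<int>& profit, int bound1, int bound2) {
    int sums[3] = {0, 0, 0};
    for (int i = 0; i < (int)profit.size(); ++i) {
        if (i <= bound1)      sums[0] += profit[i]; // piece A: [0, bound1]
        else if (i <= bound2) sums[1] += profit[i]; // piece B: (bound1, bound2]
        else                  sums[2] += profit[i]; // piece C: the rest
    }
    // e.g. max_bros_profit({8,5,7,9,6,2,1,5}, 1, 3) == max(13, 16, 14) == 16
    return std::max({sums[0], sums[1], sums[2]});
}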
This is my take. It is a greedy algorithm that starts with a maximal B range and then chops off values one after another until the result cannot be improved. It has complexity O(n).
#include <iostream>
#include <utility>
#include <array>
#include <algorithm>
#include <cassert>
// Splits an array `arr` into three sections A,B,C.
// Returns the indices to the first element of B and C.
// (the first element of A obviously has index 0)
template <typename T, ::std::size_t len>
::std::pair<::std::size_t,::std::size_t> split(T const (& arr)[len]) {
assert(len > 2);
// initialise the starting indices of section A, B, and C
// such that A: {0}, B: {1,...,len-2}, C: {len-1}
::std::array<::std::size_t,3> idx = {0,1,len-1};
// initialise the preliminary sum of all sections
::std::array<T,3> sum = {arr[0],arr[1],arr[len-1]};
for (::std::size_t i = 2; i < len-1; ++i)
sum[1] += arr[i];
// the preliminary maximum
T max = ::std::max({ sum[0], sum[1], sum[2] });
// now we iterate until section B is not empty
while ((idx[1]+1) < idx[2]) {
// in our effort to shrink B, we must decide whether to cut of the
// left-most element to A or the right-most element to C.
// So we figure out what the new sum of A and C would be if we
// did so.
T const left = (sum[0] + arr[idx[1]]);
T const right = (sum[2] + arr[idx[2]-1]);
// We always fill the smaller section first, so if A would be
// smaller than C, we slice an element off to A.
if (left <= right && left <= max) {
// We only have to update the sums to the newly computed value.
// Also we have to move the starting index of B one
// element to the right
sum[0] = left;
sum[1] -= arr[idx[1]++];
// update the maximum section sum
max = ::std::max(sum[1],sum[2]); // left cannot be greater
} else if (right < left && right <= max) {
// Similar to the other case, but here we move the starting
// index of C one to the left, effectively shrinking B.
sum[2] = right;
sum[1] -= arr[--idx[2]];
// update the maximum section sum
max = ::std::max(sum[1],sum[0]); // right cannot be greater
} else break;
}
// Finally, once we're done, we return the first index to
// B and to C, so the caller knows how our partitioning looks like.
return ::std::make_pair(idx[1],idx[2]);
}
It returns the index to the start of the B range and the index to the start of the C range.
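For completeness, a short usage sketch (my addition) calling split() on the array from the question, with the template above in scope:

#include <iostream>

int main() {
    int const arr[] = {1, 1, 8, 1, 1, 3, 4, 9, 5, 2};
    auto const bounds = split(arr);
    std::cout << "B starts at index " << bounds.first
              << ", C starts at index " << bounds.second << '\n';
    return 0;
}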
This is your pseudocode in C (just for reference because you tagged your problem with C++ yet want a C only solution). Still, the greedy solution that bitmask provided above is a better O(N) solution; you should try to implement that algorithm instead.
#include <stdio.h>
#include <stdint.h>
#include <limits.h>
#define N 10
int sum_array(int* array, int cnt)
{
int res = 0;
int i;
for ( i = 0; i < cnt ; ++i)
res += array[i];
return res;
}
int main()
{
int array[N] = {1,1,8,1,1,3,4,9,5,2};
int Min = 0;
int bestA = 0, bestB = 0, bestMin = INT_MAX;
int A, B;
int i;
for ( A = 0; A < N - 2; ++A)
{
for ( B = A + 1; B < N - 1; ++B)
{
int ProfitA = sum_array(array, A + 1);
int ProfitB = sum_array(array + A + 1, B - A );
int ProfitC = sum_array(array + B + 1, N - 1 - B );
//here the values are "current" - valid
Min = (ProfitA > ProfitB) ? ProfitA : ProfitB;
Min = (ProfitC > Min) ? ProfitC : Min;
if( Min < bestMin )
bestA = A, bestB = B, bestMin = Min;
#if 0
printf( "%2d,%2d or (%3d,%3d,%3d) ", A, B, ProfitA, ProfitB, ProfitC );
for( i = 0; i < N; ++i )
printf( "%d%c", array[i], ( ( i == A ) || ( i == B ) ) ? '|' : ' ' );
printf( " ==> %d\n", Min);
#endif
}
}
printf("%d # %d, %d\n", bestMin, bestA, bestB);
return 0;
}
I made this solution before you removed the [C++] tag so I thought I'd go ahead and post it.
It runs in O(n*n):
#include <algorithm>
#include <iostream>
#include <iomanip>
#include <map>
#include <numeric>
#include <string>
#include <vector>
using namespace std;

int main() {
    const vector<int> foo{ 1, 1, 8, 1, 1, 3, 4, 9, 5, 2 }; // Assumed to be of at least size 3. For pretty printing each element is assumed to be less than 10
    map<vector<int>::const_iterator, pair<int, string>> bar; // A map with key: the beginning of the C partition and value: the sum and string of that partition of C
    auto mapSum = accumulate(next(foo.cbegin(), 2), foo.cend(), 0); // Find the largest possible C partition sum
    auto mapString = accumulate(next(foo.cbegin(), 2), foo.cend(), string(), [](const string& init, int i){ return init + to_string(i) + ' '; }); // Find the largest possible C partition string
    for (auto i = next(foo.cbegin(), 2); i < foo.cend(); mapSum -= *i++, mapString.erase(0, 2)) { // Fill the map with all possible C partitions
        bar[i] = make_pair(mapSum, mapString);
    }
    mapSum = foo.front(); // mapSum will be reused for the current A partition sum
    mapString = to_string(mapSum); // mapString will be reused for the current A partition string
    cout << left;
    for (auto aEnd = next(foo.cbegin()); aEnd < foo.cend(); ++aEnd) { // Iterate through all B partition beginnings
        auto internalSum = *aEnd; // The B partition sum
        auto internalString = to_string(internalSum); // The B partition string
        for (auto bEnd = next(aEnd); bEnd < foo.cend(); ++bEnd) { // Iterate through all B partition endings
            // print current partitioning
            cout << "A: " << setw(foo.size() * 2 - 5) << mapString << " B: " << setw(foo.size() * 2 - 5) << internalString << " C: " << setw(foo.size() * 2 - 4) << bar[bEnd].second << "Max Sum: " << max({ mapSum, internalSum, bar[bEnd].first }) << endl;
            internalSum += *bEnd; // Update B partition sum
            internalString += ' ' + to_string(*bEnd); // Update B partition string
        }
        mapSum += *aEnd; // Update A partition sum
        mapString += ' ' + to_string(*aEnd); // Update A partition string
    }
    return 0;
}

Optimization of Brute-Force algorithm or Alternative?

I have a simple (brute-force) recursive solver algorithm that takes lots of time for bigger values of the OpxCnt variable. For small values of OpxCnt, no problem, it works like a charm. The algorithm gets very slow as the OpxCnt variable gets bigger. This is to be expected, but is there any optimization or a different algorithm?
My final goal is: I want to read all the True values in the map array by
executing some number of read operations that have the minimum total operation
cost. This is not the same as the minimum number of read operations.
At function completion, there should be no True value unread.
The map array is populated by some external function; any member may be 1 or 0.
For example:
map[4] = 1;
map[8] = 1;
With the cost model in the code below (each read costs 25 + 2*Cnt), 1 read operation with Adr=4, Cnt=5 has the lowest cost (35),
whereas
2 read operations with Adr=4, Cnt=1 & Adr=8, Cnt=1 cost 27 + 27 = 54.
#include <string.h>
typedef unsigned int Ui32;
#define cntof(x) (sizeof(x) / sizeof((x)[0]))
#define ZERO(x) do{memset(&(x), 0, sizeof(x));}while(0)
typedef struct _S_MB_oper{
Ui32 Adr;
Ui32 Cnt;
}S_MB_oper;
typedef struct _S_MB_code{
Ui32 OpxCnt;
S_MB_oper OpxLst[20];
Ui32 OpxPay;
}S_MB_code;
char map[65536] = {0};
static int opx_ListOkey(S_MB_code *px_kod, char *pi_map)
{
int cost = 0;
char map[65536];
memcpy(map, pi_map, sizeof(map));
for(Ui32 o = 0; o < px_kod->OpxCnt; o++)
{
for(Ui32 i = 0; i < px_kod->OpxLst[o].Cnt; i++)
{
Ui32 adr = px_kod->OpxLst[o].Adr + i;
// ...
if(adr < cntof(map)){map[adr] = 0x0;}
}
}
for(Ui32 i = 0; i < cntof(map); i++)
{
if(map[i] > 0x0){return -1;}
}
// calculate COST...
for(Ui32 o = 0; o < px_kod->OpxCnt; o++)
{
cost += 12;
cost += 13;
cost += (2 * px_kod->OpxLst[o].Cnt);
}
px_kod->OpxPay = (Ui32)cost; return cost;
}
static int opx_FindNext(char *map, int pi_idx)
{
int i;
if(pi_idx < 0){pi_idx = 0;}
for(i = pi_idx; i < 65536; i++)
{
if(map[i] > 0x0){return i;}
}
return -1;
}
static int opx_FindZero(char *map, int pi_idx)
{
int i;
if(pi_idx < 0){pi_idx = 0;}
for(i = pi_idx; i < 65536; i++)
{
if(map[i] < 0x1){return i;}
}
return -1;
}
static int opx_Resolver(S_MB_code *po_bst, S_MB_code *px_wrk, char *pi_map, Ui32 *px_idx, int _min, int _max)
{
int pay, kmax, kmin = 1;
if(*px_idx >= px_wrk->OpxCnt)
{
return opx_ListOkey(px_wrk, pi_map);
}
_min = opx_FindNext(pi_map, _min);
// ...
if(_min < 0){return -1;}
kmax = (_max - _min) + 1;
// must be less than 127 !
if(kmax > 127){kmax = 127;}
// is this recursion the last one ?
if(*px_idx >= (px_wrk->OpxCnt - 1))
{
kmin = kmax;
}
else
{
int zero = opx_FindZero(pi_map, _min);
// ...
if(zero > 0)
{
kmin = zero - _min;
// enforce kmax limit !?
if(kmin > kmax){kmin = kmax;}
}
}
for(int _cnt = kmin; _cnt <= kmax; _cnt++)
{
px_wrk->OpxLst[*px_idx].Adr = (Ui32)_min;
px_wrk->OpxLst[*px_idx].Cnt = (Ui32)_cnt;
(*px_idx)++;
pay = opx_Resolver(po_bst, px_wrk, pi_map, px_idx, (_min + _cnt), _max);
(*px_idx)--;
if(pay > 0)
{
if((Ui32)pay < po_bst->OpxPay)
{
memcpy(po_bst, px_wrk, sizeof(*po_bst));
}
}
}
return (int)po_bst->OpxPay;
}
int main()
{
int _max = -1, _cnt = 0;
S_MB_code best = {0};
S_MB_code work = {0};
// SOME TEST DATA...
map[ 4] = 1;
map[ 8] = 1;
/*
map[64] = 1;
map[72] = 1;
map[80] = 1;
map[88] = 1;
map[96] = 1;
*/
// SOME TEST DATA...
for(int i = 0; i < cntof(map); i++)
{
if(map[i] > 0)
{
_max = i; _cnt++;
}
}
// num of Opx can be as much as num of individual bit(s).
if(_cnt > cntof(work.OpxLst)){_cnt = cntof(work.OpxLst);}
best.OpxPay = 1000000000L; // invalid great number...
for(int opx_cnt = 1; opx_cnt <= _cnt; opx_cnt++)
{
int rv;
Ui32 x = 0;
ZERO(work); work.OpxCnt = (Ui32)opx_cnt;
rv = opx_Resolver(&best, &work, map, &x, -42, _max);
}
return 0;
}
You can use dynamic programming to calculate the lowest cost that covers the first i true values in map[]. Call this f(i). As I'll explain, you can calculate f(i) by looking at all f(j) for j < i, so this will take time quadratic in the number of true values -- much better than exponential. The final answer you're looking for will be f(n), where n is the number of true values in map[].
A first step is to preprocess map[] into a list of the positions of true values. (It's possible to do DP on the raw map[] array, but this will be slower if true values are sparse, and cannot be faster.)
#include <stdio.h>
#include <limits.h>
/* (assuming map[] from the question is in scope) */
int pos[65537]; // 1-indexed; every position *could* be true
int nTrue = 0;
void getPosList() {
for (int i = 0; i < 65536; ++i) {
if (map[i]) pos[++nTrue] = i; // store positions at indices 1..nTrue
}
}
When we're looking at the subproblem on just the first i true values, what we know is that the ith true value must be covered by a read that ends at i. This block could start at any position j <= i; we don't know, so we have to test all i of them and pick the best. The key property (Optimal Substructure) that enables DP here is that in any optimal solution to the i-sized subproblem, if the read that covers the ith true value starts at the jth true value, then the preceding j-1 true values must be covered by an optimal solution to the (j-1)-sized subproblem.
So: f(i) = min(f(j) + score(pos(j+1), pos(i))), with the minimum taken over all 0 <= j < i (using f(0) = 0 as the base case). pos(k) refers to the position of the kth true value in map[] (1-indexed), and score(x, y) is the cost of a read from position x to position y, inclusive.
int scores[65537] = {0}; // We effectively start indexing at 1; scores[0] = 0 since covering the first 0 true values costs nothing
// Calculate the minimum score that could allow the first i > 0 true values
// to be read, and store it in scores[i].
// We can assume that all lower values have already been calculated.
void calcF(int i) {
int bestStart, bestScore = INT_MAX;
for (int j = 0; j < i; ++j) { // Always executes at least once
int attemptScore = scores[j] + score(pos[j + 1], pos[i]);
if (attemptScore < bestScore) {
bestStart = j + 1;
bestScore = attemptScore;
}
}
scores[i] = bestScore;
}
int score(int i, int j) {
return 25 + 2 * (j + 1 - i);
}
int main(int argc, char **argv) {
// Set up map[] however you want
getPosList();
for (int i = 1; i <= nTrue; ++i) {
calcF(i);
}
printf("Optimal solution has cost %d.\n", scores[nTrue]);
return 0;
}
Extracting a Solution from Scores
Using this scheme, you can calculate the score of an optimal solution: it's simply f(n), where n is the number of true values in map[]. In order to actually construct the solution, you need to read back through the table of f() scores to infer which choice was made:
void printSolution() {
int i = nTrue;
while (i) {
for (int j = 0; j < i; ++j) {
if (scores[i] == scores[j] + score(pos[j + 1], pos[i])) {
// We know that a read can be made from pos[j + 1] to pos[i] in
// an optimal solution, so let's make it.
printf("Read from %d to %d for cost %d.\n", pos[j + 1], pos[i], score(pos[j + 1], pos[i]));
i = j;
break;
}
}
}
}
There may be several possible choices, but all of them will produce optimal solutions.
Further Speedups
The solution above will work for an arbitrary scoring function. Because your scoring function has a simple structure, it may be that even faster algorithms can be developed.
For example, we can prove that there is a gap width above which it is always beneficial to break a single read into two reads. Suppose we have a read from position x-a to x, and another read from position y to y+b, with y > x. The combined costs of these two separate reads are 25 + 2 * (a + 1) + 25 + 2 * (b + 1) = 54 + 2 * (a + b). A single read stretching from x-a to y+b would cost 25 + 2 * (y + b - x + a + 1) = 27 + 2 * (a + b) + 2 * (y - x). Therefore the single read costs 27 - 2 * (y - x) less. If y - x > 13, this difference goes below zero: in other words, it can never be optimal to include a single read that spans a gap of 13 or more (the gap being the y - x - 1 unread positions in between).
To make use of this property, inside calcF(), final reads could be tried in decreasing order of start-position (i.e. in increasing order of width), and the inner loop stopped as soon as any gap width exceeds 12. Because that read and all subsequent wider reads tried would contain this too-large gap and therefore be suboptimal, they need not be tried.
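As a sketch (my addition, reusing scores[], pos[] (1-indexed, as fixed above) and score() from the code above), the pruned loop could look like this:

// calcF() with the pruning described above: try final-read start positions
// right to left (narrowest read first) and stop once widening the read would
// swallow a gap of more than 12 zeros, since no such read can be optimal.
void calcFPruned(int i) {
    int bestScore = INT_MAX;
    for (int j = i - 1; j >= 0; --j) { // final read covers true values j+1 .. i
        int attemptScore = scores[j] + score(pos[j + 1], pos[i]);
        if (attemptScore < bestScore) bestScore = attemptScore;
        // widening further would also swallow the gap between pos[j] and pos[j+1]
        if (j > 0 && pos[j + 1] - pos[j] - 1 > 12) break;
    }
    scores[i] = bestScore;
}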
