Xor of all pairwise sums of integers in an array

We have an array A, for example [1, 2, 3]. I want to find the XOR of the sums of all pairs of integers in the array.
Though this can easily be done in O(n^2) (where n is the size of the array) by passing over all of the pairs, I want to improve the time complexity of the solution. Any answer that improves the time complexity would be great.
E.g. for the example array A above, the answer would be (1+2)^(1+3)^(2+3) = 2, since the pairs are (1,2), (1,3), (2,3), and 3 ^ 4 ^ 5 = 2.

Here's an idea for a solution in O(nw) time, where w is the size of a machine word (generally 64 or some other constant). The most important thing is counting how many of the pairs will have a particular bit set, and the parity of this number determines whether that bit will be set in the result. The goal is to count that in O(n) time instead of O(n^2).
Finding the right-most bit of the result is easiest. Count how many of the input numbers have a 0 in the right-most place (i.e. how many are even), and how many have a 1 there (i.e. how many are odd). The number of pairs whose sum has a 1 in the rightmost place equals the product of those two counts, since a pair must have one odd and one even number for its sum to be odd. The result has a 1 in the rightmost position if and only if this product is odd.
Finding the second-right-most bit of the result is a bit harder. We can do the same trick of counting how many elements do and don't have a 1 there, then taking the product of those counts; but we also need to count how many 1 bits are carried into the second place from sums where both numbers had a 1 in the first place. Fortunately, we can compute this using the count from the previous stage; it is the number of pairs given by the formula k*(k-1)/2 where k is the count of those with a 1 bit in the previous place. This can be added to the product in this stage to determine how many 1 bits there are in the second place.
Each stage takes O(n) time to count the elements with a 0 or 1 bit in the appropriate place. By repeating this process w times, we can compute all w bits of the result in O(nw) time. I will leave the actual implementation of this to you.
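As an illustration (not part of the original answer, which leaves the implementation to the reader), here is a minimal Python sketch of just the first two stages: it derives the parities of the two lowest result bits from the even/odd counts and the k*(k-1)/2 carry count, and compares them with brute force.
def low_two_bits_of_pairwise_xor(a):
    odd = sum(x & 1 for x in a)           # elements with a 1 in bit 0
    even = len(a) - odd                   # elements with a 0 in bit 0
    bit0 = (odd * even) & 1               # parity of pairs whose sum is odd

    ones = sum((x >> 1) & 1 for x in a)   # elements with a 1 in bit 1
    zeros = len(a) - ones
    carries = odd * (odd - 1) // 2        # pairs that carry a 1 into bit 1
    bit1 = (ones * zeros + carries) & 1   # parity of pairs with bit 1 set in their sum

    return bit0 | (bit1 << 1)

a = [1, 2, 3]
brute = 0
for i in range(len(a)):
    for j in range(i + 1, len(a)):
        brute ^= a[i] + a[j]
print(low_two_bits_of_pairwise_xor(a), brute & 3)  # both print 2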

Here's my understanding of at least one author's intention for an O(n * log n * w) solution, where w is the number of bits in the largest sum.
The idea is to examine the contribution of each bit one at a time. Since we are only interested in whether the kth bit of the sums is set in any one iteration, we can remove all parts of the numbers that include higher bits, taking them each modulo 2^(k + 1).
Now the sums that would necessarily have the kth bit set lie in the intervals [2^k, 2^(k + 1)) and [2^(k+1) + 2^k, 2^(k+2) - 2]. So we sort the input list (modulo 2^(k + 1)), and for each left summand we decrement a pointer to the end of each of the two intervals and binary search the relevant start index.
Here's JavaScript code with a random comparison to brute force to show that it works (easily translatable to C or Python):
// https://stackoverflow.com/q/64082509
// Returns the lowest index of a value
// greater than or equal to the target
function lowerIdx(a, val, left, right){
  if (left >= right)
    return left;
  const mid = left + ((right - left) >> 1);
  if (a[mid] < val)
    return lowerIdx(a, val, mid+1, right);
  else
    return lowerIdx(a, val, left, mid);
}
function bruteForce(A){
  let answer = 0;
  for (let i=1; i<A.length; i++)
    for (let j=0; j<i; j++)
      answer ^= A[i] + A[j];
  return answer;
}

function f(A, W){
  const n = A.length;
  const _A = new Array(n);
  let result = 0;
  for (let k=0; k<W; k++){
    for (let i=0; i<n; i++)
      _A[i] = A[i] % (1 << (k + 1));
    _A.sort((a, b) => a - b);
    let pairs_with_kth_bit = 0;
    let l1 = 1 << k;
    let r1 = 1 << (k + 1);
    let l2 = (1 << (k + 1)) + (1 << k);
    let r2 = (1 << (k + 2)) - 2;
    let ptr1 = n - 1;
    let ptr2 = n - 1;
    for (let i=0; i<n-1; i++){
      // Interval [2^k, 2^(k+1))
      while (ptr1 > i+1 && _A[i] + _A[ptr1] >= r1)
        ptr1 -= 1;
      const idx1 = lowerIdx(_A, l1-_A[i], i+1, ptr1);
      let sum = _A[i] + _A[idx1];
      if (sum >= l1 && sum < r1)
        pairs_with_kth_bit += ptr1 - idx1 + 1;
      // Interval [2^(k+1)+2^k, 2^(k+2)-2]
      while (ptr2 > i+1 && _A[i] + _A[ptr2] > r2)
        ptr2 -= 1;
      const idx2 = lowerIdx(_A, l2-_A[i], i+1, ptr2);
      sum = _A[i] + _A[idx2];
      if (sum >= l2 && sum <= r2)
        pairs_with_kth_bit += ptr2 - idx2 + 1;
    }
    if (pairs_with_kth_bit & 1)
      result |= 1 << k;
  }
  return result;
}

var As = [
  [1, 2, 3], // 2
  [1, 2, 10, 11, 18, 20], // 50
  [10, 26, 38, 44, 51, 70, 59, 20] // 182
];

for (let A of As){
  console.log(JSON.stringify(A));
  console.log(`DP, brute force: ${ f(A, 10) }, ${ bruteForce(A) }`);
  console.log('');
}

var numTests = 500;

for (let i=0; i<numTests; i++){
  const W = 8;
  const A = [];
  const n = 12;
  for (let j=0; j<n; j++){
    const num = Math.floor(Math.random() * (1 << (W - 1)));
    A.push(num);
  }
  const fA = f(A, W);
  const brute = bruteForce(A);
  if (fA != brute){
    console.log('Mismatch:');
    console.log(A);
    console.log(fA, brute);
    console.log('');
  }
}
console.log("Done testing.");

Related

Find the kth lowest sum of unique pair in an array

Given an array of numbers, each number represents the difficulty of a problem. People standing in a line should choose any two problems to solve. The two chosen problems must be different, and the pair of problems must not have been picked by anyone previously. Since they know the difficulties, each person chooses the pair whose sum of difficulties is minimal.
Find the minimum sum of difficulties for the person standing in the kth position in the line, i.e. the kth minimum sum of a unique pair from the array.
Approach 1: Brute force (O(n^2)): calculate all possible pair sums, store them in an array, and sort that array to get the kth element.
Approach 2: Sort the difficulties array and keep only the smallest elements (the first 4 elements already give 6 unique pairs, so if k <= 6 the first 4 elements of the sorted array suffice), then apply approach 1 to that reduced array.
These 2 approaches did not solve the timeout cases. I need a solution with improved time efficiency.
Note: Different problems can have the same difficulty level (i.e. the array can contain duplicate numbers), and the array is not sorted by default.
difficulties = [1,4,3,2,4]
Person comes first chooses: 1+2 = 3
2nd person: 1+3 = 4
3rd person: 1+4 (or) 1+4 (since two problems have difficulty 4) (or) 2+3 = 5
4th person: 2+3 (or) 1+4(based on the previous selection) = 5
Final answer needed is only the minimum sum not the actual elements.
Assume the constraints to be:
2 <= N <= 10^5
1 <= k <= N*(N-1)/2
1 <= difficulties[i] <= 10^9
where,
N is the length of the array
k is the position in which the person has to choose the problems
Assuming k <= n*(n-1)/2. If not, then no answer possible.
We can use binary search to solve the problem. We binary search on the possible sum of pairs.
Here, low = minimum sum possible i.e. low = difficulties[0] + difficulties[1], and high = maximum sum possible i.e. high = difficulties[n-1] + difficulties[n-2].
So, mid = low + (high - low)/2
Now, in 1 iteration of binary search we would count the pairs of indices (i, j), i < j such that difficulties[i] + difficulties[j] <= mid. If the count is less than k, low = mid + 1 else if count >= k, high = mid. Now, this one iteration can be done in O(NlogN).
You can do this till (high - low) > 1. So, each time you reduce your search space by half. So, total time complexity would be O(N*logN*logMaxsum) which for N <= 1e6 and difficulties[i] <= 1e18 would run in less than 1s.
Now high is equal to low or to low + 1, and the answer is either low or high. You just need to check whether low is an achievable pair sum (which can be done in O(N) using hashing) and whether the number of pairs of indices (i, j), i < j, such that difficulties[i] + difficulties[j] <= low is at least k. If both conditions are satisfied then low is your answer. If not, then high is the answer.
Running an example testcase:
Let's consider the initial array, difficulties = [1, 4, 3, 2, 4], and k = 6.
You first sort the array costing us O(NlogN). After sorting difficulties = [1, 2, 3, 4, 4]
All the pairs n*(n-1)/2 = 10 would be:
(1 + 2) => 3
(1 + 3) => 4
(1 + 4) => 5
(1 + 4) => 5
(2 + 3) => 5
(2 + 4) => 6
(2 + 4) => 6
(3 + 4) => 7
(3 + 4) => 7
(4 + 4) => 8
This is pseudocode to illustrate how the logic runs.
sort(difficulties)
low = difficulties[0] + difficulties[1]      // Minimum possible sum
high = difficulties[n-1] + difficulties[n-2] // Maximum possible sum
while(high - low > 1){
    mid = low + (high - low)/2
    count = number of pairs (i, j), i < j, such that difficulties[i] + difficulties[j] <= mid
    if(count < k){
        low = mid + 1
    }else{
        high = mid
    }
}
Iteration 1:
low = 3
high = 8
mid = 5
count = 5 [(1 + 2), (1 + 3), (1 + 4), (1 + 4), (2 + 3)]
count < k, so low = mid + 1 = 6
----------
Iteration 2:
low = 6
high = 8
mid = 7
count = 9 [(1 + 2), (1 + 3), (1 + 4), (1 + 4), (2 + 3), (2 + 4), (2 + 4), (3 + 4), (3 + 4)]
count >= k, so high= mid = 7
Now, while loop stops since high(7) - low(6) = 1.
Now, you need to check whether the sum 6 is achievable and whether the count of pairs with sum <= 6 is >= k. Both conditions hold here, so low is the answer: answer = 6 for k = 6.
To implement the counting, you can again use binary search. Choose the first index as i; then you just need to find the upper bound of mid - difficulties[i] in the subarray [i+1, n-1]. Then increment i by 1 and repeat. So you go over every index 0 <= i <= n-1 and find the upper bound in the search space [i+1, n-1]; each lookup costs O(logN), so one full count takes O(NlogN).
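For reference, that counting step looks roughly like this in Python (a sketch assuming difficulties is already sorted; the complete C++ program is further below):
from bisect import bisect_right

def count_pairs_at_most(difficulties, limit):
    # difficulties must be sorted; counts pairs (i, j), i < j,
    # with difficulties[i] + difficulties[j] <= limit
    count = 0
    for i, d in enumerate(difficulties):
        j = bisect_right(difficulties, limit - d)  # first index with value > limit - d
        count += max(0, j - (i + 1))               # only partners at indices > i qualify
    return count

# count_pairs_at_most([1, 2, 3, 4, 4], 5) == 5, matching iteration 1 of the trace above.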
To see why the last step of checking whether low or high is an achievable sum is needed, try running the algorithm on the array difficulties = [10, 40, 30, 20, 40].
UPDATE:
Below is the complete working code with the time complexity of O(N*logN*logMaxsum) including comments for clear understanding of the logic.
#include<bits/stdc++.h>
#define ll long long int
using namespace std;

void solve();

int main(){
    solve();
    return 0;
}

map<ll, int> m;
vector<ll> difficulties;

ll countFunction(ll sum){
    /*
    Function to count all the pairs of indices (i, j) such that
    i < j and (difficulties[i] + difficulties[j]) <= sum
    */
    ll count = 0;
    int n = (int)difficulties.size();
    for(int i = 0; i < n - 1; i++){
        /*
        If difficulties[i] is chosen as the first element of the pair,
        the remaining budget is sum - difficulties[i], so we just need the
        upper_bound of that value to count all partners at indices > i.
        upper_bound is an in-built function in the C++ STL.
        */
        int x = upper_bound(difficulties.begin(), difficulties.end(), sum - difficulties[i]) - (difficulties.begin() + i + 1);
        if(x <= 0){
            /*
            We break here because the condition i < j is violated,
            and it will be violated for the remaining values of i as well.
            */
            break;
        }
        //cout<<"x = "<<x<<endl;
        count += x;
    }
    return count;
}

bool isPossible(ll sum){
    /*
    Hashing-based check whether at least 1 pair with
    this particular sum exists in the difficulties array.
    */
    int n = (int) difficulties.size();
    for(int i = 0; i < n; i++){
        /*
        Choosing the ith element as the first element of the pair
        and checking if there exists an element with value = sum - difficulties[i]
        */
        if(difficulties[i] == (sum - difficulties[i])){
            // If the elements are equal then the frequency must be > 1
            if(m[difficulties[i]] > 1){
                return true;
            }
        }else{
            if(m[sum - difficulties[i]] > 0){
                return true;
            }
        }
    }
    return false;
}

void solve(){
    ll i, n, k;
    cin >> n >> k;
    difficulties.resize(n);
    m.clear(); // to run multiple test-cases
    for(i = 0; i < n; i++){
        cin >> difficulties[i];
        m[difficulties[i]]++;
    }
    sort(difficulties.begin(), difficulties.end());
    // Using binary search on the possible values of the sum.
    ll low = difficulties[0] + difficulties[1];      // Lowest possible sum after sorting
    ll high = difficulties[n-1] + difficulties[n-2]; // Highest possible sum after sorting
    while((high - low) > 1){
        ll mid = low + (high - low) / 2;
        ll count = countFunction(mid);
        //cout<<"Low = "<<low<<" high = "<<high<<" mid = "<<mid<<" count = "<<count<<endl;
        if (k > count){
            low = mid + 1;
        }else{
            high = mid;
        }
    }
    /*
    Now the answer is either low or high, and we need to check whether
    low is an achievable sum and satisfies the constraint on k.
    For low to be the answer, the number of pairs with sum <= low
    must be >= k, and low itself must be a feasible pair sum.
    */
    if(isPossible(low) && countFunction(low) >= k){
        cout << low << endl;
    }else{
        cout << high << endl;
    }
}

Minimum number of operations to make pair sums of array equal

You are given a list of integers nums of even length. Consider an operation where you pick any number in nums and update it with a value in [1, max(nums)]. Return the number of operations required so that for every i, nums[i] + nums[n - 1 - i] equals the same number. The problem can be solved greedily.
Note: n is the size of the array and max(nums) is the maximum element in nums.
For example: nums = [1,5,4,5,9,3] the expected operations are 2.
Explanation: max(nums) is 9, so I can change any element of nums to any number between [1, 9], which costs one operation.
Choose 1 at index 0 and change it to 6
Choose 9 at index 4 and change it to 4.
Now this makes nums[0] + nums[5] = nums[1] + nums[4] = nums[2] + nums[3] = 9. We changed 2 numbers, and it cost us 2 operations, which is the minimum for this input.
The approach that I've used is to find the median of the sums and use that to find the number of operations greedily.
Let us find all the sums of the array based on the given condition.
Sums can be calculated by nums[i] + nums[n-1-i].
Let i = 0, nums[0] + nums[6-1-0] = 4.
i = 1, nums[1] + nums[6-1-1] = 14.
i = 2, nums[2] + nums[6-1-2] = 9.
Store these sums in an array and sort it.
sums = [4,9,14] after sorting. Now find the median from sums which is 9 as it is the middle element.
Now I use this median to equalize the sums and we can find the number of operations. I've also added the code that I use to calculate the number of operations.
int operations = 0;
for(int i=0; i<nums.size()/2; i++) {
    if(nums[i] + nums[nums.size()-1-i] == mid)
        continue;
    if(nums[i] + nums[nums.size()-1-i] > mid) {
        if(nums[i] + 1 <= mid || 1 + nums[nums.size()-1-i] <= mid) {
            operations++;
        } else {
            operations += 2;
        }
    } else if (maxnums + nums[nums.size()-1-i] >= mid || nums[i] + maxnums >= mid) {
        operations++;
    } else {
        operations += 2;
    }
}
The total number of operations for this example is 2, which is correct.
The problem here is that for some cases choosing the median gives the wrong result. For example, nums = [10, 7, 2, 9, 4, 1, 7, 3, 10, 8] expects 5 operations, but my code gives 6 if the median (16) is chosen.
Is choosing the median not the most optimal approach? Can anyone help provide a better approach?
I think the following should work:
iterate pairs of numbers
for each pair, calculate the sum of that pair, as well as the min and max sum that can be achieved by changing just one of the values
update a dictionary/map with -1 when starting a new "region" requiring one fewer change, and +1 when that region is over
iterate the boundaries in that dictionary and update the total changes needed to find the sum that requires the fewest updates
Example code in Python, giving 9 as the best sum for your example, requiring 5 changes.
from collections import defaultdict

nums = [10, 7, 2, 9, 4, 1, 7, 3, 10, 8]
m = max(nums)
pairs = [(nums[i], nums[-1-i]) for i in range(len(nums)//2)]
print(pairs)
score = defaultdict(int)
for a, b in map(sorted, pairs):
    low = a + 1
    high = m + b
    score[low] -= 1
    score[a+b] -= 1
    score[a+b+1] += 1
    score[high+1] += 1
print(sorted(score.items()))
cur = best = len(nums)
num = None
for i in sorted(score):
    cur += score[i]
    print(i, cur)
    if cur < best:
        best, num = cur, i
print(best, num)
The total complexity of this should be O(nlogn), needing O(n) to create the dictionary, O(nlogn) for sorting, and O(n) for iterating the sorted values in that dictionary. (Do not use an array or the complexity could be much higher if max(nums) >> len(nums))
(UPDATED after receiving additional information)
The optimal sum must be one of the following:
a sum of a pair -> because you can keep both numbers of that pair
the min value of a pair + 1 -> because it is the smallest possible sum for which you only need to change 1 of the numbers of that pair
the max value of a pair + the max overall value -> because it is the largest possible sum for which you only need to change 1 of the numbers of that pair
Hence, there are order N possible sums.
The total number of operations for this optimal sum can be calculated in various ways.
The O(N²) version is quite trivial, and you can implement it easily if you want to confirm that other solutions work.
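As an illustration (not the answer's own code), such an O(N²) reference in Python could enumerate the candidate sums listed above and evaluate every pair against each one: a pair needs 0 changes when it already has the candidate sum, 1 change when keeping one of its elements can still reach the sum with a value in [1, max(nums)], and 2 changes otherwise.
def min_operations_brute(nums):
    # O(N^2) reference: try every candidate sum, evaluate every pair against it.
    top = max(nums)
    pairs = [(nums[i], nums[-1 - i]) for i in range(len(nums) // 2)]
    candidates = set()
    for a, b in pairs:
        candidates.add(a + b)            # keep both elements of this pair
        candidates.add(min(a, b) + 1)    # smallest sum reachable with one change
        candidates.add(max(a, b) + top)  # largest sum reachable with one change
    best = len(nums)                     # changing every element always works
    for s in candidates:
        ops = 0
        for a, b in pairs:
            if a + b == s:
                continue                                   # pair already matches
            elif min(a, b) + 1 <= s <= max(a, b) + top:
                ops += 1                                   # one element can stay
            else:
                ops += 2                                   # both elements must change
        best = min(best, ops)
    return best

print(min_operations_brute([1, 5, 4, 5, 9, 3]))                # 2
print(min_operations_brute([10, 7, 2, 9, 4, 1, 7, 3, 10, 8]))  # 5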
Making it O(N log N)
getting all possible optimal sums O(N)
for each possible sum you can calculate occ, the number of pairs having that exact sum and thus requiring no manipulation. O(N)
For all other pairs you just need to know whether it takes 1 or 2 operations to reach that sum. It takes 2 when one change cannot work: either the smaller element of the pair is too big to reach the sum together with the smallest possible value, or the larger element of the pair is too small to reach the sum together with the largest possible value. Many data structures could be used for that (BIT, tree, ...). I just used a sorted list and applied binary search (not exhaustively tested though). O(N log N)
Example solution in Java:
import java.util.*;

class Main {
    public static void main(String[] args) {
        int[] nums = new int[] {10, 7, 2, 9, 4, 1, 7, 3, 10, 8};
        // preprocess pairs: O(N)
        int min = 1, max = nums[0];
        List<Integer> minList = new ArrayList<>();
        List<Integer> maxList = new ArrayList<>();
        Map<Integer, Integer> occ = new HashMap<>();
        for (int i = 0; i < nums.length / 2; i++) {
            int curMin = Math.min(nums[i], nums[nums.length-1-i]);
            int curMax = Math.max(nums[i], nums[nums.length-1-i]);
            min = Math.min(min, curMin);
            max = Math.max(max, curMax);
            minList.add(curMin);
            maxList.add(curMax);
            // create all pair sums
            int pairSum = nums[i] + nums[nums.length-1-i];
            int currentOccurences = occ.getOrDefault(pairSum, 0);
            occ.put(pairSum, currentOccurences + 1);
        }
        // sorting O(N log N)
        Collections.sort(minList);
        Collections.sort(maxList);
        // border cases
        for (int a : minList) {
            occ.putIfAbsent(a + max, 0);
        }
        for (int a : maxList) {
            occ.putIfAbsent(a + min, 0);
        }
        // loop over all candidates O(N log N)
        int best = (nums.length - 2);
        int med = max + min;
        for (Map.Entry<Integer, Integer> entry : occ.entrySet()) {
            int sum = entry.getKey();
            int count = entry.getValue();
            int requiredChanges = (nums.length / 2) - count;
            if (sum > med) {
                // border case: the max of a pair is too small to reach this sum even with the largest partner
                requiredChanges += countSmaller(maxList, sum - max);
            } else if (sum < med) {
                // border case: the min of a pair is too big to reach this sum even with the smallest partner
                requiredChanges += countGreater(minList, sum - min);
            }
            System.out.println(sum + " -> " + requiredChanges);
            best = Math.min(best, requiredChanges);
        }
        System.out.println("Result: " + best);
    }

    // O(log N)
    private static int countGreater(List<Integer> list, int key) {
        int low = 0, high = list.size();
        while (low < high) {
            int mid = (low + high) / 2;
            if (list.get(mid) <= key) {
                low = mid + 1;
            } else {
                high = mid;
            }
        }
        return list.size() - low;
    }

    // O(log N)
    private static int countSmaller(List<Integer> list, int key) {
        int low = 0, high = list.size();
        while (low < high) {
            int mid = (low + high) / 2;
            if (list.get(mid) < key) {
                low = mid + 1;
            } else {
                high = mid;
            }
        }
        return low;
    }
}
Just to offer some theory -- we can easily show that an upper bound on the number of needed changes is n / 2, where n is the number of elements. This is because, with one change, a pair can be made to sum to anything between 1 + C and max(nums) + C, where C is either of the two elements of the pair. Even for the smallest possible C, the reachable range extends up to max(nums) + 1; and even for the largest possible C, it extends down to 1 + max(nums).
Since those two bounds coincide, the target sum max(nums) + 1 is reachable by every pair with a single change, so we are guaranteed some solution with at most n / 2 changes that leaves at least one element C of each pair unchanged.
From that we conclude that an optimal solution either (1) has at least one pair where neither element is changed, so the target sum is one of the existing pair sums, or (2) uses n / 2 changes as discussed above.
We can therefore proceed to test each existing pair's zero-change and single-change possibilities as candidates. We can iterate over a sorted list of two to three possibilities per pair, labeled with each cost and index. (Other answers on this page offer similar approaches and code.)

Search unsorted array for 3 elements which sum to a value

I am trying to design an algorithm that runs in Θ(n²).
It accepts an unsorted array of n elements and an integer z,
and has to return the 3 indices of 3 different elements a, b, c such that a + b + c = z
(return NIL if no such elements were found).
I tried to sort the array first, in two ways, and then to search the sorted array,
but since I need a specific running time for the rest of the algorithm, I am getting lost.
Is there any way to do it without sorting? (I guess it does have to be sorted.) Either with or without sorting would be good.
Example:
for this array: 1, 3, 4, 2, 6, 7, 9 and the integer 6
it has to return: 0, 1, 3
because (1+3+2 = 6)
Algorithm
Sort - O(nlogn)
for i=0 ... n-1 - O(1) for assigning a value to i
new_z = z - array[i]; this value is updated each iteration. Now search for new_z using two pointers, one at the beginning (index 0) and one at the end (index n-1). If the sum (array[ptr_begin] + array[ptr_end]) is greater than new_z, decrement the end pointer. If it is smaller, increment the begin pointer. Otherwise return i and the current positions of end and begin. - O(n)
jump to step 2 - O(1)
Steps 2, 3 and 4 cost O(n^2). Overall, O(n^2)
C++ code
#include <iostream>
#include <vector>
#include <algorithm>

int main()
{
    std::vector<int> vec = {3, 1, 4, 2, 9, 7, 6};
    std::sort(vec.begin(), vec.end());
    int z = 6;
    int no_success = 1;
    //std::for_each(vec.begin(), vec.end(), [](auto const &it) { std::cout << it << std::endl;});
    for (int i = 0; i < vec.size() && no_success; i++)
    {
        int begin_ptr = 0;
        int end_ptr = vec.size()-1;
        int new_z = z-vec[i];
        while (end_ptr > begin_ptr)
        {
            if(begin_ptr == i)
                begin_ptr++;
            if (end_ptr == i)
                end_ptr--;
            if ((vec[begin_ptr] + vec[end_ptr]) > new_z)
                end_ptr--;
            else if ((vec[begin_ptr] + vec[end_ptr]) < new_z)
                begin_ptr++;
            else {
                std::cout << "indices are: " << end_ptr << ", " << begin_ptr << ", " << i << std::endl;
                no_success = 0;
                break;
            }
        }
    }
    return 0;
}
Beware: the result is given as indices into the sorted array. You can keep the original array, and then search it for the values that correspond to the sorted positions (3 passes of O(n)).
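One way to recover the original indices directly, sketched in Python (an illustration, not the answer's code), is to sort index positions by value so each original index travels with its value:
def three_sum_indices(arr, z):
    # indices of arr sorted by their values, so original positions are preserved
    order = sorted(range(len(arr)), key=lambda i: arr[i])
    vals = [arr[i] for i in order]
    n = len(vals)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        need = z - vals[i]
        while lo < hi:
            s = vals[lo] + vals[hi]
            if s == need:
                return order[i], order[lo], order[hi]  # original positions
            elif s < need:
                lo += 1
            else:
                hi -= 1
    return None  # NIL: no such triple exists

print(three_sum_indices([1, 3, 4, 2, 6, 7, 9], 6))  # (0, 3, 1): the elements 1, 2 and 3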
The solution for the 3 elements which sum to a value (say v) can be done in O(n^2), where n is the length of the array, as follows:
Sort the given array. [ O(nlogn) ]
Fix the first element, say e1 (iterating from i = 0 to n - 1).
Now we have to find 2 elements that sum to the value (v - e1) in the range from i + 1 to n - 1. We can solve this sub-problem in O(n) time using two pointers, where the left pointer starts at i + 1 and the right pointer starts at n - 1. We then move one of the pointers, from the left or from the right, depending on whether the current total is greater than or less than the required sum.
So, overall time complexity of the solution will be O(n ^ 2).
Update:
I attached a solution in C++ for reference (and added comments to explain the time complexity).
vector<int> sumOfthreeElements(vector<int>& ar, int v) {
    sort(ar.begin(), ar.end());
    int n = ar.size();
    for(int i = 0; i < n - 2 ; ++i){ // outer loop runs `n` times
        // for every outer loop the inner loop runs up to `n` times,
        // therefore overall time complexity is O(n^2).
        int lo = i + 1;
        int hi = n - 1;
        int required_sum = v - ar[i];
        while(lo < hi) {
            int current_sum = ar[lo] + ar[hi];
            if(current_sum == required_sum) {
                return {i, lo, hi};
            } else if(current_sum > required_sum){
                hi--;
            }else lo++;
        }
    }
    return {};
}
I guess this is similar to LeetCode 15 and 16:
LeetCode 16
Python
class Solution:
    def threeSumClosest(self, nums, target):
        nums.sort()
        closest = nums[0] + nums[1] + nums[2]
        for i in range(len(nums) - 2):
            j = -~i
            k = len(nums) - 1
            while j < k:
                summation = nums[i] + nums[j] + nums[k]
                if summation == target:
                    return summation
                if abs(summation - target) < abs(closest - target):
                    closest = summation
                if summation < target:
                    j += 1
                elif summation > target:
                    k -= 1
        return closest
Java
class Solution {
    public int threeSumClosest(int[] nums, int target) {
        Arrays.sort(nums);
        int closest = nums[0] + nums[nums.length >> 1] + nums[nums.length - 1];
        for (int first = 0; first < nums.length - 2; first++) {
            int second = -~first;
            int third = nums.length - 1;
            while (second < third) {
                int sum = nums[first] + nums[second] + nums[third];
                if (sum > target)
                    third--;
                else
                    second++;
                if (Math.abs(sum - target) < Math.abs(closest - target))
                    closest = sum;
            }
        }
        return closest;
    }
}
LeetCode 15
Python
class Solution:
    def threeSum(self, nums):
        res = []
        nums.sort()
        for i in range(len(nums) - 2):
            if i > 0 and nums[i] == nums[i - 1]:
                continue
            lo, hi = -~i, len(nums) - 1
            while lo < hi:
                tsum = nums[i] + nums[lo] + nums[hi]
                if tsum < 0:
                    lo += 1
                if tsum > 0:
                    hi -= 1
                if tsum == 0:
                    res.append((nums[i], nums[lo], nums[hi]))
                    while lo < hi and nums[lo] == nums[-~lo]:
                        lo += 1
                    while lo < hi and nums[hi] == nums[hi - 1]:
                        hi -= 1
                    lo += 1
                    hi -= 1
        return res
Java
class Solution {
    public List<List<Integer>> threeSum(int[] nums) {
        Arrays.sort(nums);
        List<List<Integer>> res = new LinkedList<>();
        for (int i = 0; i < nums.length - 2; i++) {
            if (i == 0 || (i > 0 && nums[i] != nums[i - 1])) {
                int lo = -~i, hi = nums.length - 1, sum = 0 - nums[i];
                while (lo < hi) {
                    if (nums[lo] + nums[hi] == sum) {
                        res.add(Arrays.asList(nums[i], nums[lo], nums[hi]));
                        while (lo < hi && nums[lo] == nums[-~lo])
                            lo++;
                        while (lo < hi && nums[hi] == nums[hi - 1])
                            hi--;
                        lo++;
                        hi--;
                    } else if (nums[lo] + nums[hi] < sum) {
                        lo++;
                    } else {
                        hi--;
                    }
                }
            }
        }
        return res;
    }
}
Reference
You can see the explanations in the following links:
LeetCode 15 - Discussion Board
LeetCode 16 - Discussion Board
LeetCode 15 - Solution
You can use something like:
def find_3sum_restr(items, z):
    # : find possible items to consider -- O(n)
    candidates = []
    min_item = items[0]
    for i, item in enumerate(items):
        if item < z:
            candidates.append(i)
            if item < min_item:
                min_item = item
    # : find possible couples to consider -- O(n²)
    candidates2 = []
    for k, i in enumerate(candidates):
        for j in candidates[k:]:
            if items[i] + items[j] <= z - min_item:
                candidates2.append([i, j])
    # : find the matching items -- O(n³)
    for i, j in candidates2:
        for k in candidates:
            if items[i] + items[j] + items[k] == z:
                return i, j, k
This is O(n + n² + n³), hence O(n³).
While this is reasonably fast for randomly distributed inputs (perhaps O(n²)?), unfortunately, in the worst case (e.g. for an array of all ones, with a z > 3), this is no better than the naive approach:
def find_3sum_naive(items, z):
    n = len(items)
    for i in range(n):
        for j in range(i, n):
            for k in range(j, n):
                if items[i] + items[j] + items[k] == z:
                    return i, j, k

Find the number of triplets i,j,k in an array such that the xor of elements indexed i to j-1 is equal to the xor of elements indexed j to k

For a given sequence of positive integers A1, A2, …, AN, you are supposed to find the number of triplets (i, j, k) such that Ai ^ Ai+1 ^ ... ^ Aj-1 = Aj ^ Aj+1 ^ ... ^ Ak,
where ^ denotes bitwise XOR.
The link to the question is here: https://www.codechef.com/AUG19B/problems/KS1
All I did is try to find all subarrays with xor 0. The solution works but is quadratic time and thus too slow.
This is the solution that I managed to get to.
for (int i = 0; i < arr.length; i++) {
    int xor = arr[i];
    for (int j = i + 1; j < arr.length; j++) {
        xor ^= arr[j];
        if (xor == 0) {
            ans += (j - i);
        }
    }
}
finAns.append(ans + "\n");
Here's an O(n) solution based on CiaPan's comment under the question description:
If the xor of items at indices I through J-1 equals that from J to K, then the xor from I to K equals zero. And for any such subarray [I .. K], every J between I+1 and K makes a triplet satisfying the requirements. And the xor from I to K equals (xor from 0 to K) xor (xor from 0 to I-1). So I suppose you might find the xors of all possible initial parts of the sequence and look for equal pairs of them.
The function f is the main method. brute_force is used for validation.
Python 2.7 code:
import random
def brute_force(A):
res = 0
for i in xrange(len(A) - 1):
left = A[i]
for j in xrange(i + 1, len(A)):
if j > i + 1:
left ^= A[j - 1]
right = A[j]
for k in xrange(j, len(A)):
if k > j:
right ^= A[k]
if left == right:
res += 1
return res
def f(A):
ps = [A[0]] + [0] * (len(A) - 1)
for i in xrange(1, len(A)):
ps[i] = ps[i- 1] ^ A[i]
res = 0
seen = {0: (-1, 1, 0)}
for i in xrange(len(A)):
if ps[i] in seen:
prev_i, i_count, count = seen[ps[i]]
new_count = count + i_count * (i - prev_i) - 1
res += new_count
seen[ps[i]] = (i, i_count + 1, new_count)
else:
seen[ps[i]] = (i, 1, 0)
return res
for i in xrange(100):
A = [random.randint(1, 10) for x in xrange(200)]
f_A, brute_force_A = f(A), brute_force(A)
assert f_A == brute_force_A
print "Done"

Find array indices of values greater than given distance in O(n)

Given an array with indices 1 ... n and corresponding values x1 ... xn, where the values are sorted in increasing order, let f(i) be the largest index j < i such that |xi - xj| > 5 (i.e. the index of the greatest value, scanning from right to left, whose distance from xi is greater than 5). For example, given (x1, x2, x3, x4) = (6, 7, 12, 14), f(4) = 2, because the greatest value at distance more than 5 from 14 is 7 (14 - 12 would only be a distance of 2); f(3) = 1; and f(2), f(1) are undefined (no cycles/modulo, because the index f(i) must be smaller than i).
I'm searching for an algorithm that finds f(n), f(n-1), ..., f(1) all together in time complexity O(n).
The naive version is only O(n^2).
The observation that leads to an O(n)-time algorithm is that for i < j where both f(i) and f(j) are defined, f(i) <= f(j), i.e. f is a (non-strictly) increasing function. Therefore, once we have found f(i) for an index i, f(i-1) has to be either equal to or smaller than f(i).
Here is a piece of code that captures the essence of the algorithm.
for (int i = 1; i <= n; i++)
    f[i] = -1; // initialize to undefined

int j = n-1;
int i = n;
while (i > 0 && j > 0) {
    while ((j > 0) && (x[i] - x[j] <= 5))
        j--;
    if (x[i] - x[j] > 5)
        f[i] = j;
    i--;
}
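For a concrete check, here is the same idea as a small Python sketch (my illustration, using 0-based indices and None for undefined), run on the example from the question:
def compute_f(x, d=5):
    # x is sorted in increasing order; f[i] is the largest index j < i
    # with x[i] - x[j] > d, or None if no such index exists.
    n = len(x)
    f = [None] * n
    j = n - 2
    for i in range(n - 1, 0, -1):
        if j >= i:            # keep the candidate index strictly below i
            j = i - 1
        while j >= 0 and x[i] - x[j] <= d:
            j -= 1            # j only ever moves left, so total work is O(n)
        if j >= 0:
            f[i] = j
    return f

print(compute_f([6, 7, 12, 14]))  # [None, None, 0, 1], i.e. f(4)=2 and f(3)=1 in 1-based terms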
