Possible Duplicate: Reordering of array elements
Given an array of elements like [a1,a2,a3,...,an,b1,b2,b3,...,bn,c1,c2,c3,...,cn], write a program to merge them like [a1,b1,c1,a2,b2,c2,...,an,bn,cn].
We have to do it with O(1) extra space.
Sample Testcases:
Input #00:
{1,2,3,4,5,6,7,8,9,10,11,12}
Output #00:
{1,5,9,2,6,10,3,7,11,4,8,12}
Explanation:
As you can notice, the array is of the form
{a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4}
EDIT:
I got this in an Amazon placement test and have been trying it for a long time.
Please provide pseudocode. What I tried is finding the new position p for the second element e (the first is already at its correct position), inserting e at p, and repeating the same for the old element at position p. But this ends in a cycle.
I tried detecting the cycle and incrementing the starting position by 1, but even this is not working.
EDIT2:
#include <iostream>
using namespace std;

int pos(int i, int n)
{
    if (i < n)
    {
        return 3*i;
    }
    else if (i >= n && i < 2*n)
    {
        return 3*(i - n) + 1;
    }
    else if (i >= 2*n && i < 3*n)
    {
        return 3*(i - 2*n) + 2;
    }
    return -1;
}

void printn(int* A, int n)
{
    for (int i = 0; i < 3*n; i++)
        cout << A[i] << ";";
    cout << endl;
}

void merge(int A[], int n)
{
    int j = 1;
    int k = -1;
    int oldAj = A[1];
    int count = 0;
    int temp;
    while (count < 3*n - 1) {
        printn(A, n);
        k = pos(j, n);
        temp = A[k];
        A[k] = oldAj;
        oldAj = temp;
        j = k;
        count++;
        if (j == 1) { j++; }
    }
}

int main()
{
    int A[21] = {1,4,7,10,13,16,19,2,5,8,11,14,17,20,3,6,9,12,15,18,21};
    merge(A, 7);
    cin.get();
}
This is the so-called in-place in-shuffle problem, and it's an extremely hard task if you want to do it efficiently. I'm posting this entry so people don't post "solutions" claiming that they can be extended to work in O(1) space without any proof...
Here is a paper for the simpler case where the list is of the form a1 a2 a3 ... an b1 b2 b3 ... bn:
http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf
Here is a description of an algorithm that uses 3 elements of extra space and O(n^2) time:
sa, sb, sc are, respectively, the next source indices for the a, b and c sequences.
d is the copy destination index.
On each iteration:
Copy the elements at sa, sb and sc to temporary storage.
Shift the in-between elements so that the three vacated positions collect at d (in the code below they move one or two places to the right).
Copy the three elements from temporary storage into the empty positions at d.
Example (dots indicate "empty" positions):
First iteration:
copy to tmp:  ., 2, 3, 4, ., 6, 7, 8, ., 10, 11, 12   (tmp = 1, 5, 9)
shift:        ., ., ., 2, 3, 4, 6, 7, 8, 10, 11, 12
copy to dst:  1, 5, 9, 2, 3, 4, 6, 7, 8, 10, 11, 12
Second iteration:
copy to tmp:  1, 5, 9, ., 3, 4, ., 7, 8, ., 11, 12    (tmp = 2, 6, 10)
shift:        1, 5, 9, ., ., ., 3, 4, 7, 8, 11, 12
copy to dst:  1, 5, 9, 2, 6, 10, 3, 4, 7, 8, 11, 12
Third iteration:
copy to tmp:  1, 5, 9, 2, 6, 10, ., 4, ., 8, ., 12    (tmp = 3, 7, 11)
shift:        1, 5, 9, 2, 6, 10, ., ., ., 4, 8, 12
copy to dst:  1, 5, 9, 2, 6, 10, 3, 7, 11, 4, 8, 12
EDIT:
And here's a working program (it takes a bit more than a verbal description :)))
#include <stdio.h>

#define N 4

int a[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};

void
rearrange ()
{
    int i;
    int d;
    int sa, sb, sc;
    int tmp [3];
    d = 0;
    sa = 0;
    sb = sa + N;
    sc = sb + N;
    while (sc < N*3)
    {
        /* Copy out. */
        tmp [0] = a [sa];
        tmp [1] = a [sb];
        tmp [2] = a [sc];
        /* Shift */
        for (i = sc; i > sb + 1; --i)
            a [i] = a [i - 1];
        for (i = sb + 1; i > sa + 2; --i)
            a [i] = a [i - 2];
        sa += 3;
        sb += 2;
        sc++;
        /* Copy in. */
        a [d++] = tmp [0];
        a [d++] = tmp [1];
        a [d++] = tmp [2];
    }
}

int
main ()
{
    int i;
    rearrange ();
    for (i = 0; i < N*3; ++i)
        printf ("%d\n", a [i]);
    putchar ('\n');
    return 0;
}
Appears to work. shrug
This is a general solution to problems like yours.
First of all, for each source index you know the destination index. Now you go like this:
Take the first item. Find its final place. Memorize the item at that place, and store the first item there. Now find the place where the memorized item belongs, put it there, and memorize the item it replaces. Continue the process until it comes back to the place of the first item (as it obviously will).
If you've moved all the items, you are finished. If not, take the first not-yet-transferred item and repeat the procedure from step 1, starting with that item.
You'll need to mark which items have already been transferred. There are different ways to do it: for example, you can use one spare bit in each item's storage.
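A rough C++ sketch of this cycle-following idea (my own illustration, not the answerer's code). It marks visited positions in a separate bit vector instead of stealing a bit from each element, so it uses the n extra bits mentioned below; dest() is the destination-index formula for the a/b/c interleaving from the question.

#include <iostream>
#include <utility>
#include <vector>

// Destination index for source index i when rearranging
// [a1..an, b1..bn, c1..cn] into [a1,b1,c1, a2,b2,c2, ...].
static int dest(int i, int n) {
    return (i % n) * 3 + i / n;
}

void interleave(std::vector<int>& v, int n) {
    std::vector<bool> moved(3 * n, false);   // one "already transferred" bit per element
    for (int start = 0; start < 3 * n; ++start) {
        if (moved[start]) continue;          // this cycle was handled already
        int cur = start;
        int value = v[cur];
        do {                                 // follow one permutation cycle
            int next = dest(cur, n);
            std::swap(value, v[next]);       // drop the carried value, pick up the displaced one
            moved[next] = true;
            cur = next;
        } while (cur != start);
    }
}

int main() {
    std::vector<int> v = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
    interleave(v, 4);
    for (int x : v) std::cout << x << ' ';   // 1 5 9 2 6 10 3 7 11 4 8 12
    std::cout << '\n';
}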
Okay, the solution above is not exactly O(1), as it requires n extra bits. Here is an outline of an O(1)-space solution, though a less efficient one:
Consider the items a1, b1, c1. They need to end up in the first 3 places of the result. So we do the following: remember a1, b1, c1, compact the rest of the array towards the back (so it looks like this: _, _, _, a2, a3, ..., an, b2, b3, ..., bn, c2, c3, ..., cn), and put the items a1, b1, c1 into their places at the beginning. Now we have found the place for the first 3 items, so we continue this procedure for a2, b2, c2 and so on.
Edit:
Let's consider the time complexity of the outline above. Denote the list size by 3n. We need n steps. Each compaction of the list can be done in one pass and is therefore O(n). All the other operations inside a step are O(1), so we get altogether n * O(n) = O(n^2) complexity. This is far from the best solution; however, as @yi_H mentions, a linear-time solution requires heavy usage of more-or-less advanced mathematics.
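For concreteness, here is a compact C++ sketch of that outline (my own illustration, not the answerer's code): each round saves a_k, b_k, c_k, shifts the not-yet-placed elements over to close the three gaps, and writes the saved values at the front of the unfinished region. It uses O(1) extra space and O(n^2) time, matching the analysis above.

#include <iostream>
#include <vector>

// Rearranges [a1..an, b1..bn, c1..cn] into [a1,b1,c1, ..., an,bn,cn].
void interleave_compact(std::vector<int>& v, int n) {
    for (int k = 0; k < n; ++k) {
        int base = 3 * k;          // everything before 'base' is already in final position
        int m = n - k;             // elements remaining in each of the three groups
        int a = v[base], b = v[base + m], c = v[base + 2 * m];
        // Close the gaps left by the three saved elements: the remaining b's
        // move right by one, the remaining a's by two, the c's stay put.
        for (int i = base + 2 * m; i > base + m + 1; --i) v[i] = v[i - 1];
        for (int i = base + m + 1; i > base + 2; --i)     v[i] = v[i - 2];
        v[base] = a;
        v[base + 1] = b;
        v[base + 2] = c;
    }
}

int main() {
    std::vector<int> v = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
    interleave_compact(v, 4);
    for (int x : v) std::cout << x << ' ';   // 1 5 9 2 6 10 3 7 11 4 8 12
    std::cout << '\n';
}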
I can't find any O(n) algorithm, but here is an O(n^2) in-place one: I move triples to the end each time. The code is tested with the given input; it's in C# and may be buggy, and if so let me know:
int[] a = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
int m = a.Length / 3;
int firstB = a[m];
for (int i = m - 1; i > 0; i--)
{
    int second = a[3 * m - 3];
    int third = a[3 * m - 2];
    //a[i + 2 * m] = a[i + 2 * m];
    a[3 * m - 2] = a[2 * m - 1];
    a[3 * m - 3] = a[m - 1];
    for (int j = m - 1; j < 2 * m - 1; j++)
    {
        a[j] = a[j + 1];
    }
    for (int j = 2 * m - 2; j < 3 * m - 3; j++)
    {
        a[j] = a[j + 2];
    }
    a[3 * m - 5] = second;
    a[3 * m - 4] = third;
    m--;
}
a[1] = firstB;
Here we have x * y numbers:
a_11, a_12, ..., a_1x,
a_21, a_22, ..., a_2x,
...
a_y1, a_y2, ..., a_yx
then the number a_ij has the index i*x + j in the array;
after your program, the new index will be
j * y + i
In your interview example,
{a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4}
x is 4 and y is 3,
so with the index n:
i = (n - (n % 4)) / 4;
j = n % 4;
Now you can calculate the new index with i, j, x, y.
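For concreteness, a tiny sketch of this mapping (my own illustration, using 0-based i and j as in the formulas above):

#include <iostream>

// Old index n of element a_ij (row i of y rows, column j of x columns, 0-based)
// is mapped to its position after the rearrangement.
int new_index(int n, int x, int y) {
    int i = n / x;      // which group (a, b, c, ...)
    int j = n % x;      // position within that group
    return j * y + i;
}

int main() {
    int x = 4, y = 3;   // the interview example
    for (int n = 0; n < x * y; ++n)
        std::cout << n << " -> " << new_index(n, x, y) << '\n';
}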
Good Luck.
Related
I have to find the maximum sum of elements in an integer array, according to this rule:
If the second (or any consecutive) element is added to the sum, only half of its value is added. To avoid this, you can skip one element.
For example, if we have an input like [4, 2, 5, 1, 5]
the result should be 14. In this case we took the elements at positions 0, 2 and 4 (4 + 5 + 5 = 14) and skipped the elements at positions 1 and 3.
Another example would be the input [3, 1, 10, 6, 3, 10].
In this case the answer should be 26. We took the elements at positions 0, 2, 3, 5 and skipped the elements at positions 1 and 4. Therefore the sum counts as:
3 + 0 + 10 + 6/2 + 0 + 10 = 26
Could anyone please explain an algorithm to solve this problem, or at least the direction in which I should try to solve it? Does this task have anything to do with dynamic programming? Or maybe with recursion?
Thanks in advance
A simple optimal solution is to iteratively calculate two sums, both corresponding to the maximum up to the current index i:
the first sum (sum0) assumes the current value arr[i] is not used, the second sum (sum1) assumes the current value arr[i] is used.
sum0_new = max (sum0, sum1);
sum1_new = max (sum0 + x, sum1 + x/2);
Complexity: O(N)
Code:
This is simple C++ code to illustrate the algorithm.
The implementation assumes integer division by 2; it is easy to modify if that is not the case.
Output: 14 26
#include <iostream>
#include <vector>
#include <algorithm>

int max_sum (const std::vector<int>& arr) {
    int sum0 = 0;
    int sum1 = 0;
    for (int x: arr) {
        int temp = sum0;
        sum0 = std::max (sum0, sum1);
        sum1 = std::max (temp + x, sum1 + x/2);
    }
    return std::max (sum0, sum1);
}

int main() {
    std::vector<std::vector<int>> examples = {
        {4, 2, 5, 1, 5},
        {3, 1, 10, 6, 3, 10}
    };
    for (std::vector<int>& arr: examples) {
        int sum = max_sum (arr);
        std::cout << sum << '\n';
    }
    return 0;
}
Given 2 arrays of integers a[] and b[], both of size n (1 <= n <= 100), numbered from 1 to n,
with 0 <= a[i], b[i] <= 6.
You can swap any a[i] with b[i].
What is the minimum number of swaps needed so that the difference between the sums of arrays a[] and b[] is minimal?
Then print out:
The number of swaps
The swapped indexes
The difference of sums of both arrays
Example
n = 6
a[] = { 1, 1, 4, 4, 0, 6 }
b[] = { 6, 3, 1, 1, 6, 1 }
Result
- 2 (The number of swaps)
- 5, 6 (The swapped indexes)
- 0 (The difference of sums of the arrays)
Explanation
If you swap a[5] with b[5] and a[6] with b[6] which requires 2 swaps, arrays a[] and b[] will become:
a[] = {1, 1, 4, 4, 6, 1}
b[] = {6, 3, 1, 1, 0, 6}
Sum of a[] is 1 + 1 + 4 + 4 + 6 + 1 = 17
Sum of b[] is 6 + 3 + 1 + 1 + 0 + 6 = 17
So the difference of the two sums is 0.
Here's an iterative method that saves the differences achievable so far and, for each one, updates the smallest list of indexes that must be swapped to achieve it.
JavaScript code:
function update(obj, d, arr){
  if (!obj[d] || obj[d].length > arr.length)
    obj[d] = arr;
}

function f(A, B){
  let diffs = {0: []};

  for (let i=0; i<A.length; i++){
    const newDiffs = {};

    for (d in diffs){
      // Swap
      let d1 = Number(d) + B[i] - A[i];

      if (diffs.hasOwnProperty(d1) && diffs[d1].length < diffs[d].length + 1)
        update(newDiffs, d1, diffs[d1]);
      else
        update(newDiffs, d1, diffs[d].concat(i+1));

      d1 = Number(d) + A[i] - B[i];

      if (diffs.hasOwnProperty(d1) && diffs[d1].length < diffs[d].length)
        update(newDiffs, d1, diffs[d1]);
      else
        update(newDiffs, d1, diffs[d]);
    }

    diffs = newDiffs;
  }

  console.log(JSON.stringify(diffs) + '\n\n');

  let best = Infinity;
  let idxs;

  for (let d in diffs){
    const _d = Math.abs(Number(d));

    if (_d < best){
      best = _d;
      idxs = diffs[d];
    }
  }

  return [best, idxs];
}

var A = [1, 1, 4, 4, 0, 6];
var B = [6, 3, 1, 1, 6, 1];

console.log(JSON.stringify(f(A, B)));
Here's a C++ implementation of mine based on the JavaScript answer by גלעד ברקן.
Short explanation:
We maintain a mapping from every difference seen so far to the minimum set of swaps producing it, and try to extend each of these differences with the new values to obtain the next mapping. At each step we have 2 choices when considering the ith items of A and B: either keep the items as they are or swap them.
Code:
#include <iostream>
#include <climits>
#include <unordered_map>
#include <vector>

using namespace std; // Pardon me for this sin

void update_keeping_existing_minimum(unordered_map<int, vector<int> >& mp, int key, vector<int>& value){
    if(mp.find(key) == mp.end() || mp[key].size() > value.size()) mp[key] = value;
}

// Prints minimum swaps, indexes of swaps and minimum difference of sums.
// Runtime is O(2^size_of_input) = 2^1 + 2^2 .. + 2^n = 2*2^n
// This is a bruteforce implementation.
// We try all possible cases, by expanding our array 1 index at a time.
// For each previous difference,
// we use the new index value and expand our possible difference outcomes.
// In the worst case we may get 2 unique differences never seen before for every index.
void get_minimum_swaps(vector<int>& a, vector<int>& b){
    int n = a.size();
    unordered_map<int, vector<int> > prv_differences_mp;
    prv_differences_mp[0] = {}; // initial state

    for(int i = 0 ; i < n ; i++){
        unordered_map<int, vector<int> > new_differences_mp;
        for (auto& it: prv_differences_mp) {
            // possibility 1, we swap and expand the previous difference
            int d = it.first;
            int d1 = d + b[i] - a[i];
            if(prv_differences_mp.find(d1) != prv_differences_mp.end() && prv_differences_mp[d1].size() < (prv_differences_mp[d].size() + 1)){
                update_keeping_existing_minimum(new_differences_mp, d1, prv_differences_mp[d1]);
            } else {
                // only place we are modifying the prv map, let's make a copy so that changes don't affect other calculations
                vector<int> temp = prv_differences_mp[d];
                temp.push_back(i+1);
                update_keeping_existing_minimum(new_differences_mp, d1, temp);
            }

            // possibility 2, we don't swap and expand the previous difference
            int d2 = d + a[i] - b[i];
            if(prv_differences_mp.find(d2) != prv_differences_mp.end() && prv_differences_mp[d2].size() < prv_differences_mp[d].size()){
                update_keeping_existing_minimum(new_differences_mp, d2, prv_differences_mp[d2]);
            } else {
                update_keeping_existing_minimum(new_differences_mp, d2, prv_differences_mp[d]);
            }
        }

        cout << i << ":index\n";
        for(auto& it: prv_differences_mp){
            cout << it.first << ": [ ";
            for(auto& item: it.second) cout << item << " ";
            cout << "] ; ";
        }
        cout << "\n";

        prv_differences_mp = new_differences_mp;
    }

    int best = INT_MAX;
    vector<int> min_swap_ans;
    for(auto& it: prv_differences_mp){
        int _d = it.first >= 0 ? it.first : -it.first;
        if(_d < best){
            best = _d;
            min_swap_ans = it.second;
        }
    }

    cout << "Number of swaps: " << min_swap_ans.size() << "\n";
    cout << "Swapped indexes:\n";
    for(auto idx: min_swap_ans) cout << idx << " ";
    cout << "\nDifference: " << best << "\n";
}

int main(){
    vector<int> A{ 1, 1, 4, 4, 0, 6 };
    vector<int> B{ 6, 3, 1, 1, 6, 1 };
    get_minimum_swaps(A, B);
    return 0;
}
I am programming in C. What is the best method (I mean, in linear time) to split an array into elements less than, equal to, and greater than some value x?
For example, if I have the array
{1, 4, 6, 7, 13, 1, 7, 3, 5, 11}
and x = 7, then it should become
{1, 4, 6, 1, 3, 5, 7, 7, 13, 11}
I don't want to sort the elements because I need a more efficient way. Of course, in this example the first and last parts could be any permutation of {1, 4, 6, 1, 3, 5} and {13, 11}.
My thought: compare for less or greater than some chosen element of the array... In this example it is 7.
My function is:
int x = 7;
int u = 0, z = 0;
for(int i = 0; i < size-1; i++) // size - 1 because the last element will be the chosen value
{
    if(A[i] < x)
        swap(A[i], A[u]);
    else if(A[i] == x)
    {
        swap(A[i], A[n-(++z)]);
        continue;
    }
    i++;
}
for(int i = 0; i < z; i++)
    swap(A[u+i], A[size-(++z)]);
where u is the number of smaller elements found so far, and z is the number of equal elements.
But if every element in the array is equal to x it doesn't work: (size-(++z)) goes below 0.
This is the so-called Dutch national flag problem, named after the three-striped Dutch flag. (It was named that by E. W. Dijkstra, who was Dutch.) It's similar to the partition function needed to implement quicksort, but in most explanations of quicksort a two-way partitioning algorithm is presented, whereas here we are looking for a three-way partition. The classic quicksort partitioning algorithms divide the vector into two parts, one consisting of elements no greater than the pivot and the other consisting of elements strictly greater. [See note 1]
The Wikipedia article gives pseudocode for Dijkstra's solution, which (unlike the classic partition algorithm usually presented in discussions of quicksort) moves left to right through the vector:
void dutchflag(int* v, size_t n, int x) {
    for (size_t lo = 0, hi = n, j = 0; j < hi; ) {
        if (v[j] < x) {
            swap(v, lo, j); ++lo; ++j;
        } else if (v[j] > x) {
            --hi; swap(v, j, hi);
        } else {
            ++j;
        }
    }
}
There is another algorithm, discovered in 1993 by Bentley and McIlroy and published in their paper "Engineering a Sort Function" which has some nice diagrams illustrating how various partitioning functions work, as well as some discussion about why partitioning algorithms matter. The Bentley & McIlroy algorithm is better in the case that the pivot element occurs infrequently in the list while Dijkstra's is better if it appears often, so you have to know something about your data in order to choose between them. I believe that most modern quicksort algorithms use Bentley & McIlroy, because the common case is that the array to be sorted has few duplicates.
Notes
The Hoare algorithm, as presented in the Wikipedia Quicksort article, does not rearrange values equal to the pivot, so they can end up being present in both partitions. Consequently, it is not a true partitioning algorithm.
You can do this:
1) Loop through the array; if an element is less than x, put it in new array1.
2) If an element is greater than x, put it in new array2.
This is linear time, O(n).
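A short C++ sketch of this idea (my own illustration, not the answerer's code). It uses temporary buffers, so unlike the in-place answers here it needs O(n) extra space; the equal elements are written between the two groups.

#include <iostream>
#include <vector>

// Stable three-way split around x using two temporary buffers: O(n) time.
std::vector<int> split_around(const std::vector<int>& a, int x) {
    std::vector<int> less, greater;
    int equal = 0;
    for (int v : a) {
        if (v < x) less.push_back(v);
        else if (v > x) greater.push_back(v);
        else ++equal;
    }
    std::vector<int> result = less;
    result.insert(result.end(), equal, x);                        // the block of elements equal to x
    result.insert(result.end(), greater.begin(), greater.end());  // the greater block
    return result;
}

int main() {
    std::vector<int> a = {1, 4, 6, 7, 13, 1, 7, 3, 5, 11};
    for (int v : split_around(a, 7)) std::cout << v << ' ';       // 1 4 6 1 3 5 7 7 13 11
    std::cout << '\n';
}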
I tried something like the code below, which I think is O(n). It took me a little bit to work the kinks out, but I think it's pretty similar to the dutchflag answer above.
My output:
a.exe
1 4 6 5 3 1 7 7 11 13
1 4 5 6 3 1 7 7 7 11 13
code:
#include <stdio.h>

#define ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))

void order(int * list, int size, int orderVal)
{
    int firstIdx, lastIdx, currVal, tempVal;
    firstIdx = 0;
    lastIdx = size-1;
    /* First pass: move everything >= orderVal to the back. */
    for ( ; lastIdx > firstIdx; firstIdx++)
    {
        currVal = list[firstIdx];
        if (currVal >= orderVal)
        {
            tempVal = list[lastIdx];
            list[lastIdx] = currVal;
            lastIdx--;
            list[firstIdx] = tempVal;
            if (tempVal >= orderVal)
                firstIdx--;
        }
    }
    /* Second pass: pull the values equal to orderVal up to the boundary. */
    lastIdx = size-1;
    for ( ; lastIdx > firstIdx; lastIdx--)
    {
        currVal = list[lastIdx];
        if (currVal == orderVal)
        {
            tempVal = list[firstIdx];
            list[firstIdx] = currVal;
            firstIdx++;
            list[lastIdx] = tempVal;
            if (tempVal == orderVal)
                lastIdx++;
        }
    }
}

int main(int argc, char * argv[])
{
    int i;
    int list[] = {1, 4, 6, 7, 13, 1, 7, 3, 5, 11};
    int list2[] = {1, 4, 7, 6, 7, 13, 1, 7, 3, 5, 11};

    order(list, ARRAY_SIZE(list), 7);
    for (i = 0; i < ARRAY_SIZE(list); i++)
        printf("%d ", list[i]);
    printf("\n");

    order(list2, ARRAY_SIZE(list2), 7);
    for (i = 0; i < ARRAY_SIZE(list2); i++)
        printf("%d ", list2[i]);
}
Here is an example using a bubble sort. Which type of sort algorithm is best is up to you; this is just to demonstrate. Here I treat values < x as -1, values == x as 0, and values > x as 1.
Note that the elements < x and those > x remain in their original relative order.
#include <stdio.h>

int main(void)
{
    int array[] = { 1, 4, 6, 7, 13, 1, 7, 3, 5, 11 };
    int x = 7;
    int len = sizeof array / sizeof array[0];
    int i, j, m, n, tmp;

    for (i = 0; i < len-1; i++) {
        m = array[i] < x ? -1 : array[i] == x ? 0 : 1;
        for (j = i+1; j < len; j++) {
            n = array[j] < x ? -1 : array[j] == x ? 0 : 1;
            if (m > n) {
                tmp = array[i];       // swap the array element
                array[i] = array[j];
                array[j] = tmp;
                m = n;                // and replace alias
            }
        }
    }
    for (i = 0; i < len; i++)
        printf("%d ", array[i]);
    printf("\n");
    return 0;
}
Program output:
1 4 6 1 3 5 7 7 13 11
I was asked:
Replace each number in a list by the sum of the remaining elements; the list is not sorted.
So suppose we have a list of numbers like {2, 7, 1, 3, 8}; we are to replace each element with the sum of the rest of the elements. The output should be:
{(7 + 1 + 3 + 8), (2 + 1 + 3 + 8), (2 + 7 + 3 + 8), (2 + 7 + 1 + 8), (2 + 7 + 1 + 3)}
== {19, 14, 20, 18, 13}
I answered with the obvious solution:
First evaluate the sum of all numbers, then subtract each element from the sum.
So for the above list the sum is 2 + 7 + 1 + 3 + 8 = 21, and the output is computed as:
{sum - 2, sum - 7, sum - 1, sum - 3, sum - 8}
{21 - 2, 21 - 7, 21 - 1, 21 - 3, 21 - 8}
== {19, 14, 20, 18, 13}
It needs only two passes over the list.
Then the interviewer asked me: now do it without subtraction. And I couldn't answer :(
Is another solution possible? Can someone share any other trick? Is a better trick possible?
Let's say extra memory can be used (I asked this after a few minutes of trying; even then I couldn't answer).
One possibility would be to compute prefix and suffix sums of your array and then combine the appropriate entries. This would still be O(n) but needs more memory, so I think your original method is better.
In other words, from {2, 7, 1, 3, 8} compute {2, 2+7, 2+7+1, 2+7+1+3, 2+7+1+3+8} and {2+7+1+3+8, 7+1+3+8, 1+3+8, 3+8, 8} and then add the appropriate entries.
The solution is to sum everything but the element. Then you don't have to subtract after the fact; you just skip adding the element at the current index.
Alternatively, you could take the subset of the list that excludes the element at the current index and sum that subset. That is pretty much the same as my first suggestion, with more implementation detail.
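A minimal sketch of this skip-the-current-index idea (my own illustration, not the answerer's code); it avoids subtraction entirely but costs O(n^2), unlike the O(n) prefix/suffix approaches elsewhere in this thread.

#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = {2, 7, 1, 3, 8};
    std::vector<int> result(a.size(), 0);
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = 0; j < a.size(); ++j)
            if (j != i) result[i] += a[j];       // add everything except the current element
    for (int v : result) std::cout << v << ' ';  // 19 14 20 18 13
    std::cout << '\n';
}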
C++ implementation. O(n) and done by keeping sums of all elements before and after a certain index.
#include <iostream>

int main() {
    int a[] = {2,7,1,3,8};
    int prefix[5]; // Sum of all values before current index
    int suffix[5]; // Sum of all values after current index
    prefix[0] = 0;
    suffix[4] = 0;
    for (int i = 1; i < 5; i++) {
        prefix[i] = prefix[i-1] + a[i-1];
        suffix[4 - i] = suffix[4 - i + 1] + a[4 - i + 1];
    }
    // Print result
    for (int i = 0; i < 5; i++) {
        std::cout << prefix[i] + suffix[i] << " ";
    }
    std::cout << std::endl;
}
I can't think of anything better than yours.
But how about this:
Create an (n-1) x n matrix:
[ 2, 7, 1, 3, 8 ]
| 7, 1, 3, 8, 2 | rotate by 1
| 1, 3, 8, 2, 7 | by 2
| 3, 8, 2, 7, 1 | by 3
| 8, 2, 7, 1, 3 | by 4
Then sum up the columns.
C++'s std::rotate_copy can be used to create the matrix:
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v1 {2, 7, 1, 3, 8};
    std::vector<int> v2 (v1.size());
    int i, j;
    std::vector< std::vector<int> > mat;
    for (int i = 1; i < v1.size(); ++i){
        std::rotate_copy(v1.begin(), v1.begin()+i, v1.end(), v2.begin());
        mat.push_back(v2);
    }
    // After the loop v2 still holds the last rotation, so only the
    // remaining v1.size()-2 rows need to be added on top of it.
    for (j = 0; j < v1.size(); ++j)
        for (i = 0; i < v1.size()-2; ++i)
            v2[j] += mat[i][j];
    for (i = 0; i < v2.size(); ++i)
        std::cout << v2[i] << " ";
}
#include <stdio.h>

int main() {
    int a[] = {2,7,1,3,8};
    int sum[5] = {0};
    for(int j = 0; j < 5; j++){
        for(int i = 1; i < 5; i++) {
            sum[j] = sum[j] + a[(j+i+5)%5];  // add every element except a[j]
        }
        printf("%d ", sum[j]);
    }
}
Instead of subtracting the element you can add the element multiplied by -1. Multiplication and addition are allowed operations, I guess.
Question: Given an unsorted array of positive integers, is it possible to find a pair of integers from that array that sum up to a given sum?
Constraints: this should be done in O(n) and in-place (without any external storage like arrays or hash maps; you can use extra variables/pointers).
If this is not possible, can a proof be given?
If you have a sorted array you can find such a pair in O(n) by moving two pointers toward the middle:
i = 0
j = n-1
while (i < j) {
    if (a[i] + a[j] == target) return (i, j);
    else if (a[i] + a[j] < target) i += 1;
    else if (a[i] + a[j] > target) j -= 1;
}
return NOT_FOUND;
The sorting can be made O(N) if you have a bound on the size of the numbers (or if the array is already sorted in the first place). Even then, a log n factor is really small and I don't want to bother to shave it off.
proof:
If there is a solution (i*, j*), suppose, without loss of generality, that i reaches i* before j reaches j*. Since for all j' between j* and j we know that a[j'] > a[j*] we can extrapolate that a[i] + a[j'] > a[i*] + a[j*] = target and, therefore, that all the following steps of the algorithm will cause j to decrease until it reaches j* (or an equal value) without giving i a chance to advance forward and "miss" the solution.
The interpretation in the other direction is similar.
An O(N) time and O(1) space solution that works on a sorted array:
Let M be the value you're after. Use two pointers, X and Y. Start X=0 at the beginning and Y=N-1 at the end. Compute the sum sum = array[X] + array[Y]. If sum > M, then decrement Y, otherwise increment X. If the pointers cross, then no solution exists.
You can sort in place to get this for a general array, but I'm not certain there is an O(N) time and O(1) space solution in general.
My solution in Java (time complexity O(n)); this will output all the pairs with the given sum.
import java.util.HashMap;
import java.util.Map;

public class Test {
    public static void main(String[] args) {
        Map<Integer, Integer> hash = new HashMap<>();
        int arr[] = {1,4,2,6,3,8,2,9};
        int sum = 5;
        for (int i = 0; i < arr.length; i++) {
            hash.put(arr[i], i);
        }
        for (int i = 0; i < arr.length; i++) {
            if (hash.containsKey(sum - arr[i])) {
                //System.out.println(i + " " + hash.get(sum-arr[i]));
                System.out.println(arr[i] + " " + (sum - arr[i]));
            }
        }
    }
}
This might be possible if the array contains numbers whose upper limit is known to you beforehand. Then use counting sort or radix sort (O(n)) and apply the algorithm which @PengOne suggested.
Otherwise I can't think of an O(n) solution, but an O(n lg n) solution works this way:
First sort the array using merge sort or quicksort (for in-place). For each element, find out whether sum - array_element is present in the sorted array;
one can use binary search for that.
So the total time complexity is O(n lg n) + O(n lg n) -> O(n lg n).
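A minimal C++ sketch of that sort-plus-binary-search approach (my own illustration, not the answerer's code):

#include <algorithm>
#include <iostream>
#include <vector>

// Returns true if some pair in 'a' sums to 'target'. O(n lg n) overall:
// one sort plus a binary search per element.
bool has_pair_with_sum(std::vector<int> a, int target) {
    std::sort(a.begin(), a.end());
    for (size_t i = 0; i < a.size(); ++i) {
        int need = target - a[i];
        // Search only the rest of the array so an element is not paired with itself.
        if (std::binary_search(a.begin() + i + 1, a.end(), need))
            return true;
    }
    return false;
}

int main() {
    std::vector<int> a = {1, 4, 45, 6, 10, 8};
    std::cout << std::boolalpha << has_pair_with_sum(a, 16) << '\n';   // true (6 + 10)
    std::cout << std::boolalpha << has_pair_with_sum(a, 100) << '\n';  // false
}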
As @PengOne mentioned, it's not possible in the general scheme of things, but it becomes possible if you make some restrictions on the input data:
All elements are all positive or all negative; if not, you would need to know the range (high, low) and make changes accordingly.
K, the sum of the two integers, is sparse compared to the elements in general.
It's okay to destroy the input array A[N].
Step 1: Move all elements less than SUM to the beginning of the array; say in N passes we have divided the array into [0, K] and [K, N-1] such that [0, K] contains elements <= SUM.
Step 2: Since we know the bounds (0 to SUM), we can use radix sort.
Step 3: Use binary search on A[0..K]. One good thing is that if we need to find a complementary element we only need to look at half of the array: while iterating over A[0 to K/2 + 1] we do a binary search in A[i to K].
So the total approximate time is N + K + (K/2) lg(K), where K is the number of elements between 0 and SUM in the input A[N].
Note: if you use @PengOne's approach you can do step 3 in K, so the total time would be N + 2K, which is definitely O(N).
We do not use any additional memory but destroy the input array, which is also not bad since it didn't have any ordering to begin with.
First off, sort the array using radix sort. That'll set you back O(kN). Then proceed as @PengOne suggests.
The following site gives a simple solution using a hashset: it looks at each number and then searches the hashset for (given sum - current number):
http://www.dsalgo.com/UnsortedTwoSumToK.php
Here's an O(N) algorithm. It relies on an in-place O(N) duplicate removal algorithm, and the existence of a good hash function for the ints in your array.
First, remove all duplicates from the array.
Second, go through the array, and replace each number x with min(x, S-x) where S is the sum you want to reach.
Third, check whether there are any duplicates in the array: if 'x' is duplicated, then 'x' and 'S-x' must both have occurred in the original array, and you've found your pair.
Use counting sort to sort the array: O(n).
Take two pointers, one starting from the 0th index of the array and the other from the end, say (n-1).
Run the loop until low <= high:
    sum = arr[low] + arr[high]
    if (sum == target)
        print low, high and stop
    if (sum < target)
        low++
    if (sum > target)
        high--
The two-pointer scan takes O(n) time and counting sort takes O(n), so the total time complexity is O(n).
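A small C++ sketch of this counting-sort-plus-two-pointers idea (my own illustration, not the answerer's code; it assumes the values are non-negative and bounded by a known max_value so that counting sort applies).

#include <iostream>
#include <vector>

// Counting sort for values in [0, max_value]: O(n + max_value) time.
void counting_sort(std::vector<int>& a, int max_value) {
    std::vector<int> count(max_value + 1, 0);
    for (int v : a) ++count[v];
    int idx = 0;
    for (int v = 0; v <= max_value; ++v)
        while (count[v]--) a[idx++] = v;
}

// Prints one pair summing to 'target', if any, using the two-pointer scan.
void find_pair(std::vector<int> a, int target, int max_value) {
    counting_sort(a, max_value);
    int low = 0, high = (int)a.size() - 1;
    while (low < high) {
        int sum = a[low] + a[high];
        if (sum == target) { std::cout << a[low] << " + " << a[high] << '\n'; return; }
        if (sum < target) ++low;
        else --high;
    }
    std::cout << "no pair found\n";
}

int main() {
    find_pair({1, 4, 45, 6, 10, 8}, 16, 100);   // prints 6 + 10
}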
In JavaScript: with this code, as n grows, the time and the number of iterations increase. The number of tests done by the program will be equal to (n*(n/2) + n/2), where n is the number of elements. The given sum is checked in if (arr[i] + arr[j] === 0), where 0 could be any given number.
var arr = [-4, -3, 3, 4];
var lengtharr = arr.length;
var i = 0;
var j = 1;
var k = 1;
do {
    do {
        if (arr[i] + arr[j] === 0) {
            document.write(' Elements arr [' + i + '] [' + j + '] sum 0');
        } else {
            document.write('____');
        }
        j++;
    } while (j < lengtharr);
    k++;
    j = k;
    i++;
} while (i < (lengtharr - 1));
Here is a solution which takes duplicate entries into account. It is written in JavaScript and runs on both sorted and unsorted arrays. The solution runs in O(n) time.
var count_pairs_unsorted = function(_arr, x) {
    // setup variables
    var asc_arr = [];
    var len = _arr.length;
    if (!x) x = 0;
    var pairs = 0;
    var i = -1;
    var k = len - 1;
    if (len < 2) return pairs;
    // tally all the like numbers into buckets
    while (i < k) {
        asc_arr[_arr[i]] = -(~(asc_arr[_arr[i]]));
        asc_arr[_arr[k]] = -(~(asc_arr[_arr[k]]));
        i++;
        k--;
    }
    // odd amount of elements
    if (i == k) {
        asc_arr[_arr[k]] = -(~(asc_arr[_arr[k]]));
        k--;
    }
    // count all the pairs reducing tallies as you go
    while (i < len || k > -1) {
        var y;
        if (i < len) {
            y = x - _arr[i];
            if (asc_arr[y] != undefined && (asc_arr[y] + asc_arr[_arr[i]]) > 1) {
                if (_arr[i] == y) {
                    var comb = 1;
                    while (--asc_arr[_arr[i]] > 0) { pairs += (comb++); }
                } else pairs += asc_arr[_arr[i]] * asc_arr[y];
                asc_arr[y] = 0;
                asc_arr[_arr[i]] = 0;
            }
        }
        if (k > -1) {
            y = x - _arr[k];
            if (asc_arr[y] != undefined && (asc_arr[y] + asc_arr[_arr[k]]) > 1) {
                if (_arr[k] == y) {
                    var comb = 1;
                    while (--asc_arr[_arr[k]] > 0) { pairs += (comb++); }
                } else pairs += asc_arr[_arr[k]] * asc_arr[y];
                asc_arr[y] = 0;
                asc_arr[_arr[k]] = 0;
            }
        }
        i++;
        k--;
    }
    return pairs;
}
Start at both sides of the array and slowly work your way inwards, keeping a count of how many times each number is found. Once you reach the midpoint all numbers are tallied, and you can then advance both pointers, counting the pairs as you go.
It only counts pairs, but can be reworked to
find the pairs
find pairs < x
find pairs > x
Enjoy!
Ruby implementation
ar1 = [32, 44, 68, 54, 65, 43, 68, 46, 68, 56]
for i in 0..ar1.length-1
  t = 100 - ar1[i]
  if ar1.include?(t)
    s = ar1.count(t)
    if s < 2
      print s, " - ", t, " , ", ar1[i], " pair ", i, "\n"
    end
  end
end
Here is a solution in python:
a = [9, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 9, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 2, 8, 9, 2, 2, 8,
     9, 2, 15, 11, 21, 8, 9, 12, 2, 8, 9, 2, 15, 11, 21, 7, 9, 2, 23, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 12, 9, 2, 15, 11, 21, 8, 9, 2, 2,
     8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 7.12, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9,
     2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 0.87, 78]
i = 0
j = len(a) - 1
my_sum = 8
finded_numbers = ()
iterations = 0
OK = True
while (OK):
    iterations += 1
    if (i < j):
        i += 1
    if (i == j):
        if (i == 0):
            OK = False
            break
        i = 0
        j -= 1
    if (a[i] + a[j] == my_sum):
        finded_numbers = (a[i], a[j])
        OK = False
print finded_numbers
print iterations
I was asked this same question during an interview, and this is the scheme I had in mind. There's an improvement left to do, to permit negative numbers, but it would only be necessary to modify the indexes. Space-wise it isn't good, but I believe the running time here would be O(N) + O(N) + O(subset of N) -> O(N). I may be wrong.
#include <stdio.h>
#include <string.h>

void find_sum(int *array_numbers, int x){
    int i, freq, n_numbers;
    int array_freq[x+1]; // x + 1 as there could be 0's as well
    memset(array_freq, 0, sizeof array_freq);
    if(array_numbers)
    {
        n_numbers = (int) sizeof(array_numbers);
        for(i=0; i<n_numbers; i++){ array_freq[array_numbers[i]]++; } // O(N)
        for(i=0; i<n_numbers; i++)
        { // O(N)
            if ((array_freq[x-array_numbers[i]] > 0) && (array_freq[array_numbers[i]] > 0) && (array_numbers[i] != (x/2)))
            {
                freq = array_freq[x-array_numbers[i]] * array_freq[array_numbers[i]];
                printf("-{%d,%d} %d times\n", array_numbers[i], x-array_numbers[i], freq);
                // "-{3, 7} 6 times" if there are 3 '7's and 2 '3's
                array_freq[array_numbers[i]] = 0;
                array_freq[x-array_numbers[i]] = 0; // doing this we don't get them repeated
            }
        } // end loop
        if ((x%2) == 0)
        {
            freq = array_freq[x/2];
            n_numbers = 0;
            for(i=1; i<freq; i++)
            { // O([size-k subset])
                n_numbers += (freq-i);
            }
            printf("-{%d,%d} %d times\n", x/2, x/2, n_numbers);
        }
        return;
    }else{
        printf("nothing to do here, bad pointer\n");
        return; // Incoming NULL array
    }
}
Criticism is welcome.
In Java, this depends on the max number in the array.
It returns an int[] holding the indexes of two elements.
It is O(N).
public static int[] twoSum(final int[] nums, int target) {
    int[] r = new int[2];
    r[0] = -1;
    r[1] = -1;
    int[] vIndex = new int[0Xffff];
    for (int i = 0; i < nums.length; i++) {
        int delta = 0Xfff;
        int gapIndex = target - nums[i] + delta;
        if (vIndex[gapIndex] != 0) {
            r[0] = vIndex[gapIndex];
            r[1] = i + 1;
            return r;
        } else {
            vIndex[nums[i] + delta] = i + 1;
        }
    }
    return r;
}
First you should build the reverse array => sum minus the actual array,
then check whether any element from this new array exists in the actual array.
const arr = [0, 1, 2, 6];
const sum = 8;
let isPairExist = arr
    .map(item => sum - item) // [8, 7, 6, 2];
    .find((item, index) => {
        arr.splice(0, 1); // an element should pair with another element
        return arr.find(x => x === item);
    })
    ? true : false;
console.log(isPairExist);
A naïve double-loop printout with O(n x n) performance can be improved to linear O(n) performance by using O(n) memory for a hash table, as follows:
void TwoIntegersSum(int[] given, int sum)
{
    Hashtable ht = new Hashtable();
    for (int i = 0; i < given.Length; i++)
        if (ht.Contains(sum - given[i]))
            Console.WriteLine("{0} + {1}", given[i], sum - given[i]);
        else
            ht.Add(given[i], sum - given[i]);
    Console.Read();
}
def pair_sum(arr, k):
    counter = 0
    lookup = set()
    for num in arr:
        if k - num in lookup:
            counter += 1
        else:
            lookup.add(num)
    return counter

pair_sum([1, 3, 2, 2], 4)
The solution in python
Not guaranteed to be possible; how is the given sum selected?
Example: unsorted array of integers
2, 6, 4, 8, 12, 10
Given sum:
7
??