Algorithm - Find existence of a 2d array in another 2d array - arrays

I came across this question in an interview and I am unable to find the best way to do it.
The question says there are two 2D arrays, one bigger than the other.
Let's say,
Array_1 = [[1, 2],
           [5, 6]]
and
Array_2 = [[1, 2,  3,  4],
           [5, 6,  7,  8],
           [9, 10, 11, 12]]
Since Array_2 contains Array_1 here, the algorithm should return true; otherwise, false.
The arrays can be of any size.

Try this.
function test() {
    var x = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]];
    var y = [[6, 7], [10, 11]];
    for (var i = 0; i < x.length; i++) {
        for (var j = 0; j < x[i].length; j++) {
            if (x[i][j] === y[0][0] && findMatch(x, y, i, j)) {
                console.log("Match found");
                return true;
            }
        }
    }
    console.log("Not found");
    return false;
}

// Check whether y occurs in x with its top-left corner at (i, j).
function findMatch(x, y, i, j) {
    // Reject positions where y would run off the edge of x.
    if (i + y.length > x.length || j + y[0].length > x[i].length)
        return false;
    for (var k = 0; k < y.length; k++) {
        for (var n = 0; n < y[k].length; n++) {
            if (y[k][n] !== x[i + k][j + n])
                return false;
        }
    }
    return true;
}
Note that this doesn't match if the smaller array is rotated inside the big array. (Written in JavaScript.)

I would pad the smaller array out to the bigger array's width with null values (or with NaN), convert both to 1D, and strip the unnecessary trailing nulls:
array_1 = [1, 2, null, null, 5, 6]
array_2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
Then compare the 1D arrays while skipping the null values. This would be O(n*m) in the worst case (such as [1,1,1,2] vs [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]) and O(n) in the best case (if every number in the bigger array were different).
Edit: more logic is needed to ensure the comparison stays within complete rows of the bigger array and doesn't wrap across rows (see the sketch below)...
I guess you could convert the arrays to dictionaries of positions and work out a somewhat more complicated but faster algorithm if you need to do multiple comparisons...
You could also rotate the smaller array if needed, e.g.:
array_1_270 = [6, 2, null, null, 1, 5]
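A minimal Python sketch of this padded-1D idea (assuming row-major flattening, with None as the wildcard padding; contains_2d is my name, not from the answer). Restricting the start columns so the pattern fits supplies the row-boundary logic mentioned in the edit above:
def contains_2d(big, small):
    R, C = len(big), len(big[0])
    r, c = len(small), len(small[0])
    flat_big = [v for row in big for v in row]
    # pad each pattern row to the big array's width, then drop the
    # trailing padding, giving e.g. [1, 2, None, None, 5, 6]
    pattern = [v for row in small for v in row + [None] * (C - c)]
    pattern = pattern[:len(pattern) - (C - c)] if C > c else pattern
    for i in range(R - r + 1):
        for j in range(C - c + 1):  # only columns where the pattern fits
            start = i * C + j
            if all(p is None or p == flat_big[start + k]
                   for k, p in enumerate(pattern)):
                return True
    return False

print(contains_2d([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
                  [[1, 2], [5, 6]]))  # True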

You can try the Aho–Corasick algorithm extended to two dimensions. Aho–Corasick is the fastest multiple-pattern-matching algorithm. Here is a similar question: is there any paper or an explanation on how to implement a two dimensional KMP?
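For reference, a rough Python sketch of the row-reduction idea behind such 2D matching (match_2d is my name; the per-position row matching below is naive, whereas the Baker–Bird construction would use an Aho–Corasick automaton over the pattern rows to make it linear):
def match_2d(big, small):
    R, C = len(big), len(big[0])
    r, c = len(small), len(small[0])
    # give each distinct pattern row a label
    labels = {tuple(row): k for k, row in enumerate(small)}
    # grid[i][j] = label of the pattern row matching big[i][j-c+1 .. j]
    grid = [[labels.get(tuple(big[i][j - c + 1:j + 1]), -1) if j >= c - 1 else -1
             for j in range(C)] for i in range(R)]
    want = [labels[tuple(row)] for row in small]  # row-label sequence to find
    for j in range(c - 1, C):
        for i in range(R - r + 1):
            if all(grid[i + k][j] == want[k] for k in range(r)):
                return True
    return False

print(match_2d([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
               [[1, 2], [5, 6]]))  # True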

Maybe a little simpler in Python 2.6
def check():
    small = [[1,2],[5,6]]  # matches upper left corner
    smallrows = len(small)     # rows = 2
    smallcols = len(small[0])  # cols = 2
    big = [[1,2,3,4],[5,6,7,8],[9,10,11,12]]
    bigrows = len(big)     # rows = 3
    bigcols = len(big[0])  # cols = 4
    for i in range(bigrows - smallrows + 1):  # i is the row offset
        for j in range(bigcols - smallcols + 1):  # j is the column offset
            flag = 0
            for k in range(smallrows):
                for l in range(smallcols):
                    if big[i+k][j+l] != small[k][l]:
                        flag = 1
                        break  # mismatch, stop checking this offset
                if flag:
                    break
            if flag == 0:
                return True
    return False

print check()

Related

Finding how many pairs of numbers are in reach of a distance D in a sequence of integers

Let's have an increasing sequence of distinct non-negative integers {0, 2, 3, 4, 7, 10, 12}. What's the fastest way of telling how many pairs are at most a distance of, let's say, D = 3 from each other?
For example, here it would be [0, 2], [0, 3], [2, 3], [2, 4], [3, 4], [4, 7], [7, 10], [10, 12], so 8.
My shot at this:
int arr[] = {0, 2, 3, 4, 7, 10, 12};
int arrLength = 7;
int k = 1;
int D = 3;
int sum = 0;
for (int i = 0; i < arrLength;) {
    if (i + k < arrLength && arr[i + k] - arr[i] <= D) {
        sum++;
        k++;
    }
    else {
        i++;
        k = 1;
    }
}
printf("Number of pairs: %d\n", sum);
It takes too much time for larger arrays. Is there another way of exploiting the facts that:
The sequence is always increasing.
There can't be two same numbers.
We don't need to print out the exact pairs, just the number of them.
The integers can't be negative.
We could somehow share already computed pairs to other iterations.
The else clause is very pessimistic. You don't need to reset k to 1: it is obvious that a[i+1] forms a pair with all values in the [i+2, i+k) range. Consider a sliding window, along the lines of (untested)
i = 0;
j = 1;
while (i < arrLen) {
    while (j < arrLen && a[j] - a[i] <= D) {
        j++;
    }
    i++;
    sum += j - i;
    if (i == j) {
        j++;
    }
}
with a linear time complexity.
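Since the fragment above is marked untested, here's a quick runnable transcription in Python (naming is mine); it reproduces the expected count of 8 for the example sequence:
def count_pairs_window(a, d):
    # a must be sorted ascending; mirrors the C-style fragment above
    i, j, total = 0, 1, 0
    n = len(a)
    while i < n:
        while j < n and a[j] - a[i] <= d:
            j += 1
        i += 1
        total += j - i
        if i == j:
            j += 1
    return total

print(count_pairs_window([0, 2, 3, 4, 7, 10, 12], 3))  # 8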
You can do it by dynamic programming. If M(n) is the number of specified pairs among the first n elements, we have M(n) = M(n-1) + (the number of elements of A[0..n-2] within distance D of A[n-1]). Since the sequence is sorted, that last count can be computed with binary search in log(n). Hence the recurrence T(n) = T(n-1) + log(n) gives a time complexity of O(n log(n)).
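A hedged sketch of that recurrence in Python (my naming; bisect supplies the O(log n) search):
import bisect

def count_pairs_bisect(a, d):
    # a is sorted ascending; for each j, one binary search finds how
    # many earlier elements satisfy a[j] - a[i] <= d
    total = 0
    for j in range(1, len(a)):
        lo = bisect.bisect_left(a, a[j] - d, 0, j)
        total += j - lo
    return total

print(count_pairs_bisect([0, 2, 3, 4, 7, 10, 12], 3))  # 8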

Google Kick Start Time Limited on Dart

I found out that my code gets Time Limit Exceeded, but I believe the algorithm is fine. Is it a problem with my code or with the language?
Here is the task: Task src
An arithmetic array is an array that contains at least two integers and the differences between consecutive integers are equal. For example, [9, 10], [3, 3, 3], and [9, 7, 5, 3] are arithmetic arrays, while [1, 3, 3, 7], [2, 1, 2], and [1, 2, 4] are not arithmetic arrays.
Sarasvati has an array of N non-negative integers. The i-th integer of the array is Ai. She wants to choose a contiguous arithmetic subarray from her array that has the maximum length. Please help her to determine the length of the longest contiguous arithmetic subarray.
import 'dart:io';
import 'dart:math';

foo(array) {
    var diff = [];
    var size = 2;
    var answ = 2;
    for (int i = 0; i < array.length - 1; i++) {
        diff.add(array[i + 1] - array[i]);
    }
    for (int i = 1; i < diff.length; i++) {
        if (diff[i] == diff[i - 1]) {
            size++;
        } else {
            size = 2;
        }
        answ = max(answ, size);
    }
    return answ;
}

void main() {
    var N = int.parse(stdin.readLineSync());
    for (int i = 0; i < N; i++) {
        var M = int.parse(stdin.readLineSync()); // array length; read but not otherwise needed
        String s = stdin.readLineSync();
        var intArray = [];
        s.trim().split(" ").forEach((x) => intArray.add(int.parse(x)));
        print("Case #${i + 1}: ${foo(intArray)}");
    }
}

Divide array into sub arrays such that no sub array contains duplicate elements

I have an array of 32 numbers [1,2,3,4,4,4,4,5,5,5,5,5,6,6,7,7,7,7,8,9,10,10,11,12,13,13,14,14,15,16,17,17]
I want to partition this array into 8 sub arrays each of size 4 such that no sub array has duplicate elements.
In how many ways can I do this? What is the most optimal solution to generate all permutations, and to generate a single random permutation? The order of the sub-arrays does not matter, and neither does the order of elements inside each sub-array.
For my original problem I do not need to generate all permutations. I just have to generate a random permutation every time my program is run.
My approach was to randomly shuffle the array using the Fisher–Yates algorithm and keep reshuffling until I get all 8 sub-arrays with no duplicate elements. Of course, that is not the best approach.
As part of my solution I shuffle the array and start adding its elements one by one to the sub-arrays. If a sub-array already has a number, I keep skipping elements from the shuffled array until I reach a number that is not in the sub-array. This approach fails for some cases.
Pseudocode of what I have tried:
let shuffledArray = shuffle(originalArray);
let subArrays = [];
for (let i = 0; i < 8; i++) {
    subArrays[i] = [];
    for (let j = 0; j < 32; j++) {
        if (!subArrays[i].contains(shuffledArray[j]) && !shuffledArray[j].used) {
            subArrays[i].push(shuffledArray[j]);
            shuffledArray[j].used = true;
        }
        if (subArrays[i].length == 4) {
            break;
        }
    }
}
if subArrays has any sub array with duplicate elements
    then repeat the above steps
else we have generated a random permutation
As you can see, the above approach fails when, after shuffling, all the duplicate numbers lie at the end, so as a hack I repeat all the steps again and again until I get a result.
I am using JavaScript, but answers in any language are welcome as long as they can be converted into JavaScript.
Also, it would be great if anyone could provide a general solution for N elements and K groups.
This is my first question at SO. Feel free to edit/suggest improvements.
An option is to first break up your list into groups of identical numbers, then sort the groups by length. Then you can build each sub-array by taking one element each from the longest, second-longest, third-longest, and fourth-longest groups. When you empty a group, remove it.
Here's a JS implementation:
function partition(arr, N) {
    // group identical items together and sort the groups by length
    // groups will be [[5, 5, 5, 5, 5], [4, 4, 4, 4], ...]
    let groups = Object.values(arr.reduce((obj, n) => {
        (obj[n] || (obj[n] = [])).push(n)
        return obj
    }, {})).sort((a, b) => b.length - a.length)
    let res = []
    while (groups.length >= N) {
        let group = []
        let i = 0
        while (group.length < N) {
            group.push(groups[i].pop())
            if (groups[i].length < 1) groups.splice(i, 1)
            else i++
        }
        res.push(group)
    }
    return res
}

let l = [1,2,3,4,4,4,4,5,5,5,5,5,6,6,7,7,7,7,8,9,10,10,11,12,13,13,14,14,15,16,17,17]
console.log(partition(l, 4).map(arr => arr.join(',')))
// with 5
console.log(partition(l, 5).map(arr => arr.join(',')))
You can use bitmasking for this problem. Start by generating all 17-bit numbers that have exactly 4 bits set to 1. Each such number represents a possible group: if the i-th bit of the number is set, then i+1 is part of that group.
Now, out of these generated numbers, your task is just to repeatedly select 8 numbers satisfying the frequency constraints of each element, which can be done easily.
I'll get back to you if I find some other approach.
EDIT: Alternatively, you can use recursion in the following way: start with 8 numbers, all initially set to 0; for each input element a[i], set the (a[i]-1)-th bit to 1 in exactly one of those numbers which has that bit set to 0 and has fewer than 4 bits set in total.
When you reach a leaf of the recursion, you'll have 8 numbers representing the bitmasks described above. You can use them for the partition.
You can use this approach by creating, say, 100 sets of 8 numbers initially and returning from the recursion. Once all these 100 are used up, you can run the recursion again to create double the number of sets formed in the previous step, and so on.
#include <bits/stdc++.h>
using namespace std;

int num = 0;
vector<vector<int> > sets;

void recur(int* arr, vector<int>& masks, int i) {
    if (num == 0)
        return;
    if (i == 32) {
        vector<int> newSet;
        for (int j = 0; j < 8; j++)
            newSet.push_back(masks[j]);
        sort(newSet.begin(), newSet.end());
        // check whether this set of masks was already recorded
        int flag = 0;
        for (int j = 0; j < (int)sets.size(); j++) {
            flag = 1;
            for (int k = 0; k < 8; k++)
                flag = flag && (newSet[k] == sets[j][k]);
            if (flag) break;
        }
        if (!flag) {
            sets.push_back(newSet);
            num--;
        }
        return;
    }
    for (int ii = 0; ii < 8; ii++) {
        if (__builtin_popcount(masks[ii]) < 4 && (masks[ii] & (1 << (arr[i] - 1))) == 0) {
            masks[ii] = masks[ii] ^ (1 << (arr[i] - 1));
            recur(arr, masks, i + 1);
            masks[ii] = masks[ii] ^ (1 << (arr[i] - 1)); // undo on backtrack
        }
    }
}

int main() {
    num = 100;
    int num1 = num;
    vector<int> masks;
    for (int i = 0; i < 8; i++)
        masks.push_back(0);
    int arr[] = {1,2,3,15,16,4,4,4,4,5,5,5,5,5,6,6,7,7,7,7,8,9,10,10,11,12,13,13,14,14,17,17};
    recur(arr, masks, 0);
    for (int j = 0; j < num1 && j < (int)sets.size(); j++) {
        for (int i = 0; i < 8; i++) {
            // i-th group
            printf("%d group : ", i);
            for (int k = 0; k < 17; k++) {
                if (sets[j][i] & (1 << k))
                    printf("%d ", k + 1);
            }
            printf("\n");
        }
        printf("\n\n\n======\n\n\n");
    }
    return 0;
}
Is this what you are looking for?
Here's a demonstration of enumerating all possibilities for a set (not a multiset as in your example), just to show how rapidly the number of combinations increases. For a partition into 8 parts of 4 elements each, the number of combinations will be enormous. I'm not sure, but you may be able to adapt some of these ideas to handle the multiset, or at least first conduct a partial enumeration and then place the repeated elements randomly.
function f(ns, subs){
    if (ns.length != subs.reduce((a, b) => a + b))
        throw new Error('Subset cardinality mismatch');
    function g(i, _subs){
        if (i == ns.length)
            return [_subs];
        let res = [];
        const cardinalities = new Set();
        function h(j){
            let temp = _subs.map(x => x.slice());
            temp[j].push(ns[i]);
            res = res.concat(g(i + 1, temp));
        }
        for (let j = 0; j < subs.length; j++){
            if (!_subs[j].length && !cardinalities.has(subs[j])){
                h(j);
                cardinalities.add(subs[j]);
            } else if (_subs[j].length && _subs[j].length < subs[j]){
                h(j);
            }
        }
        return res;
    }
    let _subs = [];
    subs.map(_ => _subs.push([]));
    return g(0, _subs);
}

// https://oeis.org/A025035
let N = 12;
let K = 3;
for (let n = K; n <= N; n += K){
    let a = [];
    for (let i = 0; i < n; i++)
        a.push(i);
    let b = [];
    for (let i = 0; i < n / K; i++)
        b.push(K);
    console.log(`\n${ JSON.stringify(a) }, ${ JSON.stringify(b) }`);
    let result = f(a, b);
    console.log(`${ result.length } combinations`);
    if (n < 7){
        let str = '';
        for (let i of result)
            str += '\n' + JSON.stringify(i);
        console.log(str);
    }
    console.log('------');
}
The following Python code uses a simple method for producing a random partitioning each time it is run. It shuffles the list of 32 integers (to give a random result), then uses a first-fit + backtracking method to find the first arrangement that results from that shuffle. Efficiency: the Fisher–Yates shuffle used here is an O(n) algorithm. Finding the first arrangement from a shuffle can be close to O(n) or can be far worse, depending on the original numbers and on the shuffle, as noted below.
Caveats: ideally, having a different shuffle would lead to a different partition. But that cannot be, because there are so many more different shuffles than different partitions (perhaps 10^20 times as many shuffles as partitions). Also ideally, every possible partition would have equal probability of being produced. I don't know if that's the case here, and don't even know whether this method covers all possible partitions. For example, it's conceivable that some partitions cannot be generated by a first-fit + backtracking method.
While this method generates the vast majority of its solutions quite quickly -- e.g. under a millisecond -- it sometimes gets bogged down and wastes a lot of time due to conflicts occurring early in the recursion that aren't detected until several layers deeper. For example, times for finding four sets of 1000 different solutions each were 96 s, 166 s, 125 s, and 307 s, while times for finding sets of 100 different solutions included 56 ms, 78 ms, 1.7 s, 5 s, 50 s.
Some program notes: in the shuffled list s we keep 2^(mn-k) instead of k. Working with the data as bit masks instead of as counting numbers speeds up tests for duplicates. The exponent mn-k (in 2^(mn-k)) makes array u sort so that output is in ascending order. In Python, # introduces comments. Brackets with a for expression inside represent a "list comprehension", a way of representing a list that can be generated using a for statement. The expression [0]*nc denotes a list or array of nc elements, initially all 0.
from random import randint

A = [1,2,3,4,4,4,4,5,5,5,5,5,6,6,7,7,7,7,8,
     9,10,10,11,12,13,13,14,14,15,16,17,17]  # Original number list
ns = len(A)     # Number of numbers
nc = 8          # Number of cells
mc = 4          # Max cell occupancy
rc = range(nc)  # [0,1,...nc-1]
mn = max(A)     # Max number
s = [2**(mn-k) for k in A]
for i in range(ns-1, 0, -1):
    j = randint(0, i)
    s[i], s[j] = s[j], s[i]  # Do a shuffle exchange

# Create tracking arrays: n for cell count, u for used numbers.
n = [0]*nc
u = [0]*nc

def descend(level):
    if level == ns:
        return True
    v = s[level]  # The number we are trying to place
    for i in rc:
        if (v & u[i] == 0) and n[i] < mc:
            u[i] |= v  # Put v into cell i
            n[i] += 1
            if descend(level+1):
                return True  # Got solution, go up and out
            else:
                u[i] ^= v  # Remove v from cell i
                n[i] -= 1
    return False  # Failed at this level, so backtrack

if descend(0):
    for v in sorted(u, reverse=True):
        c = [mn-k for k in range(mn+1) if v & 2**k]
        print(sorted(c))
else:
    print('Failed')
Some example output:
[1, 2, 5, 9]
[3, 4, 6, 14]
[4, 5, 6, 10]
[4, 5, 7, 17]
[4, 10, 15, 16]
[5, 7, 8, 17]
[5, 7, 11, 13]
[7, 12, 13, 14]
[1, 4, 7, 13]
[2, 5, 7, 8]
[3, 4, 5, 17]
[4, 5, 6, 14]
[4, 6, 7, 9]
[5, 10, 11, 13]
[5, 10, 12, 16]
[7, 14, 15, 17]

find pair of numbers in array that add to given sum

Question: Given an unsorted array of positive integers, is it possible to find a pair of integers from that array that sum up to a given sum?
Constraints: this should be done in O(n) and in-place (without any external storage like arrays or hash-maps), though you can use extra variables/pointers.
If this is not possible, can a proof be given for the same?
If you have a sorted array you can find such a pair in O(n) by moving two pointers toward the middle
i = 0
j = n - 1
while (i < j) {
    if      (a[i] + a[j] == target) return (i, j);
    else if (a[i] + a[j] <  target) i += 1;
    else if (a[i] + a[j] >  target) j -= 1;
}
return NOT_FOUND;
The sorting can be made O(N) if you have a bound on the size of the numbers (or if the array is already sorted in the first place). Even then, the log n factor is really small and I don't want to bother to shave it off.
Proof:
If there is a solution (i*, j*), suppose, without loss of generality, that i reaches i* before j reaches j*. Since for all j' between j* and j we know that a[j'] > a[j*] we can extrapolate that a[i] + a[j'] > a[i*] + a[j*] = target and, therefore, that all the following steps of the algorithm will cause j to decrease until it reaches j* (or an equal value) without giving i a chance to advance forward and "miss" the solution.
The interpretation in the other direction is similar.
An O(N) time and O(1) space solution that works on a sorted array:
Let M be the value you're after. Use two pointers, X and Y. Start X=0 at the beginning and Y=N-1 at the end. Compute the sum sum = array[X] + array[Y]. If sum > M, then decrement Y, otherwise increment X. If the pointers cross, then no solution exists.
You can sort in place to get this for a general array, but I'm not certain there is an O(N) time and O(1) space solution in general.
My solution in Java (time complexity O(n)); this will output all the pairs with a given sum.
import java.util.HashMap;
import java.util.Map;

public class Test {
    public static void main(String[] args) {
        Map<Integer, Integer> hash = new HashMap<>();
        int arr[] = {1, 4, 2, 6, 3, 8, 2, 9};
        int sum = 5;
        for (int i = 0; i < arr.length; i++) {
            hash.put(arr[i], i);
        }
        for (int i = 0; i < arr.length; i++) {
            Integer j = hash.get(sum - arr[i]);
            // j > i ensures each pair is printed once and an element
            // is never paired with itself
            if (j != null && j > i) {
                System.out.println(arr[i] + " " + (sum - arr[i]));
            }
        }
    }
}
This might be possible if the array contains numbers whose upper limit is known to you beforehand. Then use counting sort or radix sort (O(n)) and use the algorithm which #PengOne suggested.
Otherwise,
I can't think of an O(n) solution, but an O(n log n) solution works this way:
First sort the array using merge sort or quicksort (in-place). For each element, find whether sum - array_element is present in the sorted array,
using binary search.
So the total time complexity: O(n log n) + O(n log n) -> O(n log n).
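A small sketch of that sort-plus-binary-search idea in Python (names are mine; bisect supplies the binary search):
import bisect

def has_pair_sorted(nums, s):
    # O(n log n): sort, then binary-search each element's complement
    # strictly to its right, which also handles duplicates cleanly
    nums = sorted(nums)
    for i, x in enumerate(nums):
        j = bisect.bisect_left(nums, s - x, i + 1)
        if j < len(nums) and nums[j] == s - x:
            return True
    return False

print(has_pair_sorted([8, 1, 4, 2, 9], 10))  # True (8 + 2)
print(has_pair_sorted([8, 1, 4, 2, 9], 15))  # False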
As #PengOne mentioned, it's not possible in the general scheme of things. But it can be done if you make some restrictions on the input data:
1. All elements are all positive or all negative; if not, you would need to know the range (high, low) and shift accordingly.
2. K, the sum of two integers, is sparse compared to the elements in general.
3. It's okay to destroy the input array A[N].
Step 1: Move all elements less than SUM to the beginning of the array; say in N passes we have divided the array into [0, K] and [K, N-1] such that [0, K] contains elements <= SUM.
Step 2: Since we know the bounds (0 to SUM), we can use radix sort.
Step 3: Use binary search on A[0..K]. One good thing is that to find a complementary element we need only look at half of the array: as i iterates over A[0 .. K/2 + 1], we binary-search in A[i .. K].
So the total approximate time is N + K + (K/2)·lg(K), where K is the number of elements between 0 and SUM in the input A[N].
Note: if you use #PengOne's approach, you can do step 3 in K. The total time would then be N + 2K, which is definitely O(N).
We use no additional memory but destroy the input array, which is also not bad since it had no ordering to begin with.
First off, sort the array using radix sort. That'll set you back O(kN). Then proceed as #PengOne suggests.
The following site gives a simple solution using a hashset: it scans the numbers one by one and searches the hashset for (given sum - current number):
http://www.dsalgo.com/UnsortedTwoSumToK.php
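The idea from that link, as a minimal Python sketch (my naming; note it uses O(n) extra space, so it doesn't meet the question's in-place constraint):
def has_pair_with_sum(nums, target):
    # one pass: check each number against the complements seen so far
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False

print(has_pair_with_sum([1, 4, 2, 6, 3], 5))  # True (1 + 4)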
Here's an O(N) algorithm. It relies on an in-place O(N) duplicate-removal algorithm and the existence of a good hash function for the ints in your array.
First, remove all duplicates from the array.
Second, go through the array and replace each number x with min(x, S-x), where S is the sum you want to reach.
Third, check whether there are any duplicates in the array: if x is duplicated, then x and S-x must both have occurred in the original array, and you've found your pair.
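An illustrative (set-based, not in-place) Python version of those three steps, with names of my choosing; note that the dedup step loses a pair made of two equal elements (x == S/2 occurring twice):
from collections import Counter

def has_pair_via_dedup(nums, s):
    a = set(nums)                   # step 1: remove duplicates
    b = [min(x, s - x) for x in a]  # step 2: fold x and s-x onto one value
    # step 3: a duplicate now means both x and s-x were present
    return any(v > 1 for v in Counter(b).values())

print(has_pair_via_dedup([1, 4, 2, 6, 3], 5))  # True (1 + 4 or 2 + 3)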
Use counting sort to sort the array: O(n).
Then take two pointers, one starting from the 0th index of the array and another from the end, say (n-1), and run the loop while low < high:
    sum = arr[low] + arr[high]
    if (sum == target)
        print low, high
    if (sum < target)
        low++
    if (sum > target)
        high--
The two-pointer scan takes O(n) time and counting sort takes O(n), so the total time complexity is O(n).
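A Python sketch of those steps (the function name is mine), assuming non-negative integers so counting sort applies:
def find_pair_counting(nums, target):
    # counting sort: O(n + max value)
    counts = [0] * (max(nums) + 1)
    for x in nums:
        counts[x] += 1
    a = [v for v in range(len(counts)) for _ in range(counts[v])]
    # two-pointer scan over the sorted copy
    low, high = 0, len(a) - 1
    while low < high:
        s = a[low] + a[high]
        if s == target:
            return a[low], a[high]
        if s < target:
            low += 1
        else:
            high -= 1
    return None

print(find_pair_counting([1, 4, 2, 6, 3, 8, 2, 9], 5))  # (1, 4)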
In JavaScript: as n grows, the time and number of iterations increase; the program performs about n*(n-1)/2 comparisons, where n is the number of elements. The given target sum sits in the test if (arr[i] + arr[j] === 0), where 0 could be any given number.
var arr = [-4, -3, 3, 4];
var lengtharr = arr.length;
var i = 0;
var j = 1;
var k = 1;
do {
    do {
        if (arr[i] + arr[j] === 0) {
            document.write(' Elements arr[' + i + '][' + j + '] sum 0');
        } else {
            document.write('____');
        }
        j++;
    } while (j < lengtharr);
    k++;
    j = k;
    i++;
} while (i < (lengtharr - 1));
Here is a solution which takes into account duplicate entries. It is written in JavaScript and runs on both sorted and unsorted arrays. The solution runs in O(n) time.
var count_pairs_unsorted = function(_arr, x) {
    // setup variables
    var asc_arr = [];
    var len = _arr.length;
    if (!x) x = 0;
    var pairs = 0;
    var i = 0;
    var k = len - 1;
    if (len < 2) return pairs;
    // tally all the like numbers into buckets
    // (-(~v) is v + 1, treating an undefined bucket as 0)
    while (i < k) {
        asc_arr[_arr[i]] = -(~(asc_arr[_arr[i]]));
        asc_arr[_arr[k]] = -(~(asc_arr[_arr[k]]));
        i++;
        k--;
    }
    // odd amount of elements
    if (i == k) {
        asc_arr[_arr[k]] = -(~(asc_arr[_arr[k]]));
        k--;
    }
    // count all the pairs, reducing the tallies as you go
    while (i < len || k > -1) {
        var y;
        if (i < len) {
            y = x - _arr[i];
            if (asc_arr[y] != undefined && (asc_arr[y] + asc_arr[_arr[i]]) > 1) {
                if (_arr[i] == y) {
                    var comb = 1;
                    while (--asc_arr[_arr[i]] > 0) { pairs += (comb++); }
                } else pairs += asc_arr[_arr[i]] * asc_arr[y];
                asc_arr[y] = 0;
                asc_arr[_arr[i]] = 0;
            }
        }
        if (k > -1) {
            y = x - _arr[k];
            if (asc_arr[y] != undefined && (asc_arr[y] + asc_arr[_arr[k]]) > 1) {
                if (_arr[k] == y) {
                    var comb = 1;
                    while (--asc_arr[_arr[k]] > 0) { pairs += (comb++); }
                } else pairs += asc_arr[_arr[k]] * asc_arr[y];
                asc_arr[y] = 0;
                asc_arr[_arr[k]] = 0;
            }
        }
        i++;
        k--;
    }
    return pairs;
}
Start at both ends of the array and slowly work your way inwards, keeping a tally of how many times each number is found. Once you reach the midpoint all numbers are tallied, and you can then advance both pointers, counting the pairs as you go.
It only counts pairs, but it can be reworked to:
find the pairs
find pairs < x
find pairs > x
Enjoy!
Ruby implementation
ar1 = [32, 44, 68, 54, 65, 43, 68, 46, 68, 56]
for i in 0..ar1.length-1
  t = 100 - ar1[i]
  if ar1.include?(t)
    s = ar1.count(t)
    if s < 2
      print s, " - ", t, " , ", ar1[i], " pair ", i, "\n"
    end
  end
end
Here is a solution in python:
a = [9, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 9, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 2, 8, 9, 2, 2, 8,
9, 2, 15, 11, 21, 8, 9, 12, 2, 8, 9, 2, 15, 11, 21, 7, 9, 2, 23, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 12, 9, 2, 15, 11, 21, 8, 9, 2, 2,
8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 7.12, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9,
2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 9, 2, 2, 8, 9, 2, 15, 11, 21, 8, 0.87, 78]
i = 0
j = len(a) - 1
my_sum = 8
found_numbers = ()
iterations = 0
OK = True
while OK:
    iterations += 1
    if i < j:
        i += 1
    if i == j:
        if i == 0:
            OK = False
            break
        i = 0
        j -= 1
    if a[i] + a[j] == my_sum:
        found_numbers = (a[i], a[j])
        OK = False
print(found_numbers)
print(iterations)
I was asked this same question during an interview, and this is the scheme I had in mind. There's an improvement left to do, to permit negative numbers, but it would only be necessary to shift the indexes. Space-wise it isn't good, but I believe the running time here would be O(N) + O(N) + O(subset of N) -> O(N). I may be wrong.
#include <stdio.h>
#include <string.h>

void find_sum(int *array_numbers, int n_numbers, int x)
{
    int i, freq;
    if (!array_numbers) {
        printf("nothing to do here, bad pointer\n"); // incoming NULL array
        return;
    }
    int array_freq[x + 1]; // x + 1 as there could be 0's as well
    memset(array_freq, 0, (x + 1) * sizeof(int));
    for (i = 0; i < n_numbers; i++) { // O(N)
        if (array_numbers[i] <= x)    // values above x can never be in a pair
            array_freq[array_numbers[i]]++;
    }
    for (i = 0; i < n_numbers; i++) { // O(N)
        int v = array_numbers[i];
        if (v <= x && v != x - v && array_freq[x - v] > 0 && array_freq[v] > 0) {
            freq = array_freq[x - v] * array_freq[v];
            printf("-{%d,%d} %d times\n", v, x - v, freq);
            // "-{3,7} 6 times" if there are 3 '7's and 2 '3's
            array_freq[v] = 0;     // zero both counts so the pair
            array_freq[x - v] = 0; // is not reported twice
        }
    }
    if ((x % 2) == 0) { // pairs {x/2, x/2}: choose 2 out of freq
        int count = 0;
        freq = array_freq[x / 2];
        for (i = 1; i < freq; i++) // O([size-k subset])
            count += (freq - i);
        printf("-{%d,%d} %d times\n", x / 2, x / 2, count);
    }
}
Criticism is welcome.
In Java, this depends on the max number in the array.
It returns an int[] holding the (1-based) indexes of the two elements.
It is O(N).
public static int[] twoSum(final int[] nums, int target) {
    int[] r = new int[2];
    r[0] = -1;
    r[1] = -1;
    int[] vIndex = new int[0xffff]; // table indexed by (value + delta)
    for (int i = 0; i < nums.length; i++) {
        int delta = 0xfff; // offset keeps small negative gaps in range
        int gapIndex = target - nums[i] + delta;
        if (vIndex[gapIndex] != 0) {
            r[0] = vIndex[gapIndex];
            r[1] = i + 1;
            return r;
        } else {
            vIndex[nums[i] + delta] = i + 1;
        }
    }
    return r;
}
First build the complement array (sum minus each element of the actual array),
then check whether any element of this new array exists in the actual array.
const arr = [0, 1, 2, 6];
const sum = 8;
let isPairExist = arr
    .map(item => sum - item) // [8, 7, 6, 2]
    .find((item, index) => {
        arr.splice(0, 1); // an element should pair with another element
        return arr.find(x => x === item);
    })
    ? true : false;
console.log(isPairExist);
A naïve double-loop printout with O(n x n) performance can be improved to linear O(n) performance by using O(n) memory for a hash table, as follows:
void TwoIntegersSum(int[] given, int sum)
{
    Hashtable ht = new Hashtable();
    for (int i = 0; i < given.Length; i++)
        if (ht.Contains(sum - given[i]))
            Console.WriteLine("{0} + {1}", given[i], sum - given[i]);
        else
            ht[given[i]] = sum - given[i]; // indexer, so duplicate keys don't throw
    Console.Read();
}
def pair_sum(arr, k):
    counter = 0
    lookup = set()
    for num in arr:
        if k - num in lookup:
            counter += 1
        else:
            lookup.add(num)
    return counter

print(pair_sum([1, 3, 2, 2], 4))  # 2
The solution in Python.
Not guaranteed to be possible; how is the given sum selected?
Example: unsorted array of integers
2, 6, 4, 8, 12, 10
Given sum:
7
??

Finding common elements in two arrays of different size

I have a problem: finding the common elements in two arrays, which are of different sizes.
Take array A1 of size n and array A2 of size m, with m != n.
So far, I've tried to iterate through the lists one by one and copy elements to another list, marking an element if it's already contained, but I know it's not a good solution.
Sort the arrays. Then iterate through them with two pointers, always advancing the one pointing to the smaller value. When they point to equal values, you have a common value. This will be O(n log n + m log m), where n and m are the sizes of the two lists. It's just like a merge in merge sort, except you only produce output when the values being pointed to are equal.
def common_elements(a, b):
    a.sort()
    b.sort()
    i, j = 0, 0
    common = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            common.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return common

print 'Common values:', ', '.join(map(str, common_elements([1, 2, 4, 8], [1, 4, 9])))
outputs
Common values: 1, 4
If the elements aren't comparable, throw the elements from one list into a hashmap and check the elements in the second list against the hashmap.
If you want to make it efficient, I would convert the smaller array into a hashset and then iterate over the larger array, checking whether the current element is contained in the hashset. The hash lookups are efficient compared to sorting; sorting arrays is expensive.
Here's my sample code:
import java.util.*;

public class CountTest {
    public static void main(String... args) {
        Integer[] array1 = {9, 4, 6, 2, 10, 10};
        Integer[] array2 = {14, 3, 6, 9, 10, 15, 17, 9};
        Set<Integer> hashSet = new HashSet<>(Arrays.asList(array1));
        Set<Integer> commonElements = new HashSet<>();
        for (int i = 0; i < array2.length; i++) {
            if (hashSet.contains(array2[i])) {
                commonElements.add(array2[i]);
            }
        }
        System.out.println("Common elements " + commonElements);
    }
}
Output:
Common elements [6, 9, 10]
In APL:
∪A1∩A2
example:
A1←9, 4, 6, 2, 10, 10
A1
9 4 6 2 10 10
A2←14, 3, 6, 9, 10, 15, 17, 9
A2
14 3 6 9 10 15 17 9
A1∩A2
9 6 10 10
∪A1∩A2
9 6 10
Throw your A2 array into a HashSet, then iterate through A1; if the current element is in the set, it's a common element. This takes O(m + n) time and O(min(m, n)) space.
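A minimal Python sketch of that (names mine; building the set from the smaller array gives the O(min(m, n)) space bound):
def common_values(a1, a2):
    smaller, larger = (a1, a2) if len(a1) <= len(a2) else (a2, a1)
    lookup = set(smaller)                      # O(min(m, n)) space
    return {x for x in larger if x in lookup}  # O(m + n) time overall

print(common_values([9, 4, 6, 2, 10, 10], [14, 3, 6, 9, 10, 15, 17, 9]))
# e.g. {9, 10, 6}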
Looks like nested loops:
commons = empty
for each element a1 in A1
    for each element a2 in A2
        if a1 == a2
            commons.add(a1)
It shouldn't matter at all whether the arrays have the same size.
Depending on the language and framework used, set operations might come in handy.
Try heapifying both arrays followed by a merge to find the intersection.
Java example:
public static <E extends Comparable<E>> List<E> intersection(Collection<E> c1,
        Collection<E> c2) {
    List<E> result = new ArrayList<E>();
    PriorityQueue<E> q1 = new PriorityQueue<E>(c1),
                     q2 = new PriorityQueue<E>(c2);
    while (!(q1.isEmpty() || q2.isEmpty())) {
        E e1 = q1.peek(), e2 = q2.peek();
        int c = e1.compareTo(e2);
        if (c == 0) result.add(e1);
        if (c <= 0) q1.remove();
        if (c >= 0) q2.remove();
    }
    return result;
}
See this question for more examples of merging.
The complexity of what I give is O(N*M + N). Note that it is C-style pseudocode and that it returns distinct values;
e.g. [1,1,1,2,2,4] and [1,1,1,2,2,2,5] will return [1,2].
The complexity is:
N*M because of the for loops,
+ N because of the check for whether the value already exists in ArrayCommon[] (which can hold up to N entries when Array2[] duplicates part of Array1[]), assuming N is the size of the smaller array (N < M).
int Array1[m] = { Whatever };
int Array2[n] = { Whatever };
int ArrayCommon[n] = { };

void AddToCommon(int data)
{
    // How many commons have we got so far?
    static int pos = 0;
    bool found = false;
    for (int i = 0; i < pos; i++)
    {
        // Already found it?
        if (ArrayCommon[i] == data)
        {
            found = true;
        }
    }
    if (!found)
    {
        // Add it
        ArrayCommon[pos] = data;
        pos++;
    }
}

for (int i = 0; i < m; i++)
{
    for (int j = 0; j < n; j++)
    {
        // Found a common element!
        if (Array1[i] == Array2[j])
            AddToCommon(Array1[i]);
    }
}
In Python, you would write set(A1).intersection(A2). This is the optimal O(n + m).
There's ambiguity in your question though. What's the result of A1=[0, 0], A2=[0, 0, 0]? There's reasonable interpretations of your question that give 1, 2, 3, or 6 results in the final array - which does your situation require?
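A quick illustration of that ambiguity in Python, contrasting the distinct-value and multiset interpretations (the multiset variant uses collections.Counter):
from collections import Counter

A1, A2 = [0, 0], [0, 0, 0]
print(set(A1) & set(A2))                             # distinct values: {0}, 1 result
print(list((Counter(A1) & Counter(A2)).elements()))  # multiset: [0, 0], 2 results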
I solved the problem by using set intersection. It is very elegant. Even though I did not analyze the time complexity, it is probably in a reasonable range.
public Set<Integer> findCommonElements(Integer[] first, Integer[] second)
{
    Set<Integer> set1 = new HashSet<Integer>(Arrays.asList(first));
    Set<Integer> set2 = new HashSet<Integer>(Arrays.asList(second));
    // retainAll keeps only the elements present in both sets
    set1.retainAll(set2);
    return set1;
}
class SortedArr
  # Walk both sorted arrays with two pointers, printing common values.
  def findCommon(a, b)
    i = 0
    j = 0
    l1 = a.length
    l2 = b.length
    while i < l1 && j < l2
      if a[i].to_i > b[j].to_i
        j += 1
      elsif a[i].to_i < b[j].to_i
        i += 1
      else
        puts a[i] # or store it in another data structure
        i += 1
        j += 1
      end
    end
  end
end

t = SortedArr.new
t.findCommon([1, 2, 3, 4, 6, 9, 11, 15], [1, 2, 3, 4, 5, 12, 15])
