Comparing Arrays of Arrays

I have two arrays, a and b, of varying size, each containing child arrays of the same length; both arrays and their child arrays have the same element type (float, for example).
I want to find all matches for the child arrays of b within the child arrays of a.
I'm looking for a faster or better way to do this (perhaps CUDA or SIMD coding).
At the moment I have something like (F#):
let mutable result = 0.0
for a in arrayA do
    for b in arrayB do
        if a = b then
            result <- result + (a |> Array.sum)
My array a contains around 5 million elements and array b around 3000, hence my performance issue.

You are using a brute-force algorithm to solve the problem. Suppose that A and B have sizes N and M respectively, and each small array you are checking for equality is K elements long. Your algorithm takes O(N M K) time in the worst case and O(N M + Z K) in the best case, where Z is the number of matches (which may be as large as N M).
Notice that each of your small arrays is essentially a string. You have two sets of strings, and you want to detect all equal pairs between them.
This problem can be solved with a hash table. Create a hash table with O(M) cells. In this table, store the strings of array B without duplication. After you have added all the strings from B, iterate over the strings of A and check if they are present in the hash table. This solution can be implemented as a randomized one with O((M + N) K) time complexity on average, which is linear in the input size.
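A minimal Python sketch of this hash-table approach (the function name and the summing behaviour from the question are illustrative; B's rows are deduplicated by the set, as the answer suggests):

```python
def sum_of_matching_rows(array_a, array_b):
    """Sum the elements of every row of array_a that also occurs in array_b."""
    # Store each row of B once, as a hashable tuple (the set deduplicates).
    b_rows = {tuple(row) for row in array_b}
    total = 0.0
    for row in array_a:
        if tuple(row) in b_rows:   # average O(K) hash + lookup per row
            total += sum(row)
    return total
```

In practice the hashing and comparison of K-element rows dominate, which is where the vectorization discussed below pays off.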
Also, you can solve the problem in non-randomized way. Put all the strings into a single array X and sort them. During sorting, put strings from A after all equal strings from B. Note that you should remember which strings of X came from which array. You can either use some fast comparison sort, or use radix sort. In the latter case sorting is done in linear time, i.e. in O((M + N) K).
Now all the common strings are stored in X contiguously. You can iterate over X, maintaining the set of strings from B equal to the currently processed string. If you see a string different from the previous one, clear the set. If the string is from B, add it to the set. If it is from A, record that it is equal to the set of elements from B. This is a single pass over X with O(K) time per string, so it takes O((M + N) K) time in total.
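The sort-based pass could be sketched in Python like this (names are illustrative; Python's built-in sort stands in for the comparison or radix sort described above, and the origin tag makes rows from B sort before equal rows from A):

```python
def matches_by_sorting(array_a, array_b):
    """Return the rows of array_a (as tuples) that also occur in array_b."""
    # Tag each row with its origin: 0 = from B, 1 = from A.
    # Equal rows from B then sort before equal rows from A.
    tagged = [(tuple(r), 0) for r in array_b] + [(tuple(r), 1) for r in array_a]
    tagged.sort()

    result = []
    seen_in_b = False
    prev = None
    for row, origin in tagged:
        if row != prev:          # new group of equal rows: reset the flag
            seen_in_b = False
            prev = row
        if origin == 0:          # row came from B
            seen_in_b = True
        elif seen_in_b:          # row from A, and an equal row from B precedes it
            result.append(row)
    return result
```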
If the length K of your strings is not tiny, you can add vectorization to the string operations. In the case of the hash-table approach, most time would be spent computing string hashes. If you choose a polynomial hash modulo 2^32, it is easy to vectorize with SSE2. You also need fast string comparison, which can be done with the memcmp function, which is easily vectorized too. For the sorting solution you need only string comparisons. You might also want to implement a radix sort, which is not possible to vectorize, I'm afraid.
Efficient parallelization of both approaches is not very simple. For the first algorithm, you need a concurrent hash table. Actually, there are even some lock-free hash tables out there. For the second approach, you can parallelize the first step (quicksort is easy to parallelize, radix sort is not). The second step can be parallelized too if there are not too many equal strings: you can split the array X into almost equal pieces, breaking it only between two different strings.

You may save some time comparing large arrays by splitting them into smaller arrays and doing the equality check in parallel.
This chunk function is taken directly from F# Snippets
let chunk chunkSize (arr : _ array) =
    query {
        for idx in 0..(arr.Length - 1) do
        groupBy (idx / chunkSize) into g
        select (g |> Seq.map (fun idx -> arr.[idx]))
    }
Then something like this to compare arrays; note that chunk 4 produces chunks of four elements each:
let fastArrayCompare a1 a2 = async {
    let! a =
        Seq.zip (chunk 4 a1) (chunk 4 a2)
        |> Seq.map (fun (a1',a2') -> async {return a1' = a2'})
        |> Async.Parallel
    return Array.TrueForAll (a,(fun t -> t))}
Obviously you are now adding some extra time for the array splitting, but with many very large array comparisons you should make up this time and then some.

Related

given 3 arrays check if there is any common number

I have 3 arrays a[1...n], b[1...n], c[1...n] which contain integers.
It is not mentioned if the arrays are sorted or if each array has or has not duplicates.
The task is to check if there is any common number in the given arrays and return true or false.
For example : these arrays a=[3,1,5,10] b=[4,2,6,1] c=[5,3,1,7] have one common number : 1
I need to write an algorithm with time complexity O(n^2).
Let the current element traversed in a[] be x, in b[] be y and in c[] be z. Inside the loops, if x, y and z are the same, I can simply return true and stop the program, something like:
for(x=1;x<=n;x++)
    for(y=1;y<=n;y++)
        for(z=1;z<=n;z++)
            if(a[x]==b[y]==c[z])
                return true
But this algorithm has time complexity O(n^3) and I need O(n^2). Any suggestions?
There is a pretty simple and efficient solution for this.
Sort a and b. Complexity = O(NlogN)
For each element in c, use binary search to check if it exists in both a and b. Complexity = O(NlogN).
That'll give you a total complexity of O(NlogN), better than O(N^2).
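A hedged Python sketch of this sort-plus-binary-search idea (the bisect module stands in for a hand-written binary search; the function name is illustrative):

```python
import bisect

def _contains(sorted_arr, x):
    """Binary search: True if x occurs in sorted_arr."""
    i = bisect.bisect_left(sorted_arr, x)
    return i < len(sorted_arr) and sorted_arr[i] == x

def has_common_sorted(a, b, c):
    """True if some value occurs in all three arrays."""
    a, b = sorted(a), sorted(b)          # O(N log N)
    # For each z in c, two binary searches: O(N log N) total.
    return any(_contains(a, z) and _contains(b, z) for z in c)
```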
Create a new array and save the elements common to a and b. Then find the elements common to this array and c.
Python solution:
def find_d(a, b):
    for i in a:
        for j in b:
            if i == j:
                d.append(i)

def findAllCommon(c, d):
    for i in c:
        for j in d:
            if i == j:
                e.append(i)
                break

a = [3,1,5,10]
b = [4,2,6,1]
c = [5,3,1,7]
d = []
e = []
find_d(a, b)
findAllCommon(c, d)
if len(e) > 0:
    print("True")
else:
    print("False")
Since I haven't seen a solution based on sets, I suggest looking at how sets are implemented in your language of choice and doing the equivalent of this:
set(a).intersection(b).intersection(c) != set([])
This evaluates to True if there is a common element, False otherwise. It runs in O(n) average time.
All solutions so far either require O(n) additional space (creating a new array/set) or change the order of the arrays (sorting).
If you want to solve the problem in O(1) additional space and without changing the original arrays, you indeed can't do better than O(n^2) time:
foreach(var x in a) {                                  // n iterations
    if (b.Contains(x) && c.Contains(x)) return true;   // max. 2n per iteration
}
return false;                                          // O(n^2) total
A suggestion:
Combine the three arrays into a single array Z whose size is the sum of the number of entries in each array. Keep track of how many entries came from Array 1, Array 2 and Array 3.
For each entry z, traverse the combined array to see how many duplicates of it there are and where they are. For those which have 2 or more duplicates (i.e. there are 3 or more of the same number), check that the locations of those duplicates correspond to having been in different arrays to begin with (ruling out duplicates within the original arrays). If your number z has 2 or more duplicates and they are all in different arrays (checked through their positions in the array), then store z in a result array.
Report result array.
You will traverse the entire combined array once and then almost (no need to check if Z is a duplicate of itself) traverse it again for each Z, so n^2 complexity.
Also worth noting that the time complexity will now be a function of total number of entries and not of number of arrays (your nested loops would become n^4 with 4 arrays - this will stay as n^2)
You could make it more efficient by always checking the already found duplicates before checking for a new Z - if the new Z is already found as a duplicate to an earlier Z you need not traverse to check for that number again. This will make it more efficient the more duplicates there are - with few duplicates the reduction in number of traverses is probably not worth the extra complexity.
Of course you could also do this without actually combining the values into a single array - you would just need to make sure that your traversing routine looks through the arrays and keeps track of what it finds in the right order.
Edit
Thinking about it, the above is doing way more than you want. It would allow you to report on doubles, quads etc. as well.
If you just want triples, then it is much easier/quicker. Since a triple needs to be in all 3 arrays, you can start by finding those numbers which are in any of the 2 arrays (if they are different lengths, compare the 2 shortest arrays first) and then to check any doublets found against the third array. Not sure what that brings the complexity down to but it will be less than n^2...
There are many ways to solve this. Here are a few selected ones, sorted by complexity (descending), assuming n is the average size of your individual arrays:
Brute force O(n^3)
It's basically the same as what you do: test every triplet combination with 3 nested for loops.
for(x=1;x<=n;x++)
    for(y=1;y<=n;y++)
        for(z=1;z<=n;z++)
            if((a[x]==b[y])&&(b[y]==c[z]))
                return true;
return false;
slightly optimized brute force O(n^2)
Simply check if each element from a is in b and, if yes, check whether it is also in c, which is O(n*(n+n)) = O(n^2) because the b and c loops are no longer nested:
for(x=1;x<=n;x++)
{
    for(ret=false,y=1;y<=n;y++)
        if(a[x]==b[y])
            { ret=true; break; }
    if (!ret) continue;
    for(ret=false,z=1;z<=n;z++)
        if(a[x]==c[z])
            { ret=true; break; }
    if (ret) return true;
}
return false;
exploit sorting O(n.log(n))
Simply sort all arrays, O(n.log(n)), and then just traverse all 3 arrays together to test whether each element is present in all of them (a single for loop, incrementing the pointer at the smallest element). This can also be done with binary search, as one of the other answers suggests, but that is slower (though still not exceeding n.log(n)). Here the O(n) traversal:
for(x=1,y=1,z=1;(x<=n)&&(y<=n)&&(z<=n);)
{
    if((a[x]==b[y])&&(b[y]==c[z])) return true;
    if ((a[x]<=b[y])&&(a[x]<=c[z])) x++;
    else if ((b[y]<=a[x])&&(b[y]<=c[z])) y++;
    else z++;
}
return false;
However, this needs to change the contents of the arrays, or needs additional index arrays for sorting instead (so O(n) space).
histogram based O(n+m)
This can be used only if the range of elements in your arrays is not too big. Let's say the arrays can hold numbers 1..m. Then you build a (modified) histogram holding a set bit for each array in which a value is present, and simply check whether a value is present in all 3:
int h[m+1]; // histogram, indices 1..m
for(i=1;i<=m;i++) h[i]=0; // clear histogram
for(x=1;x<=n;x++) h[a[x]]|=1;
for(y=1;y<=n;y++) h[b[y]]|=2;
for(z=1;z<=n;z++) h[c[z]]|=4;
for(i=1;i<=m;i++) if (h[i]==7) return true; // 7 = present in all three
return false;
This needs O(m) space ...
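For comparison, the same bitmask-histogram idea sketched in Python (m is the assumed upper bound on element values; names are illustrative):

```python
def has_common_bitmask(a, b, c, m):
    """Bitmask 'histogram': h[v] records which arrays contain value v (1..m)."""
    h = [0] * (m + 1)
    for x in a: h[x] |= 1   # bit 0: value occurs in a
    for y in b: h[y] |= 2   # bit 1: value occurs in b
    for z in c: h[z] |= 4   # bit 2: value occurs in c
    return any(bits == 7 for bits in h)   # 7 = present in all three
```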
So you clearly want option #2
Beware: all the code is just your code, copy-pasted and modified directly in the answer editor, so there might be typos or syntax errors I do not see right now...

Intersection of two unsorted integer arrays

Given two arrays, A and B, both of size (m). The numbers in the arrays are in range of [-n,n]. I need to find an algorithm that returns the intersection of A and B in O(m).
For example:
Assume that
A={1,2,14,14,5}
and
B={2,2,14,3}
the algorithm needs to return 2 and 14.
I've tried defining two arrays of size n, one for the positive numbers and the other for the negative numbers, where each index represents the number itself.
I thought I could scan one of A and B, mark each of its elements with 1 in those arrays, and then check the elements of the other array directly.
But it turns out I can only use the arrays after initializing them, which takes O(n).
What can I do to improve the algorithm?
This can be done with a set:
A_set = set(A)
print([b for b in B if b in A_set])
The construction of the set happens in O(m), and checking all elements of B needs O(m) time, so the total runtime complexity is O(m).
You will also need O(m) space to store the set.
If you can create an array or bit vector of 2n+1 Booleans in O(m) time or better, then you don't have to initialize the array before you use it:
Create an array S of 2n+1 elements.
For each element a in A, set S[a+n] = false
For each element b in B, set S[b+n] = true
Return the elements a of A for which S[a+n] == true
If you can't create that array in that time, then maybe you can use #fafl's answer, although that is only expected O(m) time unless the set implementation is very special.
The thing to notice is that there are duplicates in both arrays (I know that's not what a set is; sets have unique entries, but the stated example in the question suggests this is really a multiset). Assume n <= 10^6. Build an array covering -n to n, which can function as a hash map: index 0 stands for -n, and in general index i stands for i-n. Iterate over array A, and on encountering a value c, do hash[c+n]++. Now take an empty collection for the result. Iterate over array B: for each value c in B, if hash[c+n] > 0, put c in the result and decrement the count. Like this one can find the intersection in O(m) (note that initializing the count array itself takes O(n)).
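That counting idea might look like this in Python (a sketch; as in the answer above, the O(n) cost of creating the count array is set aside):

```python
def intersect_multiset(A, B, n):
    """Multiset intersection of A and B, with values in [-n, n]."""
    count = [0] * (2 * n + 1)      # count[v + n] = occurrences of v in A
    for a in A:
        count[a + n] += 1
    result = []
    for b in B:
        if count[b + n] > 0:       # b still has an unmatched copy in A
            result.append(b)
            count[b + n] -= 1      # consume one occurrence
    return result
```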
It is recommended to manually implement a bit-set to solve the problem.
For instance:
min(Array A, Array B) = 1 and max(Array A, Array B) = 14,
so the bit-set covers values 1 to 14. Array A = 11001000000001 (bits 1, 2, 5 and 14 set) and Array B = 01100000000001 (bits 2, 3 and 14 set). ANDing them gives 01000000000001, which is 2 and 14.

Cannot understand CLRS Problem 4-2 case 2

The question:
4-2 Parameter-passing costs
Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself.
This problem examines the implications of three parameter-passing strategies:
1. An array is passed by pointer. Time Theta(1).
2. An array is passed by copying. Time Theta(N), where N is the size of the array.
3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time Theta(q-p+1) if the subarray A[p..q] is passed.
a. Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5 ). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let N be the size of the original problem and n be the size of a subproblem.
b. Redo part (a) for the MERGE-SORT algorithm from Section 2.3.1.
I have trouble understanding how to solve the recurrence of case 2 where arrays are passed by copying for both algorithms.
Take the binary search algorithm of case 2 for example, the recurrence most answers give is T(n)=T(n / 2)+Theta(N). I have no trouble about that.
Here is an answer I find online that looks correct:
T(n)
= T(n/2) + cN
= 2cN + T(n/4)
= 3cN + T(n/8)
= Sigma(i = 0 to lg n - 1) cN
= cNlgn
= Theta(nlgn)
I have trouble understanding how the second last line can result in the last line's answer. Why not represent it in Theta(Nlgn)? And how can N become n in the Theta notation?
N and n feel somewhat connected. I have trouble understanding their connection and how that is applied in the solution.
It seems that N represents the full array length and n is the current chunk size.
But the formulas really use only the initial value, since you start from the full length n = N (for example, T(n/4) is really T(N/4)), so n = N everywhere.
In this case you can replace n with N.
My answer will not be very theoretical, but maybe this "more empirical" approach will help you figure it out. Also check the Master Theorem (analysis of algorithms) for easier analysis of the complexity of recursive algorithms.
Let's solve the binary search first:
By pointer
O(logN) - acts like an iterative binary search, there will be logN calls each having O(1) complexity
Copying the whole array
O(NlogN) - since for each of the logN calls of the function we copy N elements
Copying only the accessed subarray
O(N) - this one is not that obvious, but it can easily be seen that the copied subarrays will be of lengths N, N/2, N/4, N/8, ..., and summing all these terms gives at most 2N
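A quick empirical check of that summation (a sketch; total_copied just sums the subarray lengths N, N/2, N/4, ... copied across the recursive calls):

```python
def total_copied(n):
    """Total elements copied across all binary-search calls under strategy 3."""
    total = 0
    while n >= 1:
        total += n     # this call copies a subarray of length n
        n //= 2        # the recursive call gets half the range
    return total

# For N = 1024: 1024 + 512 + ... + 1 = 2047, i.e. just under 2N.
```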
Now for the merge sort algorithm:
By pointer: O(NlogN) - the same summation method as in the subarray case above; the iterated subarrays will be of lengths N, (N/2, N/2), (N/4, N/4, N/4, N/4), ...
Copying the whole array: O(N^2) - we make about 2N calls of the sorting function, and each has a complexity of O(N) for copying the whole array
Copying only the accessed subarray: O(NlogN) - we copy only the subarrays that we iterate over, so the complexity is the same as for passing by pointer

How to "invert" an array in linear time functionally rather than procedurally?

Say I have an array of integers A such that A[i] = j, and I want to "invert it"; that is, to create another array of integers B such that B[j] = i.
This is trivial to do procedurally in linear time in any language; here's a Python example:
def invert_procedurally(A):
    B = [None] * (max(A) + 1)
    for i, j in enumerate(A):
        B[j] = i
    return B
However, is there any way to do this functionally (as in functional programming, using map, reduce, or functions like those) in linear time?
The code might look something like this:
def invert_functionally(A):
    # We can't modify variables in FP; we can only return a value
    return map(???, A)  # What goes here?
If this is not possible, what is the best (most efficient) alternative when doing functional programming?
In this context are arrays mutable or immutable? Generally I'd expect the mutable case to be about as straightforward as your Python implementation, perhaps aside from a few wrinkles with types. I'll assume you're more interested in the immutable scenario.
This operation inverts the indices and elements, so it's also important to know something about what constitutes valid array indices and impose those same constraints on the elements. Haskell has a class for index constraints called Ix. Any Ix type is ordered and has a range implementation to make an ordered list of indices ranging from one specified index to another. I think this Haskell implementation does what you want.
import Data.Array.IArray

invertArray :: (Ix x) => Array x x -> Array x x
invertArray arr = array (low, high) (zip oldElems newElems)
  where oldElems = elems arr
        newElems = indices arr
        low  = minimum oldElems
        high = maximum oldElems
Under the hood, array builds the result from the (element, index) association pairs, so each old element becomes an index mapped to its old position. That part ought to be linear time, and so is the one-time operation of extracting elements and indices from an array.
Whenever the set of the input array's indices and the set of its elements differ, some elements will be undefined, which for better or worse blows up faster than Python's None. I believe you could overcome the undefined issue by implementing a new Ix a instance over the Maybe monad, for instance.
Quick side note: check out the invPerm example in the Haskell 98 Library Report. It does something similar to invertArray, but assumes up front that the input array's elements are a permutation of its indices.
A solution needing map and 3 operations:
toTuples views the array as a list of tuples (i,e) where i is the index and e the element in the array at that index.
fromTuples creates and loads an array from a list of tuples.
swap takes a tuple (a,b) and returns (b,a).
Hence the solution would be (in Haskellish notation):
invert = fromTuples . map swap . toTuples
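In Python, the same fromTuples . map swap . toTuples pipeline might be sketched like this (a dict plays the role of fromTuples; missing indices come out as None, matching the question's procedural version; if A has duplicate values, the last index wins):

```python
def invert_functionally(A):
    """Invert A (A[i] = j  ->  B[j] = i) as a map over (index, element) pairs."""
    # enumerate(A) is toTuples; the lambda is swap; dict(...) is fromTuples.
    swapped = dict(map(lambda p: (p[1], p[0]), enumerate(A)))
    return [swapped.get(j) for j in range(max(A) + 1)]
```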

Compare two integer arrays with same length

[Description] Given two integer arrays with the same length. Design an algorithm which can judge whether they're the same. The definition of "same" is that, if these two arrays were in sorted order, the elements in corresponding position should be the same.
[Example]
<1 2 3 4> = <3 1 2 4>
<1 2 3 4> != <3 4 1 1>
[Limitation] The algorithm should require constant extra space, and O(n) running time.
(Probably too complex for an interview question.)
(You can use O(N) time to check the min, max, sum, sumsq, etc. are equal first.)
Use no-extra-space radix sort to sort the two arrays in-place. O(N) time complexity, O(1) space.
Then compare them using the usual algorithm. O(N) time complexity, O(1) space.
(Provided (max − min) of the arrays is O(N^k) for some finite k.)
You can try a probabilistic approach: convert each array into a number in some huge base B, modulo some prime P; for example, compute the sum of B^a_i over all i, mod some big-ish P. If both arrays come out to the same number, try again with as many primes as you want. If the results differ on any attempt, the arrays are not equal. If they pass enough challenges, then they are equal, with high probability.
There's a trivial proof for B > N and P > the biggest number: in that case there must be a challenge that unequal arrays cannot meet. This is actually the deterministic approach, though the complexity analysis might be more difficult, depending on whether you view the complexity in terms of the size of the input (as opposed to just the number of elements).
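A hedged Python sketch of this probabilistic check (the specific primes and the base are illustrative choices, not prescribed by the answer; it assumes nonnegative elements):

```python
def probably_same(a, b, trials=5):
    """Compare multisets by evaluating sum of B^x mod P for several primes P."""
    if len(a) != len(b):
        return False
    primes = [1000003, 1000033, 1000037, 1000039, 1000081]  # illustrative choices
    base = len(a) + 1               # B > N, as the answer suggests
    for p in primes[:trials]:
        ha = sum(pow(base, x, p) for x in a) % p
        hb = sum(pow(base, x, p) for x in b) % p
        if ha != hb:                # a single mismatch proves inequality
            return False
    return True                     # equal with high probability
```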
I claim that unless the range of the input is specified, it is IMPOSSIBLE to solve in constant extra space and O(n) running time.
I will be happy to be proven wrong, so that I can learn something new.
Insert all elements from the first array into a hashtable.
Try to insert all elements from the second array into the same hashtable - each element should already be there.
Ok, this is not constant extra space, but it's the best I could come up with at the moment :-). Are there any other constraints imposed on the question, for example on the biggest integer that may be included in the arrays?
A few answers are basically correct, even though they don't look like it. The hash table approach (for one example) has an upper limit based on the range of the type involved rather than the number of elements in the arrays. At least by most definitions, that makes (the upper limit on) the space a constant, although the constant may be quite large.
In theory, you could change that from an upper limit to a truly constant amount of space. Just for example, if you were working in C or C++ with an array of char, you could use something like:
size_t counts[UCHAR_MAX + 1];
Since UCHAR_MAX is a constant, the amount of space used by the array is also a constant.
Edit: I'd note for the record that a bound on the ranges/sizes of items involved is implicit in nearly all descriptions of algorithmic complexity. Just for example, we all "know" that Quicksort is an O(N log N) algorithm. That's only true, however, if we assume that comparing and swapping the items being sorted takes constant time, which can only be true if we bound the range. If the range of items involved is large enough that we can no longer treat a comparison or a swap as taking constant time, then its complexity would become something like O(N log N log R), where R is the range, so log R approximates the number of bits necessary to represent an item.
Is this a trick question? If the authors assumed integers to be within a given range (2^32 etc.) then "extra constant space" might simply be an array of size 2^32 in which you count the occurrences in both lists.
If the integers are unranged, it cannot be done.
You could add each element into a hashmap<Integer, Integer>, with the following rules: Array A is the adder, array B is the remover. When inserting from Array A, if the key does not exist, insert it with a value of 1. If the key exists, increment the value (keep a count). When removing, if the key exists and is greater than 1, reduce it by 1. If the key exists and is 1, remove the element.
Run through array A followed by array B using the rules above. If at any time during the removal phase array B does not find an element, you can immediately return false. If after both the adder and remover are finished the hashmap is empty, the arrays are equivalent.
Edit: The size of the hashtable will be equal to the number of distinct values in the array. Does this fit the definition of constant space?
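The adder/remover scheme could be sketched in Python like this (a plain dict plays the role of the hashmap; names are illustrative):

```python
def same_when_sorted(a, b):
    """True if a and b are equal as multisets (same elements when sorted)."""
    counts = {}
    for x in a:                     # array A is the adder
        counts[x] = counts.get(x, 0) + 1
    for x in b:                     # array B is the remover
        if x not in counts:
            return False            # fail fast during the removal phase
        counts[x] -= 1
        if counts[x] == 0:
            del counts[x]
    return not counts               # empty map => the arrays are equivalent
```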
I imagine the solution will require some sort of transformation that is both associative and commutative and guarantees a unique result for a unique set of inputs. However I'm not sure if that even exists.
public static boolean match(int[] array1, int[] array2) {
    int x, y = 0;
    for (x = 0; x < array1.length; x++) {
        y = x;
        while (array1[x] != array2[y]) {
            if (y + 1 == array1.length)
                return false;
            y++;
        }
        int swap = array2[x];
        array2[x] = array2[y];
        array2[y] = swap;
    }
    return true;
}
For each array, use the counting sort technique to build the count of the number of elements less than or equal to a particular element. Then compare the two auxiliary arrays at every index: if they are equal, the arrays are equal, else they are not. Counting sort requires O(n), and the array comparison at every index is again O(n), so in total it is O(n). The space required is proportional to the range of values. Here is a link to counting sort: http://en.wikipedia.org/wiki/Counting_sort
Given that the ints are in the range -n..+n, a simple way to check for equality may be the following (pseudo code):
// a & b are the arrays
accumulator = 0
arraysize = size(a)
for(i=0 ; i < arraysize; ++i) {
    accumulator = accumulator + a[i] - b[i]
    if abs(accumulator) > ((arraysize - i) * n) { return FALSE }
}
return (accumulator == 0)
accumulator must be able to store integers in the range +- arraysize * n
How 'bout this - XOR all the numbers in both the arrays. If the result is 0, you got a match.
