Goal
I would like to write an algorithm (in C) which returns TRUE or FALSE (1 or 0) depending on whether the array A given as input can "sum and/or sub" to x (see below for clarification). Note that all values of A are integers in the range [1, x-1] that were sampled uniformly at random.
Clarification and examples
By "sum and/or sub", I mean placing "+" and "-" in front of each element of the array and summing the result. Let's call this function SumSub.
int SumSub (int* A,int x)
{
...
}
SumSub({2,7,5},10)
should return TRUE as 7-2+5=10. You will note that the first element of A can also be taken as negative so that the order of elements in A does not matter.
SumSub({2,7,5,2},10)
should return FALSE as there is no way to "sum and/or sub" the elements of the array to reach the value of x. Please note, this means that all elements of A must be used.
Complexity
Let n be the length of A. The complexity of the problem is of order O(2^n) if one has to explore all possible combinations of pluses and minuses. However, some combinations are more likely than others and are therefore worth exploring first (hoping the output will be TRUE). Typically, the combination which subtracts all other elements from the largest number is impossible (as all elements of A are lower than x). Also, if n > x, it makes no sense to try adding all the elements of A (each element is at least 1, so the total would exceed x).
Question
How should I go about writing this function?
Unfortunately, the NP-complete subset-sum problem can be reduced to your problem, so in the general case an exponential solution can't be avoided.
The brute-force solution is indeed exponential, as you said. BUT with the given range [1, x-1] for the numbers in A[], you can make the solution polynomial. There is a very simple dynamic programming solution.
Its cost is of the order:
Time complexity: O(n^2 * x)
Memory complexity: O(n^2 * x)
where n = number of elements in A[]
You need to use a dynamic programming approach for this.
You know that every signed sum that can be made lies in the range [-n*x, n*x]. Create a 2D array of size (n+1) x (2*n*x + 1). Let's call this dp[][]
dp[i][j] = taking all elements of A[] from [0..i-1], each with a + or - sign, whether it is possible to make the value j
so
dp[10][3] = 1 means that using the first 10 elements of A[] we CAN create the value 3
dp[10][3] = 0 means that using the first 10 elements of A[] we can NOT create the value 3
Here is a runnable version of the idea. Conceptually j runs over [-n*x, n*x]; the code offsets it by n*x so that array indices stay non-negative, and the length n is passed explicitly:
#include <stdbool.h>
#include <string.h>

int SumSub(int *A, int n, int x)   /* n = length of A */
{
    int off = n * x;               /* sums in [-n*x, n*x] map to [0, 2*n*x] */
    int width = 2 * n * x + 1;
    bool dp[n + 1][width];         /* dp[i][j + off]: can A[0..i-1] sign-sum to j? */
    /* For large n*x, allocate dp on the heap instead of the stack. */
    memset(dp, 0, sizeof dp);
    dp[0][off] = true;             /* the empty prefix sums to 0 */
    for (int i = 1; i <= n; i++) {
        int val = A[i - 1];
        for (int j = 0; j < width; j++) {
            bool plus  = (j - val >= 0)    && dp[i - 1][j - val]; /* '+' on A[i-1] */
            bool minus = (j + val < width) && dp[i - 1][j + val]; /* '-' on A[i-1] */
            dp[i][j] = plus || minus;
        }
    }
    return dp[n][x + off];
}
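A quick sanity check against the examples from the question (a minimal driver; it assumes the function above is in scope):

#include <stdio.h>

int main(void)
{
    int a[] = {2, 7, 5};
    int b[] = {2, 7, 5, 2};
    printf("%d\n", SumSub(a, 3, 10));  /* expect 1: 7 - 2 + 5 = 10 */
    printf("%d\n", SumSub(b, 4, 10));  /* expect 0 */
    return 0;
}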
Unfortunately this is NP-complete even when x is restricted to the value 0, so don't expect a polynomial-time algorithm. To show this I'll give a simple reduction from the NP-hard Partition Problem, which asks whether a given multiset of positive integers can be partitioned into two parts having equal sums:
Suppose we have an instance of the Partition Problem consisting of n positive integers B_1, ..., B_n. Create from this an instance of your problem in which A_i = B_i for each 1 <= i <= n, and set x = 0.
Clearly if there is a partition of B into two parts C and D having equal sums, then there is also a solution to the instance of your problem: Put a + in front of every number in C, and a - in front of every number in D (or the other way round). Since C and D have equal sums, this expression must equal 0.
OTOH, if the solution to the instance of your problem that we just created is YES (TRUE), then we can easily create a partition of B into two parts having equal sums: just put all the positive terms in one part (say, C), and all the negative terms (without the preceding - of course) in the other (say, D). Since we know that the total value of the expression is 0, it must be that the sum of the (positive) numbers in C is equal to the (negated) sum of the numbers in D.
Thus a YES to either problem instance implies a YES to the other, which in turn means that a NO to either implies a NO to the other; the two problem instances always have the same answer. So if it were possible to solve your problem in polynomial time, it would be possible to solve the NP-hard Partition Problem in polynomial time too: construct the above instance of your problem, solve it with your poly-time algorithm, and report the result it gives.
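To make the reduction concrete, here is a minimal C sketch. The helper names are mine, and the inner solver is deliberately brute force, since the point is only the instance construction (A_i = B_i, x = 0):

/* Is there a +/- signing of B[0..n-1] that sums to target?
 * Exponential on purpose; any SumSub solver could be swapped in. */
static int signs_to(const int *B, int n, long target)
{
    if (n == 0) return target == 0;
    return signs_to(B, n - 1, target - B[n - 1])   /* '+' on B[n-1] */
        || signs_to(B, n - 1, target + B[n - 1]);  /* '-' on B[n-1] */
}

/* Partition(B) is YES iff some signing of B sums to 0.
 * Example: B = {3, 1, 2} gives +3 - 1 - 2 = 0, i.e. {3} vs {1, 2}. */
int can_partition(const int *B, int n)
{
    return signs_to(B, n, 0);
}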
Related
Given two arrays, A and B, both of size m. The numbers in the arrays are in the range [-n, n]. I need to find an algorithm that returns the intersection of A and B in O(m).
For example:
Assume that
A={1,2,14,14,5}
and
B={2,2,14,3}
the algorithm needs to return 2 and 14.
I've tried defining two arrays of size n, one standing for the positive numbers and the other for the negative numbers, where each index of the arrays represents the number.
I thought I could scan one of the arrays, say A, and mark each of its elements with 1 in those arrays, and then check the elements of B directly against them.
But it turns out that I can only use the arrays after initializing them, which takes O(n).
What can I do to improve the algorithm?
This can be done with a set:
A_set = set(A)
print([b for b in B if b in A_set])
The construction of the set happens in O(m); checking an element of B takes expected O(1), so all of B is checked in O(m) and the total runtime complexity is O(m).
You will also need O(m) space to store the set.
If you can create an array or bit vector of 2n+1 Booleans in O(m) time or better, then you don't have to initialize the array before you use it:
Create an array S of 2n+1 elements.
For each element a in A, set S[a+n] = false
For each element b in B, set S[b+n] = true
Return the elements a of A for which S[a+n] == true
If you can't create that array in that time, then maybe you can use #fafl's answer, although that is only expected O(m) time unless the set implementation is very special.
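Here is a minimal C sketch of that trick, assuming the allocation itself is O(1) and all values lie in [-n, n] (the duplicate-suppression reset at the end is my addition):

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* Prints the intersection of A and B in O(m) time. S is allocated but
 * deliberately NOT initialized: every slot read in the final pass was
 * already written in the first pass, so garbage contents never matter. */
void print_intersection(const int *A, const int *B, int m, int n)
{
    bool *S = malloc((2 * n + 1) * sizeof *S);  /* no memset: that's the point */
    for (int i = 0; i < m; i++) S[A[i] + n] = false;
    for (int i = 0; i < m; i++) S[B[i] + n] = true;
    for (int i = 0; i < m; i++)
        if (S[A[i] + n]) {
            printf("%d\n", A[i]);
            S[A[i] + n] = false;  /* report each common value once */
        }
    free(S);
}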
The thing to be noticed is that there are duplicates in both collections. (I know that's not what a set is, sets have unique entries, but the stated example in the question suggests to me that these are multisets.) Expecting n <= 10^6: build an array indexed from -n to n, which can function as a hash map; 0 is the index for -n, and in general i is the index for i-n. Iterate over array A, and on encountering a value, say c, do hash[c+n]++. Now take an empty set for the result. Iterate over array B; for each value c in B, if hash[c+n] > 0, put c in the result set and decrement the count. Like this one can find the intersection in O(m).
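A sketch of that counting variant in C (the calloc of the counts array is what stands in for the hash map here; the names are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Prints the multiset intersection of A and B, values in [-n, n]. */
void print_multiset_intersection(const int *A, const int *B, int m, int n)
{
    int *count = calloc(2 * n + 1, sizeof *count);  /* count[c + n] = copies of c in A */
    for (int i = 0; i < m; i++) count[A[i] + n]++;
    for (int i = 0; i < m; i++)
        if (count[B[i] + n] > 0) {   /* a matching copy in A is still unused */
            printf("%d\n", B[i]);
            count[B[i] + n]--;
        }
    free(count);
}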
It is recommended to manually implement a bit-set to solve the problem.
For instance, with A = {1, 2, 14, 14, 5} and B = {2, 2, 14, 3}:
min(A, B) = 1 and max(A, B) = 14,
therefore the bit-set runs from min = 1 to max = 14 (positions 1..14, left to right):
A = 11001000000001 (bits 1, 2, 5, 14 set)
B = 01100000000001 (bits 2, 3, 14 set)
A AND B = 01000000000001, which is 2 and 14.
I have an algorithm for Sequential search of an unsorted array:
SequentialSearch(A[0..n-1], K)
    i = 0
    while i < n and A[i] != K do
        i = i + 1
    if i < n then return i
    else return -1
Where we have an input array A[0...n-1] and a search key K
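For reference, a direct C translation of that pseudocode (a minimal sketch):

/* Returns the index of the first occurrence of K in A[0..n-1], or -1. */
int sequential_search(const int *A, int n, int K)
{
    int i = 0;
    while (i < n && A[i] != K)
        i++;
    return (i < n) ? i : -1;
}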
I know that the worst case is n, because we would have to search the entire array, hence n comparisons, O(n)
I know that the best case is 1, since that would mean the first item we check is the one we want (or the array consists of identical items); either way it's O(1)
But I have no idea how to calculate the average case. The answer my textbook gives is:
C_avg(n) = (p/n)[1 + 2 + ... + i + ... + n] + n(1 - p)
Is there a general formula I can follow when I see an algorithm like this one, to calculate it?
C_avg(n) = (p/n)[1 + 2 + ... + i + ... + n] + n(1 - p)
p here is the probability that the search key is found in the array; since there are n positions, p/n is the probability of finding the key at a particular index. We are essentially taking a weighted average: each position weighs in with 1 comparison, 2 comparisons, and so on up to n comparisons. Because we have to take all inputs into account, the second part, n(1-p), covers the inputs whose key doesn't exist in the array (probability 1-p); those take n comparisons, as we search through the entire array. Summing the arithmetic series gives the closed form C_avg(n) = p(n+1)/2 + n(1-p).
You'd need to consider the input cases, something like equivalence classes of input, which depends on the context of the algorithm. If none of those things are known, then assuming the input is an array of random integers, the average case would probably be O(n). This is because, roughly, you have no way of proving to a useful extent how often your query will be found in an array of N integer values in the range of about -32k to 32k.
More formally, let X be a discrete random variable denoting the number of elements of the array A that need to be scanned. There are n elements, and since all positions are equally likely for randomly generated inputs, X ~ Uniform(1, n) given that the search key is found in the array (which happens with probability p); otherwise all the elements need to be scanned, so X = n (with probability 1-p).
Hence, P(X = x) = (p/n).I{x < n} + ((p/n) + (1-p)).I{x = n} for x = 1, ..., n, where I{x = n} is the indicator function and has value 1 iff x = n, otherwise 0.
The average time complexity of the algorithm is the expected time taken to execute it on an arbitrary input. By definition,
E[X] = sum over x = 1..n of x.P(X = x) = (p/n)(1 + 2 + ... + n) + n(1-p) = p(n+1)/2 + n(1-p)
[Figure: how the time taken for searching the array changes with n and p.]
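A quick empirical check of that formula (a minimal Monte Carlo sketch; n, p, and the trial count are arbitrary choices):

#include <stdio.h>
#include <stdlib.h>

/* Estimates the average number of comparisons of sequential search and
 * compares it with C_avg = p*(n+1)/2 + n*(1-p). With probability p the key
 * sits at a uniformly random index (i+1 comparisons); otherwise it is
 * absent and all n elements are scanned. */
int main(void)
{
    const int n = 100, trials = 1000000;
    const double p = 0.7;
    long long comparisons = 0;
    srand(12345);
    for (int t = 0; t < trials; t++) {
        if ((double)rand() / RAND_MAX < p)
            comparisons += rand() % n + 1;  /* key present at a random index */
        else
            comparisons += n;               /* key absent: full scan */
    }
    printf("empirical: %.3f\n", (double)comparisons / trials);
    printf("formula:   %.3f\n", p * (n + 1) / 2 + n * (1 - p));
    return 0;
}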
This is an interview question. Say you have an array of four ints named A, and also this function:
int check(int x, int y) {
    if (x <= y) return 1;
    return 0;
}
Now, you want to create a function that will sort A, and you can use only the function check for comparisons. How many calls to check do you need?
(It is OK to return a new array for the result.)
I found that I can do this in 5 calls. Is it possible to do it with fewer calls (in the worst case)?
This is how I thought of doing it (pseudo code):
int[4] B=new int[4];
/*
The idea: put minimum values in even cells and maximum values in odd cells using check.
Then swap (if needed) between minimum values and also between maximum values.
And finally, swap the second element (max of minimums)
and the third element (min of maximums) if needed.
*/
if (check(A[0], A[1]) == 1) { // A[0] <= A[1]
    B[0] = A[0];
    B[2] = A[1];
}
else {
    B[0] = A[1];
    B[2] = A[0];
}
if (check(A[2], A[3]) == 1) { // A[2] <= A[3]
    B[1] = A[2];
    B[3] = A[3];
}
else {
    B[1] = A[3];
    B[3] = A[2];
}
if (check(B[0], B[1]) == 0) { // B[0] > B[1]
    swap(B[0], B[1]);
}
if (check(B[2], B[3]) == 0) { // B[2] > B[3]
    swap(B[2], B[3]);
}
if (check(B[1], B[2]) == 0) { // B[1] > B[2]
    swap(B[1], B[2]);
}
There are 24 possible orderings of a 4-element list (4 factorial). If you do only 4 comparisons, then you can get only 4 bits of information, which is enough to distinguish between just 16 different cases; that isn't enough to cover all 24 possible outputs. Therefore, 5 comparisons is the optimal worst case.
In The Art of Computer Programming, p. 183 (Section 3.5.1), Donald Knuth gives a table of lower and upper bounds on the minimum numbers of comparisons:
The ceil(lg n!) entry is the "information theoretic" lower bound, whereas B(n) is the maximum number of comparisons in a binary insertion sort. Since the lower and upper bounds are equal for n = 4 (both are 5), 5 comparisons are needed.
The information theoretic bound is derived by recognizing that there are n! possible orderings of n unique items. We distinguish these cases by asking S yes-no questions of the form "is X < Y?". These questions form a tree which has at most 2^S leaves. We need n! <= 2^S; solving for S gives ceil(lg(n!)). For n = 4, that is ceil(lg 24) = ceil(4.585...) = 5.
Incidentally, you can use Stirling's approximation to show that this implies that sorting requires Omega(n log n) comparisons: lg(n!) = n lg n - n lg e + O(log n).
The rest of the section goes on to describe a number of approaches to creating these bounds and studying this question, though work is ongoing (see, for instance, Peczarski (2011)).
I am looking for an algorithm to solve the following problem: We are given an integer array of size n which contains k (0 < k < n) elements that each occur exactly once. Every other integer in the array occurs an even number of times. The output should be any one of the k unique numbers. k is a fixed number and not part of the input.
An example would be the input [1, 2, 2, 4, 4, 2, 2, 3] with both 1 and 3 being a correct output.
Most importantly, the algorithm should run in O(n) time and require only O(1) additional space.
edit: There has been some confusion regarding whether there is only one unique integer or multiple. I apologize for this. The correct problem is that there is an arbitrary but fixed number of them. I have updated the original question above.
"Dante." gave a good answer for the case that there are at most two such numbers. This link also provides a solution for three. "David Eisenstat" commented that it is also possible to do for any fixed k. I would be grateful for a solution.
There is a standard algorithm to solve such problems using the XOR operator:
Time Complexity = O(n)
Space Complexity = O(1)
Suppose your input array contains only one element that occurs an odd number of times, and the rest occur an even number of times. We take advantage of the following fact:
Any XOR expression over 0s and 1s equals 0 whenever the number of 1s is even, regardless of order (the 0s never matter).
That is,
0^1^... = 0 as long as the number of 1s is even,
and the 0s and 1s can occur in any order.
All the numbers that occur an even number of times contribute, at each bit position, an even number of 1s, which cancel out; only the number which occurs once has its bits left over when we take the XOR of all elements of the array, because
0 (from numbers occurring an even number of times) ^ 1 (from the number occurring once) = 1
0 (from numbers occurring an even number of times) ^ 0 (from the number occurring once) = 0
As you can see, only the bits of the number occurring once are preserved.
This means that when given such an array, the XOR of all the elements is the number which occurs only once.
So the algorithm for an array of length n is:
result = array[0] ^ array[1] ^ ... ^ array[n-1]
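In C, that is a single pass (a minimal sketch):

/* XOR of all elements; if exactly one element occurs an odd number of
 * times, this returns that element, since all even-count elements cancel. */
int find_single_odd(const int *array, int n)
{
    int result = 0;
    for (int i = 0; i < n; i++)
        result ^= array[i];
    return result;
}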
Different Scenario
As the OP mentioned, the input can also be an array in which two numbers occur only once and the rest occur an even number of times.
This is solved using the same logic as above, with a small difference.
Idea of algorithm:
If you take the XOR of all the elements, the bits of the elements occurring an even number of times will cancel to 0, which means:
the result will have a 1 exactly at those bit positions where the two numbers occurring only once differ.
We will use the above idea.
Now we focus on any one bit of the resulting XOR which is 1 and clear the rest. The result is a mask which will allow us to differentiate between the two required numbers: because the chosen bit is 1, the two numbers differ at that position, so one has a 0 there and the other a 1. This means that ANDing the mask with one of the numbers gives 0, and with the other does not.
Since it is very easy to isolate the rightmost set bit, we take it from the XOR result as
B = result & ~(result - 1)
Now traverse through the array once; if array[i] & B is nonzero, fold the number into the variable number_1 as
number_1 = number_1^array[i]
otherwise
number_2 = number_2^array[i]
Because the remaining numbers occur an even number of times, their bits automatically cancel out within whichever group they fall into.
So the algorithm is:
1. Take the XOR of all elements; call it xor.
2. Isolate the rightmost set bit of xor and store it in B.
3. Do the following:
number_1 = 0, number_2 = 0;
for (i = 0 to n-1)
{
    if (array[i] & B)
        number_1 = number_1 ^ array[i];
    else
        number_2 = number_2 ^ array[i];
}
The number_1 and number_2 are the required numbers.
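Putting it together in C (a minimal sketch; returning the pair through out parameters is just one option):

/* Finds the two values that occur an odd number of times in a[0..n-1],
 * assuming every other value occurs an even number of times. */
void find_two_odd(const int *a, int n, int *out1, int *out2)
{
    unsigned xor_all = 0;
    for (int i = 0; i < n; i++)
        xor_all ^= (unsigned)a[i];
    unsigned B = xor_all & ~(xor_all - 1);  /* rightmost bit where the two differ */
    unsigned n1 = 0, n2 = 0;
    for (int i = 0; i < n; i++) {
        if ((unsigned)a[i] & B)
            n1 ^= (unsigned)a[i];  /* group with that bit set */
        else
            n2 ^= (unsigned)a[i];  /* group with that bit clear */
    }
    *out1 = (int)n1;
    *out2 = (int)n2;
}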
Here's a Las Vegas algorithm that, given k, the exact number of elements that occur an odd number of times, reports all of them in expected time O(n k) (read: linear-time when k is O(1)) and space O(1) words, assuming that "give me a uniform random word" and "give me the number of 1 bits set in this word (popcount)" are constant-time operations. I'm pretty sure that I'm not the first person to come up with this algorithm (and I'm not even sure that I'm remembering all of the refinements), but I've reached the limits of my patience trying to find it.
The central technique is called random restrictions. Essentially what we do is to filter the input randomly by value, in the hope that we retain exactly one odd-count element. We apply the classic XOR algorithm to the filtered array and check the result; if it succeeded, then we pretend to add it to the array, to make it even-count. Repeat until all k elements are found.
The filtration process goes like this. Treat each input word x as a binary vector of length w (it doesn't matter what w is). Compute a random binary matrix A of size ceil(1 + lg k) by w and a random binary vector b of length ceil(1 + lg k). We filter the input by retaining those x such that Ax = b, where the left-hand side is a matrix-vector multiplication mod 2. In implementation, A is represented as ceil(1 + lg k) row vectors a1, a2, .... We compute the bits of Ax as popcount(a1 & x) mod 2, popcount(a2 & x) mod 2, .... (This is convenient because we can short-circuit the comparison with b, which shaves a factor lg k from the running time.)
The analysis is to show that, in a given pass, we manage with constant probability to single out one of the odd-count elements. First note that, for some fixed x, the probability that Ax = b is 2^(-ceil(1 + lg k)) = Θ(1/k). Given that Ax = b, for all y ≠ x, the probability that Ay = b is less than 2^(-ceil(1 + lg k)). Thus, the expected number of elements that accompany x is less than 1/2, so with probability more than 1/2, x is unique in the filtered input. Sum over all k odd-count elements (these events are disjoint), and the probability is Θ(1).
Here's a deterministic linear-time algorithm for k = 3. Let the odd-count elements be a, b, c. Accumulate the XOR of the array, which is s = a ^ b ^ c. For each bit i, observe that, if a[i] == b[i] == c[i], then s[i] == a[i] == b[i] == c[i]. Make another pass through the array, accumulating the XOR of the lowest bit set in s ^ x over all elements x. The even-count elements contribute nothing again. Two of the odd-count elements contribute the same bit and cancel each other out. Thus, the lowest bit set in this second XOR is a position where exactly one of the odd-count elements differs from s. We can use the restriction method above to find it, then the k = 2 method to find the others.
The question title says "the unique integer", but the question body says there can be more than one unique element.
If there is in fact only one non-duplicate: XOR all the elements together. The duplicates all cancel, because they come in pairs (or higher multiples of 2), so the result is the unique integer.
See Dante's answer for an extension of this idea that can handle two unique elements. It can't be generalized to more than that.
Perhaps for k unique elements, we could use k accumulators to track sum(a[i]^k), i.e. a[i], a[i]^2, etc. This probably only works for Faster algorithm to find unique element between two arrays?, not this case where the duplicates are all in one array. IDK if an XOR of squares, cubes, etc. would be any use for resolving things.
Track the counts for each element and only return the elements with a count of 1. This can be done with a hash map. The example below tracks the result using a hash set while it's still building the counts map. It's still O(n), and while less efficient, I think it's slightly more instructive.
Javascript with jsfiddle http://jsfiddle.net/nmckchsa/
function findUnique(arr) {
    var uniq = new Map();
    var result = new Set();
    // iterate through the array
    for (var i = 0; i < arr.length; i++) {
        var v = arr[i];
        // add value to the map that contains counts
        if (uniq.has(v)) {
            uniq.set(v, uniq.get(v) + 1);
            // count is greater than 1, remove from the set
            result.delete(v);
        } else {
            uniq.set(v, 1);
            // add a possibly unique value to the set
            result.add(v);
        }
    }
    // set to array, O(n)
    var a = [], x = 0;
    result.forEach(function(v) { a[x++] = v; });
    return a;
}
alert(findUnique([1,2,3,0,1,2,3,1,2,3,5,4,4]));
EDIT: Since the non-unique numbers appear an even number of times, #PeterCordes suggested a more elegant set toggle.
Here's how that would look.
function findUnique(arr) {
    var result = new Set();
    // iterate through the array
    for (var i = 0; i < arr.length; i++) {
        var v = arr[i];
        if (result.has(v)) { // even occurrences
            result.delete(v);
        } else { // odd occurrences
            result.add(v);
        }
    }
    // set to array, O(n)
    var a = [], x = 0;
    result.forEach(function(v) { a[x++] = v; });
    return a;
}
JSFiddle http://jsfiddle.net/hepsyqyw/
Assuming you have an input array: [2,3,4,2,4]
Output: 3
In Ruby, you can do something as simple as this:
[2,3,4,2,4].inject(0) {|xor, v| xor ^ v}
Create an array counts that has INT_MAX slots, with each element initialized to zero.
For each element in the input list, increment counts[element] by one. (edit: actually, you will need to do counts[element] = (counts[element] + 1) % 2, or else you might overflow the value for really ridiculously large values of N. It's acceptable to do this kind of modulus counting because all duplicate items appear an even number of times.)
Iterate through counts until you find a slot that contains "1". Return the index of that slot.
Step 2 is O(N) time. Steps 1 and 3 take up a lot of memory and a lot of time, but neither one is proportional to the size of the input list, so they're still technically O(1).
(note: this assumes that integers have a minimum and maximum value, as is the case for many programming languages.)
[Description] Given two integer arrays of the same length, design an algorithm which can judge whether they're the same. The definition of "same" is that, if the two arrays were in sorted order, the elements in corresponding positions should be equal.
[Example]
<1 2 3 4> = <3 1 2 4>
<1 2 3 4> != <3 4 1 1>
[Limitation] The algorithm should require constant extra space, and O(n) running time.
(Probably too complex for an interview question.)
(You can use O(N) time to check the min, max, sum, sumsq, etc. are equal first.)
Use no-extra-space radix sort to sort the two arrays in-place. O(N) time complexity, O(1) space.
Then compare them using the usual algorithm. O(N) time complexity, O(1) space.
(Provided (max - min) of the arrays is O(N^k) for some finite k.)
You can try a probabilistic approach: convert each array into a number in some huge base B modulo some prime P, for example by computing the sum of B^a_i over all i, mod some big-ish P. If both arrays come out to the same number, try again with as many primes as you want. If the values differ at any attempt, the arrays are not equal. If they pass enough challenges, they are equal with high probability.
There's a trivial proof for B > N and P larger than the biggest encoded number: then two different multisets cannot produce the same value, so there must be a challenge that unequal arrays cannot meet. This is actually the deterministic approach, though the complexity analysis might be more difficult, depending on how one views complexity in terms of the size of the input values (as opposed to just the number of elements).
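A minimal sketch of that check in C (the base and single prime are arbitrary choices; a real run would repeat with several primes, and negative values would need an offset first):

#include <stdbool.h>
#include <stdint.h>

/* (base^e) mod m by fast exponentiation. */
static uint64_t powmod(uint64_t base, uint64_t e, uint64_t m)
{
    uint64_t r = 1;
    base %= m;
    while (e > 0) {
        if (e & 1) r = r * base % m;
        base = base * base % m;
        e >>= 1;
    }
    return r;
}

/* Order-independent hash of an array: sum of B^a[i] mod P. */
static uint64_t multiset_hash(const int *a, int n, uint64_t B, uint64_t P)
{
    uint64_t h = 0;
    for (int i = 0; i < n; i++)
        h = (h + powmod(B, (uint64_t)a[i], P)) % P;
    return h;
}

bool probably_equal(const int *a, const int *b, int n)
{
    /* P < 2^31 keeps the products in powmod inside 64 bits. */
    const uint64_t B = 1000003, P = 2147483647;
    return multiset_hash(a, n, B, P) == multiset_hash(b, n, B, P);
}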
I claim that, unless the range of the input is specified, it is IMPOSSIBLE to solve in constant extra space and O(n) running time.
I will be happy to be proven wrong, so that I can learn something new.
Insert all elements from the first array into a hashtable.
Try to insert all elements from the second array into the same hashtable: on each insert, the element should already be there.
OK, this is not constant extra space, but it's the best I could come up with at the moment :-). Are there any other constraints imposed in the question, for example on the biggest integer that may be included in the array?
A few answers are basically correct, even though they don't look like it. The hash table approach (for one example) has an upper limit based on the range of the type involved rather than the number of elements in the arrays. At least by most definitions, that makes the (upper limit on the) space a constant, although the constant may be quite large.
In theory, you could change that from an upper limit to a true constant amount of space. Just for example, if you were working in C or C++, and it was an array of char, you could use something like:
size_t counts[UCHAR_MAX + 1];
Since UCHAR_MAX is a constant, the amount of space used by the array is also a constant.
Edit: I'd note for the record that a bound on the ranges/sizes of items involved is implicit in nearly all descriptions of algorithmic complexity. Just for example, we all "know" that Quicksort is an O(N log N) algorithm. That's only true, however, if we assume that comparing and swapping the items being sorted takes constant time, which can only be true if we bound the range. If the range of items involved is large enough that we can no longer treat a comparison or a swap as taking constant time, then its complexity would become something like O(N log N log R), where R is the range, so log R approximates the number of bits necessary to represent an item.
Is this a trick question? If the authors assumed integers to be within a given range (2^32 etc.) then "extra constant space" might simply be an array of size 2^32 in which you count the occurrences in both lists.
If the integers are unranged, it cannot be done.
You could add each element into a hashmap<Integer, Integer>, with the following rules: array A is the adder, array B is the remover. When inserting from array A, if the key does not exist, insert it with a value of 1; if the key exists, increment the value (keep a count). When removing, if the key exists and its value is greater than 1, decrement it by 1; if the key exists and its value is 1, remove the element.
Run through array A followed by array B using the rules above. If at any time during the removal phase an element of B is not found, you can immediately return false. If after both the adder and remover are finished the hashmap is empty, the arrays are equivalent.
Edit: The size of the hashtable will be equal to the number of distinct values in the array. Does this fit the definition of constant space?
I imagine the solution will require some sort of transformation that is both associative and commutative and guarantees a unique result for a unique set of inputs. However I'm not sure if that even exists.
public static boolean match(int[] array1, int[] array2) {
    int x, y = 0;
    for (x = 0; x < array1.length; x++) {
        y = x;
        // scan the not-yet-matched tail of array2 for array1[x]
        while (array1[x] != array2[y]) {
            if (y + 1 == array1.length)
                return false;  // array1[x] has no partner left in array2
            y++;
        }
        // move the match to position x so the matched prefixes stay aligned
        int swap = array2[x];
        array2[x] = array2[y];
        array2[y] = swap;
    }
    return true;
}
For each array, use the counting sort technique to build the count of elements less than or equal to each particular value. Then compare the two built auxiliary arrays at every index; if they are equal, the arrays are equal, otherwise they are not. Counting sort requires O(n), and the array comparison at every index is again O(n), so in total it's O(n); the space required is proportional to the value range of the arrays. Here is a link to counting sort: http://en.wikipedia.org/wiki/Counting_sort.
Given that the ints are in the range [-n, n], a simple necessary-condition check (equal sums) may be the following (pseudo code):
// a & b are the arrays
accumulator = 0
arraysize = size(a)
for (i = 0; i < arraysize; ++i) {
    accumulator = accumulator + a[i] - b[i]
    // each remaining iteration can change the accumulator by at most 2n
    if abs(accumulator) > ((arraysize - 1 - i) * 2 * n) { return FALSE }
}
return (accumulator == 0)
The accumulator must be able to store integers in the range +- arraysize * 2 * n.
How 'bout this: XOR all the numbers in both the arrays. If the result is nonzero, the arrays can't match (though a result of 0 by itself doesn't prove a match).