Mutating an array without extra space

I was given the following question in an interview, and couldn't find the solution.
You are given an array of chars of length n whose first m chars form the "important section" (all chars in this section must be accounted for in the result), where n >= m >= 0.
Without extra space, perform the following process:
Remove all occurrences of A and duplicate all occurrences of B in the important section, and return the resulting subarray of the mutated array. For example, for the array [C,A,X,B,B,F,Q] with n=7, m=5, the output will be [C,X,B,B,B,B]. Note that the mutated array length is 6, since Q was in the redundant section and B was duplicated.
Return -1 if the operation can't be performed.
Examples:
n=2, m=2 , [A,B] => [B,B]
n=2, m=2 , [B,B] => -1 (since the result [B,B,B,B] is larger than the array)
n=3, m=2 , [A,B,C] => [B,B]
n=3, m=3 , [A,B,C] => [B,B,C]
n=3, m=2 , [Z,B,A] => [Z,B,B] (since A was in the redundant section)
I'm looking for a code example. Could this be done in O(n) time complexity?

Scan the array to determine whether the mutated array fits in the available space: count the As and Bs in the important section and check that n - m >= numB - numA.
Walk the array left to right, shifting elements to the left by the number of As seen so far (filling the places of the As).
Walk the array right to left, shifting elements to the right by numB minus the number of Bs seen so far, inserting the additional Bs.

Start from the end of the input array. We will figure out from the back to the front what to fill in.
Look at the last significant character in the input (position m). If it is an A, ignore it. Otherwise, write the symbol at the end of the array. Repeat until you have read all of the input.
This removes the As. Now we will duplicate the Bs.
Start from the beginning of the array. Find the last value you wrote during the steps above (it sits leftmost in the compacted block). If it is a B, write two Bs; if it is something else, just write it once. Repeat. NOTE: if you ever "catch up", needing to write where you still need to read, you don't have enough room and you output -1. Otherwise, return the part of the array from position 1 to the last written position.
Example:
Phase 1: removing A
CAXBBFQ
CAXBBFB
CAXBBBB
CAXBXBB
CAXCXBB
Phase 2: duplicating B
CAXCXBB
CXXCXBB
CXBBXBB
CXBBBBB
^^^^^^
Phase 1 is linear (we read m symbols and write no more than m).
Phase 2 is linear (we read fewer than m symbols and write no more than 2m).
m is at most n, so everything is O(m) and therefore O(n).

The code, with some optimizations, would look something like this, O(n):
// returns length of the relevant part of the mutated array or -1
public static int mutate(char[] a, int m) {
    // delete As and count Bs in the relevant part
    int bCount = 0, position = 0;
    for (int i = 0; i < m; i++) {
        if (a[i] != 'A') {
            if (a[i] == 'B')
                bCount++;
            a[position++] = a[i];
        }
    }
    // check if it is possible
    int n = bCount + position;
    if (n > a.length)
        return -1;
    // duplicate the Bs in the relevant part
    for (int i = position - 1, index = n - 1; i >= 0; i--) {
        if (a[i] != 'B') {
            a[index--] = a[i];
        } else {
            a[index--] = 'B';
            a[index--] = 'B';
        }
    }
    return n;
}


How do you reorganize an array within O(n) runtime & O(1) space complexity?

I'm a 'space-complexity' neophyte and was given a problem.
Suppose I have an array of arbitrary integers:
[1,0,4,2,1,0,5]
How would I reorder this array to have all the zeros at one end:
[1,4,2,1,5,0,0]
...and compute the count of non-zero integers (in this case: 5)?
... in O(n) runtime with O(1) space complexity?
I'm not good at this.
My background is more environmental engineering than computer science so I normally think in the abstract.
I thought I could do a sort, then count the non-zero integers.
Then I thought I could merely do an element-by-element copy as I re-arrange the array.
Then I thought something like a bubble sort, switching neighboring elements till I reached the end with the zeroes.
I thought I could save on the space complexity by shifting array members' addresses, since the array pointer points to the array and its members are offsets from it.
I either enhance the runtime at the expense of the space complexity or vice versa.
What's the solution?
Two-pointer approach will solve this task and keep within the time and memory constraints.
Start by placing one pointer at the end, another at the start of the array. Then decrement the end pointer until you see the first non-zero element.
Now the main loop:
If the start pointer points to zero, swap it with the value pointed to by the end pointer; then decrement the end pointer.
Always increment the start pointer.
Finish when the start pointer becomes greater than or equal to the end pointer.
Finally, return the position of the start pointer - that's the number of nonzero elements.
This is the Swift code for the smart answer provided by #kfx
func putZeroesToLeft(inout nums: [Int]) {
    guard var firstNonZeroIndex: Int = (nums.enumerate().filter { $0.element != 0 }).first?.index else { return }
    for index in firstNonZeroIndex..<nums.count {
        if nums[index] == 0 {
            swap(&nums[firstNonZeroIndex], &nums[index])
            firstNonZeroIndex += 1
        }
    }
}
Time complexity
There are 2 simple (not nested) loops repeated max n times (where n is the length of input array). So time is O(n).
Space complexity
Besides the input array we only use the firstNonZeroIndex int var. So the space is definitely constant: O(1).
As indicated by the other answers, the idea is to have two pointers, p and q, one pointing at the end of the array (specifically at the first nonzero entry from behind) and the other pointing at the beginning of the array. Scan the array with p; each time you hit a 0, swap the elements pointed to by p and q and decrement q (specifically, make it point to the next nonzero entry from behind); increment p and iterate as long as p < q.
In C++, you could do something like this:
void rearrange(std::vector<int>& v) {
    int p = 0, q = v.size() - 1;
    // make q point to the right position
    while (q >= 0 && !v[q]) --q;
    while (p < q) {
        if (!v[p]) { // found a zero element
            std::swap(v[p], v[q]);
            while (q >= 0 && !v[q]) --q; // make q point to the right position
        }
        ++p;
    }
}
Start at the far end of the array and work backwards. First scan until you hit a nonzero (if any). Keep track of the location of this nonzero. Keep scanning. Whenever you encounter a zero -- swap. Otherwise increase the count of nonzeros.
A Python implementation:
def consolidateAndCount(nums):
    # first locate the last nonzero (guard i >= 0 so an all-zero array terminates)
    i = len(nums) - 1
    while i >= 0 and nums[i] == 0:
        i -= 1
    if i < 0:
        # no nonzeros encountered
        return 0
    count = 1  # since a nonzero was encountered
    for j in range(i - 1, -1, -1):
        if nums[j] == 0:
            # move to end
            nums[j], nums[i] = nums[i], nums[j]  # swap is constant space
            i -= 1
        else:
            count += 1
    return count
For example:
>>> nums = [1,0,4,2,1,0,5]
>>> consolidateAndCount(nums)
5
>>> nums
[1, 5, 4, 2, 1, 0, 0]
The suggested answers with 2 pointers and swapping change the order of the non-zero array elements, which conflicts with the example provided. (Although the OP doesn't state that restriction explicitly, so maybe it is irrelevant.)
Instead, go through the list from left to right and keep track of the number of 0s encountered so far.
Set counter = 0 (zeros encountered so far).
In each step, do the following:
Check if the current element is 0 or not.
If the current element is 0, increment the counter.
Otherwise, move the current element counter positions to the left.
Go to the next element.
When you reach the end of the list, overwrite the values from array[end-counter] to the end of the array with 0s.
The number of non-zero integers is the size of the array minus the counted zeros.
This algorithm has O(n) time complexity, as we go through the whole array at most twice (for an array of all 0s; we could modify the update scheme a little to go through it exactly once). It only uses an additional variable for counting, which satisfies the O(1) space constraint.
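A short Python sketch of this counting approach (my illustration, not the poster's code):

def shift_zeros_right(arr):
    # counter = zeros seen so far; each nonzero moves left by that amount
    counter = 0
    for i, value in enumerate(arr):
        if value == 0:
            counter += 1
        else:
            arr[i - counter] = value
    # overwrite the tail with the counted zeros
    for i in range(len(arr) - counter, len(arr)):
        arr[i] = 0
    return len(arr) - counter  # the number of nonzero elements

On [1,0,4,2,1,0,5] this leaves [1,4,2,1,5,0,0] and returns 5, matching the example.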
Start with iterating over the array (say, i) and maintaining count of zeros encountered (say zero_count) till now.
When the element at index i + zero_count is 0, do not increment the iteration counter i; instead increment zero_count.
Otherwise, copy the value at index i + zero_count to the current index i and increment i.
Terminate the loop when i + zero_count reaches the array length.
Set the remaining array elements to 0.
Pseudo code:
zero_count = 0;
i = 0;
while i + zero_count < arr.length
    if (arr[i + zero_count] == 0) {
        zero_count++;
    } else {
        arr[i] = arr[i + zero_count];
        i++;
    }
while i < arr.length
    arr[i] = 0;
    i++;
Additionally, this also preserves the order of the non-zero elements in the array.
You can actually solve a more general problem called the Dutch national flag problem, which is used in quicksort. It partitions an array into 3 parts according to a given mid value: first all numbers less than mid, then all numbers equal to mid, and then all numbers greater than mid.
Then pick mid as infinity and treat 0 as if it were infinity: the three-way partition moves all the nonzero values (less than mid) to the front and all the zeros (equal to mid) to the back.
The pseudocode given by the above link:
procedure three-way-partition(A : array of values, mid : value):
    i ← 0
    j ← 0
    n ← size of A - 1
    while j ≤ n:
        if A[j] < mid:
            swap A[i] and A[j]
            i ← i + 1
            j ← j + 1
        else if A[j] > mid:
            swap A[j] and A[n]
            n ← n - 1
        else:
            j ← j + 1

Applying a function on sorted array

Taken from the google interview question here
Suppose that you have a sorted array of integers (positive or negative). You want to apply a function of the form f(x) = a * x^2 + b * x + c to each element x of the array such that the resulting array is still sorted. Implement this in Java or C++. The input are the initial sorted array and the function parameters (a, b and c).
Do you think we can do it in-place with less than O(n log(n)) time where n is the array size (e.g. apply a function to each element of an array, after that sort the array)?
I think this can be done in linear time. Because the function is quadratic, it forms a parabola, i.e. the values decrease (assuming a positive value for 'a') down to some minimum point and then increase. So the algorithm should iterate over the sorted values until we reach/pass the minimum point of the function (which can be determined by simple differentiation), and then, for each value after the minimum, it should walk backward through the earlier values looking for the correct place to insert that value. Using a linked list would allow items to be moved around in place.
The quadratic transform can cause part of the values to "fold" over the others. You will have to reverse their order, which can easily be done in-place, but then you will need to merge the two sequences.
In-place merge in linear time is possible, but this is a difficult process, normally out of the scope of an interview question (unless for a Teacher's position in Algorithmics).
Have a look at this solution: http://www.akira.ruc.dk/~keld/teaching/algoritmedesign_f04/Artikler/04/Huang88.pdf
I guess that the main idea is to reserve a part of the array where you allow swaps that scramble the data it contains. You use it to perform partial merges on the rest of the array and in the end you sort back the data. (The merging buffer must be small enough that it doesn't take more than O(N) to sort it.)
If a is > 0, then a minimum occurs at x = -b/(2a), and values will be copied to the output array in forward order from [0] to [n-1]. If a < 0, then a maximum occurs at x = -b/(2a) and values will be copied to the output array in reverse order from [n-1] to [0]. (If a == 0, then if b > 0, do a forward copy, if b < 0, do a reverse copy, If a == b == 0, nothing needs to be done). I think the sorted array can be binary searched for the closest value to -b/(2a) in O(log2(n)) (otherwise it's O(n)). Then this value is copied to the output array and the values before (decrementing index or pointer) and after (incrementing index or pointer) are merged into the output array, taking O(n) time.
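A Python sketch of that vertex-split-and-merge idea (my illustration, not the answerer's code; assumes a != 0):

import bisect

def apply_and_sort(arr, a, b, c):
    f = lambda x: a * x * x + b * x + c
    # index of the first element at or past the parabola's vertex
    split = bisect.bisect_left(arr, -b / (2.0 * a))
    # walking outward from the vertex, f ascends on both sides when a > 0
    # and descends on both sides when a < 0, so a plain two-way merge of
    # the two runs produces a monotone result
    ascending = a > 0
    out = []
    i, j = split - 1, split
    while i >= 0 or j < len(arr):
        take_left = j >= len(arr) or (
            i >= 0 and (f(arr[i]) < f(arr[j])) == ascending)
        if take_left:
            out.append(f(arr[i]))
            i -= 1
        else:
            out.append(f(arr[j]))
            j += 1
    return out if ascending else out[::-1]  # a < 0 built it descending

For example, apply_and_sort([-3, -1, 0, 2, 4], 1, 0, 0) yields [0, 1, 4, 9, 16]. This is O(log n) for the search plus O(n) for the merge, using an output array as the paragraph above describes.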
// Note: this merge assumes the transformed array rises and then falls
// (its maximum is interior or at an end), which holds for A < 0.
// For A > 0 the symmetric approach applies: find the minimum and merge
// the two runs that ascend away from it.
static void sortArray(int arr[], int n, int A, int B, int C)
{
    // Apply the equation to all elements
    for (int i = 0; i < n; i++)
        arr[i] = A*arr[i]*arr[i] + B*arr[i] + C;
    // Find the maximum element in the resultant array
    int index = -1;
    int maximum = Integer.MIN_VALUE;
    for (int i = 0; i < n; i++)
    {
        if (maximum < arr[i])
        {
            index = i;
            maximum = arr[i];
        }
    }
    // Use the maximum element as a break point and merge both
    // subarrays using the simple merge step of merge sort
    int i = 0, j = n-1;
    int[] new_arr = new int[n];
    int k = 0;
    while (i < index && j > index)
    {
        if (arr[i] < arr[j])
            new_arr[k++] = arr[i++];
        else
            new_arr[k++] = arr[j--];
    }
    // Merge remaining elements
    while (i < index)
        new_arr[k++] = arr[i++];
    while (j > index)
        new_arr[k++] = arr[j--];
    new_arr[n-1] = maximum;
    // Copy back into the original array
    for (int p = 0; p < n; p++)
        arr[p] = new_arr[p];
}

Given 2 sorted arrays of integers, find the nth largest number in sublinear time [duplicate]

Possible Duplicate:
How to find the kth smallest element in the union of two sorted arrays?
This is a question one of my friends told me he was asked while interviewing, I've been thinking about a solution.
Sublinear time implies logarithmic to me, so perhaps some kind of divide and conquer method. For simplicity, let's say both arrays are the same size and that all elements are unique
I think this is two concurrent binary searches on the subarrays A[0..n-1] and B[0..n-1], which is O(log n).
Given sorted arrays, you know that the nth largest will appear somewhere before or at A[n-1] if it is in array A, or B[n-1] if it is in array B
Consider item at index a in A and item at index b in B.
Perform binary search as follows (pretty rough pseudocode, not taking in account 'one-off' problems):
If a + b > n, then reduce the search set
    if A[a] > B[b] then b = b / 2, else a = a / 2
If a + b < n, then increase the search set
    if A[a] > B[b] then b = 3/2 * b, else a = 3/2 * a (halfway between a and the previous a)
If a + b = n, then the nth largest is max(A[a], B[b])
I believe worst case O(ln n), but in any case definitely sublinear.
I believe that you can solve this problem using a variant on binary search. The intuition behind this algorithm is as follows. Let the two arrays be A and B, and let's assume for the sake of simplicity that they're the same size (this isn't necessary, as you'll see). For each array, we can construct parallel arrays Ac and Bc such that for each index i, Ac[i] is the number of elements in the two arrays that are no larger than A[i], and Bc[i] is the number of elements in the two arrays that are no larger than B[i]. If we could construct these arrays efficiently, then we could find the kth smallest element efficiently by doing binary searches on both Ac and Bc to find the value k. The corresponding entry of A or B is then the kth smallest element. The binary search is valid because the two arrays Ac and Bc are sorted, which I think you can convince yourself of pretty easily.
Of course, this solution doesn't work in sublinear time because it takes O(n) time to construct the arrays Ac and Bc. The question then is - is there some way that we can implicitly construct these arrays? That is, can we determine the values in these arrays without necessarily constructing each element? I think that the answer is yes, using this algorithm. Let's begin by searching array A to see if it has the kth smallest value. We know for a fact that the kth smallest value can't appear in array A after position k (assuming all the elements are distinct). So let's focus just on the first k elements of array A. We'll do a binary search over these values as follows. Start at position k/2; this is the k/2th smallest element in array A. Now do a binary search in array B to find the largest value in B smaller than this value and look at its position in the array; this is the number of elements in B smaller than the current value. If we add up the positions of the elements in A and B, we have the total number of elements in the two arrays smaller than the current element. If this is exactly k, we're done. If this is less than k, then we recurse in the upper half of the first k elements of A, and if this is greater than k we recurse in the lower half of the first k elements of A, etc. Eventually, we'll either find that the kth smallest element is in array A, in which case we're done; otherwise, repeat this process on array B.
The runtime for this algorithm is as follows. The search of array A does a binary search over k elements, which takes O(lg k) iterations. Each iteration costs O(lg n), since we have to do a binary search in B. This means that the total time for this search is O(lg k lg n). The time to do this in array B is the same, so the net runtime for the algorithm is O(lg k lg n) = O(lg² n) = o(n), which is sublinear.
This is quite a similar answer to Kirk's.
Let Find( nth, A, B ) be a function that returns the nth number, with |A| + |B| >= n. This is simple pseudocode that doesn't handle the case where one of the arrays is small (fewer than 3 elements); in the case of a small array, one or two binary searches in the larger array are enough to find the needed element.
Find( nth, A, B )
    If A.last() <= B.first():
        return B[nth - A.size()]
    If B.last() <= A.first():
        return A[nth - B.size()]
    Let a and b be the indexes of the middle elements of A and B
    Assume that A[a] <= B[b] (if not, swap the arrays)
    if nth <= a + b:
        return Find( nth, A, B.first_half(b) )
    return Find( nth - a, A.second_half(a), B )
It is log(|A|) + log(|B|), and because the input arrays can be made to have n elements each, it is O(log(n)) complexity.
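Here is a runnable Python version of the same halving idea (my sketch, with a 0-based k; the slicing keeps it short but copies subarrays, so a true O(log n) version would pass indexes instead):

def find_kth(A, B, k):
    # kth smallest (0-based) of two sorted lists, 0 <= k < len(A) + len(B)
    if not A:
        return B[k]
    if not B:
        return A[k]
    ia, ib = len(A) // 2, len(B) // 2
    if ia + ib < k:
        # discard the lower half of the array with the smaller middle
        if A[ia] > B[ib]:
            return find_kth(A, B[ib + 1:], k - ib - 1)
        return find_kth(A[ia + 1:], B, k - ia - 1)
    # discard the upper half of the array with the larger middle
    if A[ia] > B[ib]:
        return find_kth(A[:ia], B, k)
    return find_kth(A, B[:ib], k)

# find_kth([1, 3, 5], [2, 4, 6], 2) -> 3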
int[] a = new int[] { 11, 9, 7, 5, 3 };
int[] b = new int[] { 12, 10, 8, 6, 4 };
int n = 7;
int result = 0;
if (n > (a.Length + b.Length))
    throw new Exception("n is greater than a.Length + b.Length");
else if (n < (a.Length + b.Length) / 2)
{
    int ai = 0;
    int bi = 0;
    for (int i = n; i > 0; i--)
    {
        // find the highest from a or b
        if (ai < a.Length)
        {
            if (bi < b.Length)
            {
                if (a[ai] > b[bi])
                {
                    result = a[ai];
                    ai++;
                }
                else
                {
                    result = b[bi];
                    bi++;
                }
            }
            else
            {
                result = a[ai];
                ai++;
            }
        }
        else
        {
            if (bi < b.Length)
            {
                result = b[bi];
                bi++;
            }
            else
            {
                // error, n is greater than a.Length + b.Length
            }
        }
    }
}
else
{
    // go in reverse
    int ai = a.Length - 1;
    int bi = b.Length - 1;
    for (int i = a.Length + b.Length - n; i >= 0; i--)
    {
        // find the lowest from a or b
        if (ai >= 0)
        {
            if (bi >= 0)
            {
                if (a[ai] < b[bi])
                {
                    result = a[ai];
                    ai--;
                }
                else
                {
                    result = b[bi];
                    bi--;
                }
            }
            else
            {
                result = a[ai];
                ai--;
            }
        }
        else
        {
            if (bi >= 0)
            {
                result = b[bi];
                bi--;
            }
            else
            {
                // error, n is greater than a.Length + b.Length
            }
        }
    }
}
Console.WriteLine("{0} th highest = {1}", n, result);
Sublinear in what, though? You can't have an algorithm that doesn't check at least n elements; even verifying a solution would require checking that many. But if the size of the problem means the total size of the arrays, then an algorithm that only checks n elements is sublinear.
So I think there's no trick here: start with the list with the smaller starting element and advance until you either:
Reach the nth element; then you're done.
Find that the next element is bigger than the next element in the other list, at which point you switch to the other list.
Run out of elements and switch.

Finding the maximum subsequence of a binary array that has an equal number of 1s and 0s

I found the following problem on the internet, and would like to know how I would go about solving it:
You are given an array containing 0s and 1s. Find an O(n) time and O(1) space algorithm to find the maximum subsequence which has an equal number of 1s and 0s.
Examples:
10101010 -
The longest sub sequence that satisfies the problem is the input itself
1101000 -
The longest sub sequence that satisfies the problem is 110100
Update.
I have to completely rephrase my answer. (If you had upvoted the earlier version, well, you were tricked!)
Let's sum up the easy case again, to get it out of the way:
Find the longest prefix of the bit-string containing an equal number of 1s and 0s.
This is trivial: a simple counter is needed, counting how many more 1s we have than 0s, and iterating over the bitstring while maintaining this. The position where this counter becomes zero for the last time is the end of the longest sought prefix. O(N) time, O(1) space. (I'm completely convinced by now that this is what the original problem asked for.)
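A minimal Python sketch of this prefix version (my illustration):

def longest_balanced_prefix(bits):
    # balance = (#1s - #0s) so far; a prefix is balanced exactly
    # when this counter returns to zero
    balance = 0
    best = 0
    for i, bit in enumerate(bits):
        balance += 1 if bit == 1 else -1
        if balance == 0:
            best = i + 1  # length of the longest balanced prefix so far
    return best

# longest_balanced_prefix([1, 1, 0, 1, 0, 0, 0]) -> 6, matching the 1101000 example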
Now let's switch to the more difficult version of the problem: we no longer require subsequences to be prefixes - they can start anywhere.
After some back and forth thought, I thought there might be no linear algorithm for this. For example, consider the prefix "111111111111111111...". Every single 1 of those may be the start of the longest subsequence, and there is no candidate start position that dominates (i.e. always gives better solutions than) any other position, so we can't throw away any of them (O(N) space), and at any step we must be able to select the best start (one which has an equal number of 1s and 0s up to the current position) out of linearly many candidates, in O(1) time. It turns out this is doable, and easily doable too, since we can select the candidate based on the running sum of 1s (+1) and 0s (-1); this sum takes at most N distinct values, and we can store the first position at which we reach each sum in 2N cells - see pmod's answer below (yellowfog's comments and geometric insight too).
Failing to spot this trick, I had replaced a fast but wrong algorithm with a slow but sure one (since correct algorithms are preferable to wrong ones!):
Build an array A with the accumulated number of 1s from the start to that position, e.g. if the bitstring is "001001001", then the array would be [0, 0, 1, 1, 1, 2, 2, 2, 3]. Using this, we can test in O(1) whether the subsequence (i,j), inclusive, is valid: isValid(i, j) = (j - i + 1 == 2 * (A[j] - A[i - 1])), i.e. it is valid if its length is double the number of 1s in it. For example, the subsequence (3,6) is valid because 6 - 3 + 1 == 2 * (A[6] - A[2]) = 4.
Plain old double loop:
maxSubsLength = 0
for i = 1 to N - 1
    for j = i + 1 to N
        if isValid(i, j) ... # maintain maxSubsLength
    end
end
This can be sped up a bit using some branch-and-bound by skipping i/j sequences which are shorter than the current maxSubsLength, but asymptotically this is still O(n^2). Slow, but with a big plus on its side: correct!
Strictly speaking, the answer is that no such algorithm exists because the language of strings consisting of an equal number of zeros and ones is not regular.
Of course everyone ignores that fact that storing an integer of magnitude n is O(log n) in space and treats it as O(1) in space. :-) Pretty much all big-O's, including time ones, are full of (or rather empty of) missing log n factors, or equivalently, they assume n is bounded by the size of a machine word, which means you're really looking at a finite problem and everything is O(1).
New solution:
Suppose that for an n-bit input bit-array we have a 2*n-sized array to keep positions of bits. The array elements must be large enough to hold the maximum position number; e.g. for a 256-bit input, a 256x2 array of bytes is enough (a byte can hold 255, the maximum position).
Moving from the first position of the bit-array, we put positions into the array, starting from the middle of the array (index n), using this rule:
1. Increment the index when we pass a "1" bit and decrement it when we pass a "0" bit.
2. When we meet an already initialized array element, don't change it; instead note the difference between positions (the current position minus the one taken from the array element) - this is the size of a local maximum sequence.
3. Every time we meet a local maximum, compare it with the global maximum and update the latter if it is smaller.
For example: bit sequence is 0,0,0,1,0,1
initial array index is n
set arr[n] = 0 (position)
bit 0 -> index--
set arr[n-1] = 1
bit 0 -> index--
set arr[n-2] = 2
bit 0 -> index--
set arr[n-3] = 3
bit 1 -> index++
arr[n-2] already contains 2 -> thus, local max seq is [3,2] becomes abs. maximum
will not overwrite arr[n-2]
bit 0 -> index--
arr[n-3] already contains 3 -> thus, local max seq is [4,3] is not abs. maximum
bit 1 -> index++
arr[n-2] already contains 2 -> thus, local max seq is [5,2] is abs. max
Thus, we pass through the whole bit array only once.
Does this solve the task?
input:
    n - number of bits
    a[n] - input bit-array
glob_seq_size = 0;
track_pos[2*n] = {0,};
ind = n;
/* start from position 1, since zero means track_pos[x] is not
   initialized; run one extra iteration so the full prefix
   (all n bits) is also checked */
for (i = 1; i < n+2; i++) {
    if (track_pos[ind]) {
        seq_size = i - track_pos[ind];
        if (glob_seq_size < seq_size) {
            /* store as intermediate result */
            glob_seq_size = seq_size;
            glob_pos_from = track_pos[ind];
            glob_pos_to = i;
        }
    } else {
        track_pos[ind] = i;
    }
    if (i <= n) {
        if (a[i-1])
            ind++;
        else
            ind--;
    }
}
output:
    glob_seq_size - length of maximum sequence
    glob_pos_from - start position of max sequence
    glob_pos_to - end position of max sequence
In this thread ( http://discuss.techinterview.org/default.asp?interview.11.792102.31 ), poster A.F. has given an algorithm that runs in O(n) time and uses O(sqrt(n log n)) bits.
Brute force: start with the maximum length of the array and count the 0s and 1s. If the count of 0s equals the count of 1s, you are finished. Else reduce the search length by 1 and run the check on all subsequences of the reduced length, and so on. Stop when the length reaches 0.
As was pointed out by user "R..", there is no solution, strictly speaking, unless you ignore the "log n" space complexity. In the following, I will consider that the array length fits in a machine register (e.g. a 64-bit word) and that a machine register has size O(1).
The important point to notice is that if there are more 1's than 0's, then the maximum subsequence that you are looking for necessarily includes all the 0's, and that many 1's. So here is the algorithm:
Notations: the array has length n, indices are counted from 0 to n-1.
First pass: count the number of 1's (c1) and 0's (c0). If c1 = c0 then your maximal subsequence is the entire array (end of algorithm). Otherwise, let d be the digit which appears less often (d = 0 if c0 < c1, otherwise d = 1).
Compute m = min(c0, c1) * 2. This is the size of the subsequence you are looking for.
Second pass: scan the array to find the index j of the first occurrence of d.
Compute k = max(j, n - m). The subsequence starts at index k and has length m.
Note that there could be several solutions (several subsequences of maximal length which match the criterion).
In plain words: assuming that there are more 1's than 0's, then I consider the smallest subsequence which contains all the 0's. By definition, that subsequence is surrounded by bunches of 1's. So I just grab enough 1's from the sides.
Edit: as was pointed out, this does not work... The "important point" is actually wrong.
Try something like this:
/* bit(n) is a macro that returns the nth bit, 0 or 1. len is number of bits */
int c[2] = {0,0};
int d, i, a, b, p;
for (i = 0; i < len; i++) c[bit(i)]++;
d = c[1] < c[0];
if (c[d] == 0) return; /* all bits identical; fail */
for (i = 0; bit(i) != d; i++);
a = b = i;
for (p = 0; i < len; i++) {
    p += 2*bit(i) - 1;
    if (!p) b = i;
}
if (a == b) { /* account for case where we need bits before the first d */
    b = len - 1;
    a -= abs(p);
}
printf("maximal subsequence consists of bits %d through %d\n", a, b);
Completely untested but modulo stupid mistakes it should work. Based on my reply to Thomas's answer which failed in certain cases.
New Solution:
Space complexity of O(1) and time complexity O(n^2)
int iStart = 0, iEnd = 0;
int[] arrInput = { 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0 };
for (int i = 0; i < arrInput.Length; i++)
{
    int iCurrEndIndex = i;
    int iSum = 0;
    for (int j = i; j < arrInput.Length; j++)
    {
        iSum = (arrInput[j] == 1) ? iSum + 1 : iSum - 1;
        if (iSum == 0)
        {
            iCurrEndIndex = j;
        }
    }
    if ((iEnd - iStart) < (iCurrEndIndex - i))
    {
        iEnd = iCurrEndIndex;
        iStart = i;
    }
}
I am not sure whether the array you are referring to is an int array of 0's and 1's or a bitarray.
If its about bitarray, here is my approach:
int isEvenBitCount(int n)
{
    // n ... decimal equivalent of the input binary sequence
    int cnt1 = 0, cnt0 = 0;
    while (n) {
        if (n & 0x01) { printf("1 "); cnt1++; }
        else { printf("0 "); cnt0++; }
        n = n >> 1;
    }
    printf("\n");
    return cnt0 == cnt1;
}

int main()
{
    int i = 40, j = 25, k = 35;
    isEvenBitCount(i) ? printf("-->Yes\n") : printf("-->No\n");
    isEvenBitCount(j) ? printf("-->Yes\n") : printf("-->No\n");
    isEvenBitCount(k) ? printf("-->Yes\n") : printf("-->No\n");
}
With the use of bitwise operations, the time complexity is almost O(1) as well.

Find the Smallest Integer Not in a List

An interesting interview question that a colleague of mine uses:
Suppose that you are given a very long, unsorted list of unsigned 64-bit integers. How would you find the smallest non-negative integer that does not occur in the list?
FOLLOW-UP: Now that the obvious solution by sorting has been proposed, can you do it faster than O(n log n)?
FOLLOW-UP: Your algorithm has to run on a computer with, say, 1GB of memory
CLARIFICATION: The list is in RAM, though it might consume a large amount of it. You are given the size of the list, say N, in advance.
If the datastructure can be mutated in place and supports random access then you can do it in O(N) time and O(1) additional space. Just go through the array sequentially and for every index write the value at the index to the index specified by value, recursively placing any value at that location to its place and throwing away values > N. Then go again through the array looking for the spot where value doesn't match the index - that's the smallest value not in the array. This results in at most 3N comparisons and only uses a few values worth of temporary space.
def smallest_missing(array):
    N = len(array)
    # Pass 1, move every value to the position of its value
    for cursor in range(N):
        target = array[cursor]
        while target < N and target != array[target]:
            new_target = array[target]
            array[target] = target
            target = new_target
    # Pass 2, find the first location where the index doesn't match the value
    for cursor in range(N):
        if array[cursor] != cursor:
            return cursor
    return N
Here's a simple O(N) solution that uses O(N) space. I'm assuming that we are restricting the input list to non-negative numbers and that we want to find the first non-negative number that is not in the list.
Find the length of the list; let's say it is N.
Allocate an array of N booleans, initialized to all false.
For each number X in the list, if X is less than N, set the X'th element of the array to true.
Scan the array starting from index 0, looking for the first element that is false. If you find the first false at index I, then I is the answer. Otherwise (i.e. when all elements are true) the answer is N.
In practice, the "array of N booleans" would probably be encoded as a "bitmap" or "bitset" represented as a byte or int array. This typically uses less space (depending on the programming language) and allows the scan for the first false to be done more quickly.
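A short Python sketch of steps 1 to 4 (my illustration; a production version would use the bitset encoding just mentioned):

def smallest_missing(numbers):
    n = len(numbers)                  # step 1
    seen = [False] * n                # step 2
    for x in numbers:                 # step 3
        if x < n:
            seen[x] = True
    for i, flag in enumerate(seen):   # step 4
        if not flag:
            return i
    return n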
This is how / why the algorithm works.
Suppose that the N numbers in the list are not distinct, or that one or more of them is greater than N. This means that there must be at least one number in the range 0 .. N - 1 that is not in the list. So the problem of finding the smallest missing number reduces to the problem of finding the smallest missing number less than N. This means that we don't need to keep track of numbers that are greater than or equal to N ... because they won't be the answer.
The alternative to the previous paragraph is that the list is a permutation of the numbers from 0 .. N - 1. In this case, step 3 sets all elements of the array to true, and step 4 tells us that the first "missing" number is N.
The computational complexity of the algorithm is O(N) with a relatively small constant of proportionality. It makes two linear passes through the list, or just one pass if the list length is known to start with. There is no need to hold the entire list in memory, so the algorithm's asymptotic memory usage is just what is needed to represent the array of booleans; i.e. O(N) bits.
(By contrast, algorithms that rely on in-memory sorting or partitioning assume that you can represent the entire list in memory. In the form the question was asked, this would require O(N) 64-bit words.)
#Jorn comments that steps 1 through 3 are a variation on counting sort. In a sense he is right, but the differences are significant:
A counting sort requires an array of (at least) Xmax - Xmin counters, where Xmax is the largest number in the list and Xmin is the smallest. Each counter has to be able to represent N states; i.e. assuming a binary representation it has to have an integer type of (at least) ceiling(log2(N)) bits.
To determine the array size, a counting sort needs to make an initial pass through the list to determine Xmax and Xmin.
The minimum worst-case space requirement is therefore ceiling(log2(N)) * (Xmax - Xmin) bits.
By contrast, the algorithm presented above simply requires N bits in the worst and best cases.
However, this analysis leads to the intuition that if the algorithm made an initial pass through the list looking for a zero (and counting the list elements if required), it would give a quicker answer using no space at all if it found the zero. It is definitely worth doing this if there is a high probability of finding at least one zero in the list. And this extra pass doesn't change the overall complexity.
EDIT: I've changed the description of the algorithm to use "array of booleans" since people apparently found my original description using bits and bitmaps to be confusing.
Since the OP has now specified that the original list is held in RAM and that the computer has only, say, 1GB of memory, I'm going to go out on a limb and predict that the answer is zero.
1GB of RAM means the list can have at most 134,217,728 numbers in it. But there are 2^64 = 18,446,744,073,709,551,616 possible numbers. So the probability that zero is in the list is 1 in 137,438,953,472.
In contrast, my odds of being struck by lightning this year are 1 in 700,000. And my odds of getting hit by a meteorite are about 1 in 10 trillion. So I'm about ten times more likely to be written up in a scientific journal due to my untimely death by a celestial object than the answer not being zero.
As pointed out in other answers you can do a sort, and then simply scan up until you find a gap.
You can improve the algorithmic complexity to O(N) and keep O(N) space by using a modified QuickSort where you eliminate partitions which are not potential candidates for containing the gap.
On the first partition phase, remove duplicates.
Once the partitioning is complete look at the number of items in the lower partition
Is this value equal to the value used for creating the partition?
If so then it implies that the gap is in the higher partition.
Continue with the quicksort, ignoring the lower partition
Otherwise the gap is in the lower partition
Continue with the quicksort, ignoring the higher partition
This saves a large number of computations.
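A recursive Python sketch of this elimination idea (my illustration; it builds new lists for clarity where the real thing would partition in place, and it assumes duplicates have already been removed):

def first_missing(xs, base=0):
    # smallest integer >= base not in xs; xs holds distinct values >= base
    if not xs:
        return base
    m = (len(xs) + 1) // 2          # capacity of the lower region
    pivot = base + m
    lower = [x for x in xs if x < pivot]
    if len(lower) == m:
        # [base, pivot) is fully occupied, so the gap is higher up
        return first_missing([x for x in xs if x >= pivot], pivot)
    # some value in [base, pivot) is missing: ignore the rest
    return first_missing(lower, base)

# first_missing([0, 1, 3, 4, 9]) -> 2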
To illustrate one of the pitfalls of O(N) thinking, here is an O(N) algorithm that uses O(1) space.
for i in [0..2^64):
    if i not in list: return i
print "no 64-bit integers are missing"
Since the numbers are all 64 bits long, we can use radix sort on them, which is O(n). Sort 'em, then scan 'em until you find what you're looking for.
If the smallest number is zero, scan forward until you find a gap. If the smallest number is not zero, the answer is zero.
For a space-efficient method where all values are distinct, you can do it in O(k) space and O(k*log(N)*N) time. It's space efficient, there's no data moving, and all operations are elementary (adding, subtracting).
set U = N; L = 0
First partition the number space into k regions, region i covering L + (i/k)*(U-L) up to L + ((i+1)/k)*(U-L).
Find how many numbers (count{i}) are in each region. (N*k steps)
Find the first region (h) that isn't full. That means count{h} < upper_limit{h}. (k steps)
if h - count{h-1} = 1 you've got your answer
set U = count{h}; L = count{h-1}
goto 2
This can be improved using hashing (thanks to Nic for this idea):
same as above, except
First partition the number space into k regions, region i covering L + (i/k)*(U-L) up to L + ((i+1)/k)*(U-L)
increment count{j} using j = (number - L) / ((U-L)/k) (if L < number < U)
find the first region (h) that isn't full
if count{h} = 1, h is your answer
set U = maximum value in region h; L = minimum value in region h
This will run in O(log(N)*N).
I'd just sort them then run through the sequence until I find a gap (including the gap at the start between zero and the first number).
In terms of an algorithm, something like this would do it:
def smallest_not_in_list(list):
    sort(list)
    if list[0] != 0:
        return 0
    for i = 1 to list.last:
        # use > rather than != so duplicates don't trigger a false gap
        if list[i] > list[i-1] + 1:
            return list[i-1] + 1
    if list[list.last] == 2^64 - 1:
        assert ("No gaps")
    return list[list.last] + 1
Of course, if you have a lot more memory than CPU grunt, you could create a bitmask of all possible 64-bit values and just set the bits for every number in the list. Then look for the first 0-bit in that bitmask. That turns it into an O(n) operation in terms of time but pretty damned expensive in terms of memory requirements :-)
I doubt you could improve on O(n) since I can't see a way of doing it that doesn't involve looking at each number at least once.
The algorithm for that one would be along the lines of:
def smallest_not_in_list(list):
    bitmask = mask_make(2^64) // might take a while :-)
    mask_clear_all (bitmask)
    for i = 1 to list.last:
        mask_set (bitmask, list[i])
    for i = 0 to 2^64 - 1:
        if mask_is_clear (bitmask, i):
            return i
    assert ("No gaps")
Sort the list, look at the first and second elements, and start going up until there is a gap.
We could use a hash table to hold the numbers. Once all numbers are done, run a counter from 0 till we find the lowest. A reasonably good hash will hash and store in constant time, and retrieves in constant time.
for every i in X                      // one scan, Θ(n)
    hashtable.put(i, i);              // O(1) each
low = 0;
while (hashtable.get(low) <> null)    // at most n+1 probes
    low++;
print low;
The worst case is when there are n elements in the array and they are {0, 1, ... n-1}, in which case the answer will be obtained at n, still keeping it O(n).
You can do it in O(n) time and O(1) additional space, although the hidden factor is quite large. This isn't a practical way to solve the problem, but it might be interesting nonetheless.
For every unsigned 64-bit integer (in ascending order) iterate over the list until you find the target integer or you reach the end of the list. If you reach the end of the list, the target integer is the smallest integer not in the list. If you reach the end of the 64-bit integers, every 64-bit integer is in the list.
Here it is as a Python function:
def smallest_missing_uint64(source_list):
    the_answer = None
    target = 0
    while target < 2**64:
        target_found = False
        for item in source_list:
            if item == target:
                target_found = True
        if not target_found and the_answer is None:
            the_answer = target
        target += 1
    return the_answer
This function is deliberately inefficient to keep it O(n). Note especially that the function keeps checking target integers even after the answer has been found. If the function returned as soon as the answer was found, the number of times the outer loop ran would be bound by the size of the answer, which is bound by n. That change would make the run time O(n^2), even though it would be a lot faster.
Thanks to egon, swilden, and Stephen C for my inspiration. First, we know the bounds of the goal value because it cannot be greater than the size of the list. Also, a 1GB list could contain at most 134217728 (128 * 2^20) 64-bit integers.
Hashing part
I propose using hashing to dramatically reduce our search space. First, square root the size of the list. For a 1GB list, that's N=11,586. Set up an integer array of size N. Iterate through the list, and take the square root* of each number you find as your hash. In your hash table, increment the counter for that hash. Next, iterate through your hash table. The first bucket you find that is not equal to its max size defines your new search space.
Bitmap part
Now set up a regular bit map equal to the size of your new search space, and again iterate through the source list, filling out the bitmap as you find each number in your search space. When you're done, the first unset bit in your bitmap will give you your answer.
This will be completed in O(n) time and O(sqrt(n)) space.
(*You could use something like bit shifting to do this a lot more efficiently, and just vary the number and size of buckets accordingly.)
Well, if there is only one missing number in a list of numbers, the easiest way to find it is to sum the series and subtract each value in the list. The final value is the missing number.
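A tiny sketch of that single-missing-number case (my illustration):

def single_missing(nums, n):
    # nums holds every integer in [0, n] except one; the Gauss sum
    # n*(n+1)//2 minus the actual sum is exactly the missing value
    return n * (n + 1) // 2 - sum(nums)

# single_missing([0, 1, 2, 4, 5], 5) -> 3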
int i = 0;
while (i < Array.Length)
{
    if (Array[i] == i + 1)
    {
        i++;
    }
    if (i < Array.Length)
    {
        if (Array[i] <= Array.Length)
        { // swap
            int temp = Array[i];
            int AnoTemp = Array[temp - 1];
            Array[temp - 1] = temp;
            Array[i] = AnoTemp;
        }
        else
            i++;
    }
}
for (int j = 0; j < Array.Length; j++)
{
    if (Array[j] > Array.Length)
    {
        Console.WriteLine(j + 1);
        j = Array.Length;
    }
    else if (j == Array.Length - 1)
        Console.WriteLine("Not Found !!");
}
Here's my answer written in Java:
Basic Idea:
1- Loop through the array, throwing away duplicate positives, zeros, and negative numbers while summing up the rest, tracking the maximum positive number as well, and keeping the unique positive numbers in a Map.
2- Compute the sum as max * (max+1)/2.
3- Find the difference between the sums calculated in steps 1 & 2.
4- Loop again from 1 to the minimum of [sum difference, max] and return the first number that is not in the map populated in step 1.
public static int solution(int[] A) {
    if (A == null || A.length == 0) {
        throw new IllegalArgumentException();
    }
    int sum = 0;
    Map<Integer, Boolean> uniqueNumbers = new HashMap<Integer, Boolean>();
    int max = A[0];
    for (int i = 0; i < A.length; i++) {
        if (A[i] < 0) {
            continue;
        }
        if (uniqueNumbers.get(A[i]) != null) {
            continue;
        }
        if (A[i] > max) {
            max = A[i];
        }
        uniqueNumbers.put(A[i], true);
        sum += A[i];
    }
    int completeSum = (max * (max + 1)) / 2;
    for (int j = 1; j <= Math.min((completeSum - sum), max); j++) {
        if (uniqueNumbers.get(j) == null) { // O(1)
            return j;
        }
    }
    // All negative case
    if (uniqueNumbers.isEmpty()) {
        return 1;
    }
    return 0;
}
As Stephen C smartly pointed out, the answer must be a number no larger than the length of the array. I would then find the answer by binary search. This optimizes the worst case (so the interviewer can't catch you in a 'what if' pathological scenario). In an interview, do point out that you are doing this to optimize for the worst case.
The way to use binary search is to subtract the number you are looking for from each element of the array, and check for negative results.
I like the "guess zero" approach. If the numbers were random, zero is highly probable. If the "examiner" set a non-random list, then add one and guess again:

LowNum = 0
i = 0
do forever {
    if i == N then leave   /* processed entire array */
    if array[i] == LowNum {
        LowNum++
        i = 0
    }
    else {
        i++
    }
}
display LowNum

The worst case is n*N with n=N, but in practice n is highly likely to be a small number (e.g. 1).
I am not sure if I got the question. But for a list like 1,2,3,5,6 with missing number 4, the missing number can be found in O(n) by:
(n+2)(n+1)/2-(n+1)n/2
EDIT: sorry, I guess I was thinking too fast last night. Anyway, the second part should actually be replaced by sum(list), which is where the O(n) comes from. The formula reveals the idea behind it: for n sequential integers, the sum should be (n+1)*n/2. If there is a missing number, the sum would be equal to the sum of (n+1) sequential integers minus the missing number.
Thanks for pointing out the fact that I was leaving out some middle pieces in my mind.
Well done Ants Aasma! I thought about the answer for about 15 minutes and independently came up with an answer in a similar vein of thinking to yours:
#define SWAP(x,y) { numerictype_t tmp = x; x = y; y = tmp; }

int minNonNegativeNotInArr (numerictype_t * a, size_t n) {
    int m = n;
    for (int i = 0; i < m;) {
        if (a[i] >= m || a[i] < i || a[i] == a[a[i]]) {
            m--;
            SWAP (a[i], a[m]);
            continue;
        }
        if (a[i] > i) {
            int t = a[i]; /* copy the index first: SWAP(a[i], a[a[i]])
                             would re-evaluate a[i] mid-swap and write
                             to the wrong slot */
            SWAP (a[i], a[t]);
            continue;
        }
        i++;
    }
    return m;
}
m represents "the current maximum possible output given what I know about the first i inputs and assuming nothing else about the values until the entry at m-1".
This value of m will be returned only if (a[i], ..., a[m-1]) is a permutation of the values (i, ..., m-1). Thus if a[i] >= m, or if a[i] < i, or if a[i] == a[a[i]], we know that m is the wrong output and must be at least one element lower. So decrementing m and swapping a[i] with a[m] we can recurse.
If this is not true but a[i] > i then knowing that a[i] != a[a[i]] we know that swapping a[i] with a[a[i]] will increase the number of elements in their own place.
Otherwise a[i] must be equal to i in which case we can increment i knowing that all the values of up to and including this index are equal to their index.
The proof that this cannot enter an infinite loop is left as an exercise to the reader. :)
The Dafny fragment from Ants' answer shows why the in-place algorithm may fail. The requires precondition states that the value of each item must not go beyond the bounds of the array.
method AntsAasma(A: array<int>) returns (M: int)
    requires A != null && forall N :: 0 <= N < A.Length ==> 0 <= A[N] < A.Length;
    modifies A;
{
    // Pass 1, move every value to the position of its value
    var N := A.Length;
    var cursor := 0;
    while (cursor < N)
    {
        var target := A[cursor];
        while (0 <= target < N && target != A[target])
        {
            var new_target := A[target];
            A[target] := target;
            target := new_target;
        }
        cursor := cursor + 1;
    }
    // Pass 2, find first location where the index doesn't match the value
    cursor := 0;
    while (cursor < N)
    {
        if (A[cursor] != cursor)
        {
            return cursor;
        }
        cursor := cursor + 1;
    }
    return N;
}
Paste the code into the validator with and without the forall ... clause to see the verification error. The second error is a result of the verifier not being able to establish a termination condition for the Pass 1 loop. Proving this is left to someone who understands the tool better.
Here's an answer in Java that does not modify the input and uses O(N) time and N bits plus a small constant overhead of memory (where N is the size of the list):
int smallestMissingValue(List<Integer> values) {
    BitSet bitset = new BitSet(values.size() + 1);
    for (int i : values) {
        if (i >= 0 && i <= values.size()) {
            bitset.set(i);
        }
    }
    return bitset.nextClearBit(0);
}
def solution(A):
    A = [x for x in A if x >= 0]
    if len(A) == 0:
        return 1
    maxi = max(A)
    if maxi <= len(A):
        maxi = len(A)
    target = ['X' for x in range(maxi + 1)]
    for number in A:
        target[number] = number
    count = 1
    while count < maxi + 1:
        if target[count] == 'X':
            return count
        count += 1
    return target[count - 1] + 1
Got 100% for the above solution.
1) Filter out negatives and zero
2) Sort / distinct
3) Visit the array
Complexity: O(N) or O(N * log(N))
Using Java 8:
public int solution(int[] A) {
    int result = 1;
    boolean found = false;
    A = Arrays.stream(A).filter(x -> x > 0).sorted().distinct().toArray();
    //System.out.println(Arrays.toString(A));
    for (int i = 0; i < A.length; i++) {
        result = i + 1;
        if (result != A[i]) {
            found = true;
            break;
        }
    }
    if (!found && result == A.length) {
        // result is larger than the max element in the array
        result++;
    }
    return result;
}
An unordered_set can be used to store all the positive numbers; then we can iterate from 1 up to the size of the unordered_set and find the first number that does not occur.
int firstMissingPositive(vector<int>& nums) {
    unordered_set<int> fre;
    // storing each positive number in a hash
    for (int i = 0; i < nums.size(); i += 1)
    {
        if (nums[i] > 0)
            fre.insert(nums[i]);
    }
    // the answer lies in [1, fre.size() + 1], so check each
    // candidate 'i' in turn
    int i = 1;
    while (fre.find(i) != fre.end())
        i += 1;
    return i;
}
Solution in basic JavaScript:

var a = [1, 3, 6, 4, 1, 2];

function findSmallest(a) {
    // try each candidate i in turn; m counts 1 + the occurrences of i in a
    // (i runs to length + 1 so that e.g. [1, 2, 3] yields 4)
    for (var i = 1; i <= a.length + 1; i++) {
        var m = 1;
        for (var j = 0; j < a.length; j++) {
            if (i === a[j]) {
                m++;
            }
        }
        if (m === 1) { // i never occurred in a
            return i;
        }
    }
}

console.log(findSmallest(a));

Hope this helps someone.
With Python it is not the most efficient, but it is correct:
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import datetime

# write your code in Python 3.6
def solution(A):
    MIN = 0
    MAX = 1000000
    possible_results = range(MIN, MAX)
    for i in possible_results:
        next_value = (i + 1)
        if next_value not in A:
            return next_value
    return 1

test_case_0 = [2, 2, 2]
test_case_1 = [1, 3, 44, 55, 6, 0, 3, 8]
test_case_2 = [-1, -22]
test_case_3 = [x for x in range(-10000, 10000)]
test_case_4 = [x for x in range(0, 100)] + [x for x in range(102, 200)]
test_case_5 = [4, 5, 6]

print("---")
a = datetime.datetime.now()
print(solution(test_case_0))
print(solution(test_case_1))
print(solution(test_case_2))
print(solution(test_case_3))
print(solution(test_case_4))
print(solution(test_case_5))
def solution(A):
    A.sort()
    j = 1
    for i, elem in enumerate(A):
        if j < elem:
            break
        elif j == elem:
            j += 1
            continue
        else:
            continue
    return j
This can help:
0- A is [5, 3, 2, 7];
1- Define B with Length = A.Length; (O(1))
2- Initialize B's cells with 1; (O(n))
3- For each item in A: if (item < B.Length) then B[item] = -1; (O(n))
4- The answer is the smallest index in B such that B[index] != -1. (O(n))
