Convert each element of an array to the max element [duplicate]

Given an array A with n integers. In one turn one can apply the following operation to any consecutive subarray A[l..r]: assign to every A[i] (l <= i <= r) the median of the subarray A[l..r].
Let max be the maximum integer of A. We want to know the minimum number of operations needed to change A into an array of n integers, each with value max.
For example, let A = [1, 2, 3]. We want to change it to [3, 3, 3]. We can do this in two operations: first for subarray A[2..3] (after that A equals [1, 3, 3]), then for A[1..3].
Also, the median is defined for an array A as follows. Let B be the same array A, but sorted in non-decreasing order. The median of A is B[m] (1-based indexing), where m equals (n div 2) + 1. Here 'div' is integer division. So, for a sorted array with 5 elements, the median is the 3rd element, and for a sorted array with 6 elements, it is the 4th element.
Since the maximum value of N is 30, I thought of brute forcing the result. Could there be a better solution?

You can double the size of the subarray containing the maximum element in each iteration. After the first iteration, there is a subarray of size 2 containing the maximum. Then apply your operation to a subarray of size 4 containing those 2 elements, giving you a subarray of size 4 containing the maximum. Then apply it to a size-8 subarray and so on. You fill the array in ceil(log2(N)) operations, which is optimal. If N is 30, five operations are enough.
This is optimal in the worst case (i.e. when only one element is the maximum), since it sets the highest possible number of elements in each iteration.
Update 1: I noticed I messed up the 4s and 8s a bit. Corrected.
Update 2: here's an example. Array size 10, start state:
[6 1 5 9 3 2 0 7 4 8]
To get two nines, run op on subarray of size two containing the nine. For instance A[4…5] gets you:
[6 1 5 9 9 2 0 7 4 8]
Now run on size four subarray that contains 4…5, for instance on A[2…5] to get:
[6 9 9 9 9 2 0 7 4 8]
Now on subarray of size 8, for instance A[1…8], get:
[9 9 9 9 9 9 9 9 4 8]
Doubling now would get us 16 nines, but we have only 10 positions, so round off with A[1…10] to get:
[9 9 9 9 9 9 9 9 9 9]
Update 3: since this is only optimal in the worst case, it is actually not an answer to the original question, which asks for a way of finding the minimal number of operations for all inputs. I misinterpreted the sentence about brute forcing to be about brute forcing with the median operations, rather than about finding the minimum sequence of operations.
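For concreteness, here is a small Python simulation of the doubling strategy (my own sketch, not part of this answer; the median helper follows the (n div 2)+1 definition from the question):
def median(xs):
    b = sorted(xs)
    return b[len(b) // 2]              # B[(n div 2) + 1] with 1-based indexing

def fill_with_max_by_doubling(a):
    # my own sketch of the strategy described above
    a = list(a)
    n = len(a)
    mx = max(a)
    lo = hi = a.index(mx)              # current block [lo, hi] known to hold the maximum
    ops = 0
    while a.count(mx) < n:
        size = min(2 * (hi - lo + 1), n)    # double the block, clamped to the array size
        lo = max(0, min(lo, n - size))      # pick a window of that size containing [lo, hi]
        hi = lo + size - 1
        a[lo:hi + 1] = [median(a[lo:hi + 1])] * size
        ops += 1
    return ops, a

# the example from Update 2: should need 4 operations (window sizes 2, 4, 8, 10)
print(fill_with_max_by_doubling([6, 1, 5, 9, 3, 2, 0, 7, 4, 8]))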

This is a problem from the CodeChef Long Contest. Since the contest is already over, I am pasting the problem setter's approach (source: CodeChef contest editorial page).
"Any state of the array can be represented as a binary mask where each 1 bit means that the corresponding number is equal to the max and 0 otherwise. You can run a DP with state R[mask] and O(n) transitions. You can prove (or just believe) that the number of states will not be big, of course if you run a good DP. The state of our DP will be the mask of numbers that are equal to max. Of course, it makes sense to use an operation only on a subarray [l; r] where the number of 1-bits is at least as large as the number of 0-bits in the submask [l; r], because otherwise nothing will change. Also, you should notice that if the left bound of your operation is l, it is good to make the operation only with the maximal possible r (this gives a number of transitions equal to O(n)). It was also useful for C++ coders to use a map structure to represent all states."
The C++ code is:
#include <cstdio>
#include <iostream>
using namespace std;

int bc[1<<15];                  // popcount table for 15-bit values
const int M = (1<<15) - 1;

void setMin(int& ret, int c)
{
    if (c < ret) ret = c;
}

// mask: bitmask of positions already equal to the maximum
void doit(int n, int mask, int currentSteps, int& currentBest)
{
    int numMax = bc[mask>>15] + bc[mask&M];   // popcount of the 30-bit mask
    if (numMax == n) {
        setMin(currentBest, currentSteps);
        return;
    }
    if (currentSteps + 1 >= currentBest)
        return;
    if (currentSteps + 2 >= currentBest)
    {
        // only one more step allowed: possible iff at least half the array is already max
        if (numMax * 2 >= n) {
            setMin(currentBest, 1 + currentSteps);
        }
        return;
    }
    // prune: fewer max elements than the doubling strategy guarantees after this many steps
    if (numMax < (1<<currentSteps)) return;
    for (int i = 0; i < n; i++)
    {
        int a = 0, b = 0;
        int c = mask;
        for (int j = i; j < n; j++)
        {
            c |= (1<<j);
            if (mask & (1<<j)) b++;   // b: max elements in [i..j]
            else a++;                 // a: non-max elements in [i..j]
            if (b >= a) {             // the median of [i..j] would be the maximum
                doit(n, c, currentSteps + 1, currentBest);
            }
        }
    }
}

int v[32];

void solveCase() {
    int n;
    scanf(" %d", &n);
    int maxElement = 0;
    for (int i = 0; i < n; i++) {
        scanf(" %d", v+i);
        if (v[i] > maxElement) maxElement = v[i];
    }
    int mask = 0;
    for (int i = 0; i < n; i++) if (v[i] == maxElement) mask |= (1<<i);
    // upper bound: ceil(log2(n)) operations (the doubling strategy)
    int ret = 0, p = 1;
    while (p < n) {
        ret++;
        p *= 2;
    }
    doit(n, mask, 0, ret);
    printf("%d\n", ret);
}

int main() {
    for (int i = 0; i < (1<<15); i++) {
        bc[i] = bc[i>>1] + (i&1);
    }
    int cases;
    scanf(" %d", &cases);
    while (cases--) solveCase();
}

The problem setter's approach has exponential complexity. It is pretty good for N=30, but not so for larger sizes. I think it's more interesting to find a polynomial-time solution, and I found one with O(N^4) complexity.
This approach uses the fact that an optimal solution starts with some group of consecutive maximal elements and extends only this single group until the whole array is filled with maximal values.
To prove this fact, take 2 starting groups of consecutive maximal elements and extend each of them in an optimal way until they merge into one group. Suppose that group 1 needs X turns to grow to size M, group 2 needs Y turns to grow to the same size M, and on turn X + Y + 1 these groups merge. The result is a group of size at least M * 4. Now instead of turn Y for group 2, make an additional turn X + 1 for group 1. In this case the group sizes are at least M * 2 and at most M / 2 (even if we count initially maximal elements that might be included in step Y). After this change, on turn X + Y + 1 the merged group size is at least M * 4 from the first group's extension alone; add to this at least one element from the second group. So extending a single group produces a larger group in the same number of steps (and if Y > 1, it even requires fewer steps). Since this works for equal group sizes (M), it works even better for non-equal groups. This proof may be extended to the case of several groups (more than two).
To work with a single group of consecutive maximal elements, we need to keep track of only two values: the starting and ending positions of the group. This means it is possible to use a triangular matrix to store all possible groups, which allows a dynamic programming algorithm.
Pseudo-code:
For each group of consecutive maximal elements in the original array:
    Mark the corresponding element in the matrix and clear other elements
    For each matrix diagonal, starting with the one containing this element:
        For each marked element in this diagonal:
            Retrieve the current number of turns from this matrix element
            (use the indexes of this matrix element to initialize p1 and p2)
            p2 = end of the group
            p1 = start of the group
            Decrease p1 while it is possible to keep the median at the maximum value
            (now all values between p1 and p2 are assumed to be maximal)
            While p2 < N:
                Check if the number of maximal elements in the array is >= N/2
                If this is true, compare the current number of turns with the best result
                    and update it if necessary
                (an additional matrix with the number of maximal values between each pair
                    of points may be used to count elements to the left of p1 and to the
                    right of p2)
                Look at position [p1, p2] in the matrix. Mark it, and if it contains a
                    larger number of turns, update it
                Repeat:
                    Increase p1 while it points to a maximal value
                    Increment p1 (to skip one non-maximum value)
                    Increase p2 while it is possible to keep the median at the maximum value
                while the median is not at the maximum value
To keep the algorithm simple, I didn't mention special cases when a group starts at position 0 or ends at position N, skipped initialization, and didn't make any optimizations.


Recover original array from all subset sums

You are given all subset sums of an array. You are then supposed to recover the original array from the subset sums provided.
Every element in the original array is guaranteed to be non-negative and less than 10^5. There are no more than 20 elements in the original array. The original array is also sorted. The input is guaranteed to be valid.
Example 1
If the subset sums provided are this:
0 1 5 6 6 7 11 12
We can quickly deduce that the size of the original array is 3 since there are 8 (2^3) subsets. The output (i.e. the original array) for the above input is this:
1 5 6
Example 2
Input:
0 1 1 2 8 9 9 10
Output:
1 1 8
What I Tried
Since all elements are guaranteed to be non-negative, the largest integer in the input must be the total sum of the array. However, I am not sure how to proceed from there. By logic, I thought that the next (2^2 - 1) largest subset sums must each include all except one element from the array.
However, the above logic does not work when the original array is this:
1 1 8
That's why I am stuck and am not sure how to proceed.
Say S is the subset sum array and A is the original array. I'm assuming S is sorted.
|A| = log2(|S|)
S[0] = 0
S[1] = A[0]
S[2] = A[1]
S[3] = EITHER A[2] OR A[0] + A[1].
In general, S[i] for i >= 3 is either an element of A or a combination of the elements of A that you've already encountered. When processing S, skip once per combination of known elements of A that generates a given number, and add any remaining numbers to A. Stop when A gets to the right size.
E.g., if A=[1,2,7,8,9] then S will include [1,2,1+2=3,...,1+8=9, 2+7=9,9,...]. When processing S we skip over two 9s because of 1+8 and 2+7, then see a third 9 which we know must belong to A.
E.g., if S=[0,1,1,2,8,9,9,10] then we know A has 3 elements, that the first 2 elements of A are [1,1], when we get to 2 we skip it because 1+1=2, we append 8 and we're done because we have 3 elements.
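A compact Python sketch of this skip-the-known-combinations idea (my own code, only assuming S is the sorted list of all 2^n subset sums):
from collections import Counter

def recover(S):
    n = len(S).bit_length() - 1       # |A| = log2(|S|)
    A, sums = [], [0]                 # sums: subset sums of the elements found so far
    composite = Counter()             # sums formed by two or more known elements
    for x in S[1:]:
        if len(A) == n:
            break                     # A already has the right size
        if composite[x] > 0:
            composite[x] -= 1         # explained by known elements: skip once
        else:
            for s in sums[1:]:        # non-empty known subsets plus x give new composites
                composite[s + x] += 1
            sums += [s + x for s in sums]
            A.append(x)               # x must be a new element of A
    return A

print(recover([0, 1, 5, 6, 6, 7, 11, 12]))   # [1, 5, 6]
print(recover([0, 1, 1, 2, 8, 9, 9, 10]))    # [1, 1, 8]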
Here's an easy algorithm that doesn't require finding which subsets sum to a given number.
S ← input sequence
X ← empty sequence
While S has a non-zero element:
    d ← second smallest element of S (the smallest one is always zero)
    Insert d in X
    N ← empty sequence
    While S is not empty:
        z ← smallest element of S
        Remove both z and z+d from S (if S does not contain z+d, it's an error; remove only one instance of each if there are several)
        Insert z in N
    S ← N
Output X.
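A direct Python translation of this peeling algorithm (my own sketch; the multiset S is kept as a sorted list, so the removals are linear but the logic matches the steps above):
def recover_by_peeling(subset_sums):
    S = sorted(subset_sums)        # multiset of remaining subset sums
    X = []
    while S[-1] != 0:              # S still has a non-zero element
        d = S[1]                   # second smallest element (the smallest is always zero)
        X.append(d)
        N = []
        while S:
            z = S[0]               # smallest remaining element
            S.remove(z)            # remove one copy of z ...
            S.remove(z + d)        # ... and one copy of z + d (missing means bad input)
            N.append(z)
        S = N                      # z's were taken in increasing order, so N stays sorted
    return X

print(recover_by_peeling([0, 1, 1, 2, 8, 9, 9, 10]))   # [1, 1, 8]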
I revisited this question a few years later and finally managed to solve it! The approach I used is the same as what Dave devised earlier. Dave gave a pretty concrete explanation, so I'll just add some details and append my commented C++ code to make it a bit clearer.
Excluding the empty set, the two smallest elements in S have to be the two smallest elements in A. This is because every element is guaranteed to be non-negative. Having known the values of A[0] and A[1], we have something tangible to work with and build bottom-up.
Following that, any new element in S is either a summation of previous elements we have confirmed to be in A, or an entirely new element of A (i.e. S[3] = A[0] + A[1] or S[3] = A[2]). To keep track of this, we can use a frequency table such as an unordered_map<int, int> in C++. We then repeat this process for S[4], S[5], ... to continue filling up A.
To prune our search space, we can stop the moment the size of A reaches log2 of the size of S (i.e. |A| = log2(|S|)). This helps us drastically cut unnecessary computation and runtime.
#include <bits/stdc++.h>
using namespace std;
typedef vector<int> vi;

int main () {
    int n; cin >> n;
    vi S, A, sums;
    unordered_map<int, int> freq;
    for (int i = 0; i < (int) pow(2.0, n); i++) {
        int a; cin >> a;
        S.push_back(a);
    }
    sort(S.begin(), S.end());
    // edge cases
    A.push_back(S[1]);
    if (n == 1) { for (auto v : A) cout << v << "\n"; return 0; }
    A.push_back(S[2]);
    if (n == 2) { for (auto v : A) cout << v << "\n"; return 0; }
    sums.push_back(0); sums.push_back(S[1]); sums.push_back(S[2]);
    sums.push_back(S[1] + S[2]);
    freq[S[1] + S[2]]++; // IMPT: we only need frequency of composite elements
    for (int i = 3; i < (int) S.size(); i++) {
        if ((int) A.size() == n) break; // IMPT: prune the search space
        if (freq[S[i]] == 0) {
            // has to be a new element in A:
            // compute the new subset sums with the addition of the new element
            vi newsums = sums;
            for (int j = 0; j < (int) sums.size(); j++) {
                int y = sums[j] + S[i];
                newsums.push_back(y);
                if (j != 0) freq[y]++; // IMPT: coz we only need frequency of composite elements
            }
            // update A and subset sums
            sums = newsums;
            A.push_back(S[i]);
        } else {
            // has to be a summation of the previous elements in A
            freq[S[i]]--;
        }
    }
    for (auto v : A) cout << v << "\n";
}

Number of subarrays with same 'degree' as the array

So this problem was asked in a quiz and the problem goes like:
You are given an array 'a' with elements ranging from 1 to 10^6, and the size of the array can be at most 10^5. We are asked to find the number of subarrays with the same 'degree' as the original array. The degree of an array is defined as the frequency of the most frequently occurring element in the array. Multiple elements can have the same frequency.
I was stuck on this problem for about an hour but couldn't think of any solution. How do I solve it?
Sample Input:
first input:
1,2,2,3,1
first output: 2
second input:
1,1,2,1,2,2
second output: 4
The element that occurs most frequently is called the mode; this problem defines degree as the frequency count. Your tasks are:
Identify all of the mode values.
For each mode value, find the index range of that value. For instance, in the array
[1, 1, 2, 1, 3, 3, 2, 4, 2, 4, 5, 5, 5]
You have three modes (1 2 5) with a degree of 3. The index ranges are
1 - 0:3
2 - 2:8
5 - 10:12
You need to count all index ranges (subarrays) that include at least one of those three ranges.
I've tailored this example to have both basic cases: modes that overlap, and those that do not. Note that containment is a moot point: if you have an array where one mode's range contains another:
[0, 1, 1, 1, 0, 0]
You can ignore the outer one altogether: any subarray that contains 0 will also contain 1.
ANALYSIS
A subarray is defined by two numbers, the starting and ending indices. Since we must have 0 <= start <= end < len(array), this is the "handshake" problem between array bounds: there are N(N+1)/2 possible subarrays.
For 10^5 elements, you could just brute-force the problem from here: for each pair of indices, check whether that range contains any of the mode ranges. However, you can easily cut that down with interval recognition.
ALGORITHM
Step through the mode ranges, left to right. First, count all subarrays that include the first mode range [0:3]. There is only 1 possible start [0] and 10 possible ends [3:12]; that's 10 subarrays.
Now move to the second mode range, [2:8]. You need to count subarrays that include this, but exclude those you've already counted. Since there's an overlap, you need a starting point later than 0, or an ending point before 3. This second clause is not possible with the given range.
Thus, you consider starts [1:2] and ends [8:12]. That's 2 * 5 more subarrays.
For the third range [10:12] (no overlap), you need a starting point that does not include any other mode range. This means that any starting point in [3:10] will do. Since there's only one possible endpoint, you have 8 * 1, or 8 more subarrays.
Can you turn this into something formal?
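A rough Python sketch of this counting scheme (my own code, not from the answer): it finds the degree, collects the [first, last] range of every mode value, and counts the subarrays [s, e] that contain at least one of those ranges. On the 13-element example above it reproduces the 10 + 2*5 + 8 = 28 total.
from collections import Counter

def count_subarrays_containing_a_mode_range(a):
    first, last, cnt = {}, {}, Counter(a)
    for i, x in enumerate(a):
        first.setdefault(x, i)
        last[x] = i
    degree = max(cnt.values())
    # for each end position, remember the largest start s such that some mode
    # range [first[x], last[x]] fits completely inside [s, end]
    ends_at = {}
    for x in cnt:
        if cnt[x] == degree:
            ends_at[last[x]] = max(ends_at.get(last[x], -1), first[x])
    total, best = 0, -1          # best = largest valid start seen so far (-1: none yet)
    for e in range(len(a)):
        best = max(best, ends_at.get(e, -1))
        total += best + 1        # subarrays ending at e may start anywhere in 0..best
    return total

# the worked example from this answer: mode ranges [0:3], [2:8], [10:12]
print(count_subarrays_containing_a_mode_range([1, 1, 2, 1, 3, 3, 2, 4, 2, 4, 5, 5, 5]))  # 28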
Taking reference from the LeetCode editorial
https://leetcode.com/problems/degree-of-an-array/solution/
this solves the related problem of finding the shortest subarray with the same degree:
class Solution {
    public int findShortestSubArray(int[] nums) {
        Map<Integer, Integer> left = new HashMap(),
            right = new HashMap(), count = new HashMap();

        for (int i = 0; i < nums.length; i++) {
            int x = nums[i];
            if (left.get(x) == null) left.put(x, i);
            right.put(x, i);
            count.put(x, count.getOrDefault(x, 0) + 1);
        }

        int ans = nums.length;
        int degree = Collections.max(count.values());
        for (int x: count.keySet()) {
            if (count.get(x) == degree) {
                ans = Math.min(ans, right.get(x) - left.get(x) + 1);
            }
        }
        return ans;
    }
}

Optimal way to find the number of operations required to convert all K numbers to lie in the range [L,R] (i.e. L≤x≤R)

I am solving this question, which requires some optimized technique; I can only think of the brute force method, which requires combinatorics.
Given an array A consisting of n integers. We call an integer "good" if it lies in the range [L,R] (i.e. L≤x≤R). We need to make sure that if we pick any K integers from the array, at least one of them is a good integer.
To achieve this, in a single operation we are allowed to increase/decrease any element of the array by one.
What will be the minimum number of operations we will need for a fixed k,
i.e. for each k = 1 to n?
input:
L R
1 2
A=[ 1 3 3 ]
output:
for k=1 : 2
for k=2 : 1
for k=3 : 0
For k=1, you have to convert both the 3s into 2s to make sure that if
you select any one of the 3 integers, the selected integer is good.
For k=2, one of the possible ways is to convert one of the 3s into 2.
For k=3, no operation is needed as 1 is a good integer.
As burnpanck has explained in his answer, to make sure that whenever you pick k elements from the array at least one of them is in the range [L,R], we need at least n - k + 1 numbers in the range [L,R] in the array.
So, first, for each element we calculate the cost to make it a valid element (one in the range [L,R]) and store those costs in an array cost.
We notice that:
For k = 1, the minimum cost is the sum of the array cost.
For k = 2, the minimum cost is the sum of cost minus its largest element.
For k = 3, the minimum cost is the sum of cost minus its two largest elements.
...
So, we need a prefixSum array whose ith position is the sum of the sorted cost array from index 0 to i.
After calculating prefixSum, we can answer the result for each k in O(1).
So here is the algorithm in Java; notice the time complexity is O(n log n):
int[] cost = new int[n];
for (int i = 0; i < n; i++)
    cost[i] = Math.max(0, Math.max(L - A[i], A[i] - R)); // min ops to bring A[i] into [L,R]; assumes A, L, R, n are already read
Arrays.sort(cost);

int[] prefix = new int[n];
for (int i = 0; i < n; i++)
    prefix[i] = cost[i] + (i > 0 ? prefix[i - 1] : 0);

for (int i = n - 1; i >= 0; i--)
    System.out.println("Result for k = " + (n - i) + " is " + prefix[i]);
To be sure that picking any k elements gives at least one valid element, you need no more than k-1 invalid elements in your set. You therefore need to find the cheapest way to make enough elements valid. I would do this as follows: in a single pass, generate a map that counts how many elements in the set need n operations to be made valid. Then, you clearly want to take the elements that need the fewest operations, so take the required number of elements in ascending order of required operations and sum the operation counts.
In python:
def min_ops(L, R, A_set):
    n_ops = dict()  # maps "number of ops needed" -> "how many elements need that many"
    for a in A_set:  # loop over all a in the set A_set
        n = max(0, max(a - R, L - a))  # the number of operations required to make a valid
        n_ops[n] = n_ops.get(n, 0) + 1  # increment the count for key n (default 0)
    allret = []  # holds the result for every k
    for k in range(1, len(A_set) + 1):  # iterate over all k in [1, N]
        n_good_required = len(A_set) - k + 1
        ret = 0
        # iterate over all (key, value) pairs of the mapping, sorted by key.
        # The key is the number of ops required, the value the number of elements available.
        for n, nel in sorted(n_ops.items()):
            if n_good_required <= 0:
                break  # we already have enough valid elements
            ret += n * min(nel, n_good_required)
            n_good_required -= nel
        allret.append(ret)  # append the answer for this k to the result list
    return allret
As an example:
A_set = [1,3,3,6,8,5,4,7]
L,R = 4,6
For each A, we find how many operations we need to make it valid:
n = [3,1,1,0,2,0,0,1]
(i.e. 1 needs 3 steps, 3 needs one, and so on)
Then we count them:
n_ops = {
0: 3, # we already have three valid elements
1: 3, # three elements that require one op
2: 1,
3: 1, # and finally one that requires 3 ops
}
Now, for each k, we find out how many valid elements we need in the set,
e.g. for k = 4, we need at most 3 invalid in the set of 8, so we need 5 valid ones.
Thus:
ret = 0
n_good_required = 5
with n=0, we have 3 so take all of them
ret = 0
n_good_required = 2
with n=1, we have 3, but we need just two, so take those
ret = 2
we're finished
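Running the corrected min_ops above on this example (and on the example from the question) should give, assuming I've traced it correctly:
print(min_ops(4, 6, [1, 3, 3, 6, 8, 5, 4, 7]))   # [8, 5, 3, 2, 1, 0, 0, 0] - the k = 4 entry matches the walkthrough
print(min_ops(1, 2, [1, 3, 3]))                  # [2, 1, 0] - matches the expected output in the question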

Find pairs that sum to X in an array of integers of size N having elements in the range 0 to N-1

It is an interview question. We have an array of integers of size N containing elements between 0 and N-1. A number may occur more than twice. The goal is to find pairs that sum to a given number X.
I did it using an auxiliary array holding the count of each element of the primary array, then rearranging the primary array according to the auxiliary array so that it is sorted, and then searching for pairs.
But the interviewer wanted constant space complexity, so I suggested sorting the array, but that is an O(n log n) solution; he wanted O(n).
Is there any method to do it in O(n) without any extra space?
No, I don't believe so. You either need extra space to be able to "sort" the data in O(n) by assigning to buckets, or you need to sort in-place which will not be O(n).
Of course, there are always tricks if you can make certain assumptions. For example, if N < 64K and your integers are 32 bits wide, you can multiplex the space required for the count array on top of the current array.
In other words, use the lower 16 bits for storing the values in the array and then use the upper 16 bits for your array where you simply store the count of values matching the index.
Let's use a simplified example where N == 8. Hence the array is 8 elements in length and the integers at each element are less than 8, though they're eight bits wide. That means (initially) the top four bits of each element are zero.
0 1 2 3 4 5 6 7 <- index
(0)7 (0)6 (0)2 (0)5 (0)3 (0)3 (0)7 (0)7
The pseudo-code for an O(n) adjustment which stores the count into the upper four bits is:
for idx = 0 to N:
array[array[idx] % 16] += 16 // add 1 to top four bits
By way of example, consider the first index which stores 7. That assignment statement will therefore add 16 to index 7, upping the count of sevens. The modulo operator is to ensure that values which have already been increased only use the lower four bits to specify the array index.
So the array eventually becomes:
0 1 2 3 4 5 6 7 <- index
(0)7 (0)6 (1)2 (2)5 (0)3 (1)3 (1)7 (3)7
Then you have your new array in constant space and you can just use int (array[X] / 16) to get the count of how many X values there were.
But, that's pretty devious and requires certain assumptions as mentioned before. It may well be that level of deviousness the interviewer was looking for, or they may just want to see how a prospective employee handles the Kobayashi Maru of coding :-)
Once you have the counts, it's a simple matter to find pairs that sum to a given X, still in O(N). The basic approach is to take the cartesian product of the counts. For example, again consider that N is 8 and you want pairs that sum to 8. Ignoring the lower half of the multiplexed array above (since you're only interested in the counts), you have:
0 1 2 3 4 5 6 7 <- index
(0) (0) (1) (2) (0) (1) (1) (3)
What you basically do is step through the array one by one getting the product of the counts of numbers that sum to 8.
For 0, you would need to add 8 (which doesn't exist).
For 1, you need to add 7. The product of the counts is 0 x 3, so that gives nothing.
For 2, you need to add 6. The product of the counts is 1 x 1, so that gives one occurrence of (2,6).
For 3, you need to add 5. The product of the counts is 2 x 1, so that gives two occurrences of (3,5).
For 4, it's a special case since you can't use the product. In this case it doesn't matter since there are no 4s but, if there were only one, it couldn't form a pair. Where the two numbers you're pairing are the same, the formula (assuming there are m of them) is 1 + 2 + 3 + ... + m-1. With a bit of mathematical wizardry, that turns out to be m(m-1)/2.
Beyond that, you're pairing with values to the left, which you've already done so you stop.
So what you have ended up with from
a b c d e f g h <- identifiers
7 6 2 5 3 3 7 7
is:
(2,6) (3,5) (3,5)
(c,b) (e,d) (f,d) <- identifiers
No other values add up to 8.
The following program illustrates this in operation:
#include <stdio.h>

int arr[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 4, 4, 4, 4};
#define SZ (sizeof(arr) / sizeof(*arr))

static void dumpArr (char *desc) {
    int i;
    printf ("%s:\n Indexes:", desc);
    for (i = 0; i < SZ; i++) printf (" %2d", i);
    printf ("\n Counts :");
    for (i = 0; i < SZ; i++) printf (" %2d", arr[i] / 100);
    printf ("\n Values :");
    for (i = 0; i < SZ; i++) printf (" %2d", arr[i] % 100);
    puts ("\n=====\n");
}
That bit above is just for debugging. The actual code to do the bucket sort is below:
int main (void) {
    int i, j, find, prod;
    dumpArr ("Initial");

    // Sort array in O(1) space - bucket sort.
    for (i = 0; i < SZ; i++) {
        arr[arr[i] % 100] += 100;
    }
And we finish with the code to do the pairings:
dumpArr ("After bucket sort");
// Now do pairings.
find = 8;
for (i = 0, j = find - i; i <= j; i++, j--) {
if (i == j) {
prod = (arr[i]/100) * (arr[i]/100-1) / 2;
if (prod > 0) {
printf ("(%d,%d) %d time(s)\n", i, j, prod);
}
} else {
if ((j >= 0) && (j < SZ)) {
prod = (arr[i]/100) * (arr[j]/100);
if (prod > 0) {
printf ("(%d,%d) %d time(s)\n", i, j, prod);
}
}
}
}
return 0;
}
The output is:
Initial:
Indexes: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Counts : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Values : 3 1 4 1 5 9 2 6 5 3 5 8 9 4 4 4 4
=====
After bucket sort:
Indexes: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Counts : 0 2 1 2 5 3 1 0 1 2 0 0 0 0 0 0 0
Values : 3 1 4 1 5 9 2 6 5 3 5 8 9 4 4 4 4
=====
(2,6) 1 time(s)
(3,5) 6 time(s)
(4,4) 10 time(s)
and, if you examine the input digits, you'll find the pairs are correct.
This may be done by converting the input array to a list of counters "in-place" in O(N) time. Of course this assumes the input array is not immutable. There is no need for any additional assumptions about unused bits in each array element.
Start with the following pre-processing: try to move each array element to the position determined by the element's value; move the element at that position to the position determined by its own value; continue until either:
the next element is moved to the position from where this cycle was started, or
the next element cannot be moved because it is already at the position corresponding to its value (in this case, put the current element at the position from where this cycle was started).
After pre-processing, every element is either located at its "proper" position or "points" to its "proper" position. If we had an unused bit in each element, we could convert each properly positioned element into a counter, initialize it with "1", and let each "pointing" element increment the appropriate counter; the additional bit would let us distinguish counters from values. The same thing may be done without any additional bits, but with a less trivial algorithm.
Count how many values in the array are equal to 0 or 1. If there are any such values, reset them to zero and update the counters at positions 0 and/or 1. Set k=2 (the size of the array prefix where values less than k have been replaced by counters). Apply the following procedure for k = 2, 4, 8, ...
Find elements at positions k .. 2k-1 which are at their "proper" position and replace them with counters, with initial value "1".
For any element at positions k .. 2k-1 with a value in 2 .. k-1, update the corresponding counter at positions 2 .. k-1 and reset the value to zero.
For any element at positions 0 .. 2k-1 with a value in k .. 2k-1, update the corresponding counter at positions k .. 2k-1 and reset the value to zero.
All iterations of this procedure together have O(N) time complexity. At the end, the input array is completely converted into an array of counters. The only difficulty is that up to two counters at positions 0 .. 2k-1 may have values greater than k-1, but this can be mitigated by storing two additional indexes for each of them and processing the elements at those indexes as counters instead of values.
After the array of counters is produced, we can just multiply pairs of counters (where the corresponding pair of indexes sums to X) to get the required counts of pairs.
Comparison sorting is O(n log n); however, if you can assume the numbers are bounded (and you can, because you're only interested in numbers that sum to a certain value) you can use a radix sort. Radix sort takes O(kN) time, where "k" is the length of the key. That's a constant in your case, so I think it's fair to say O(N).
Generally I would however solve this using a hash, e.g.
http://41j.com/blog/2012/04/find-items-in-an-array-that-sum-to-15/
Though that is of course not a linear time solution.
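For completeness, a minimal sketch of the hash/counting approach (my own code; it uses O(n) extra space, unlike the in-place trick above). For each element, the number of previously seen elements equal to X - v is added to the pair count:
from collections import Counter

def count_pairs_with_sum(a, x):
    seen = Counter()
    pairs = 0
    for v in a:
        pairs += seen[x - v]   # every earlier occurrence of x - v pairs with v
        seen[v] += 1
    return pairs

# same data as the C example above: 1 + 6 + 10 = 17 pairs summing to 8
print(count_pairs_with_sum([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 4, 4, 4, 4], 8))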

Finding the maximum subsequence of a binary array that has an equal number of 1s and 0s

I found the following problem on the internet, and would like to know how I would go about solving it:
You are given an array containing 0s and 1s. Find an O(n) time and O(1) space algorithm to find the maximum subsequence which has an equal number of 1s and 0s.
Examples:
10101010 -
The longest subsequence that satisfies the problem is the input itself
1101000 -
The longest subsequence that satisfies the problem is 110100
Update.
I have to completely rephrase my answer. (If you had upvoted the earlier version, well, you were tricked!)
Let's sum up the easy case again, to get it out of the way:
Find the longest prefix of the bit-string containing an equal number of 1s and 0s.
This is trivial: a simple counter is needed, counting how many more 1s we have seen than 0s, while iterating over the bitstring. The position where this counter becomes zero for the last time is the end of the longest sought prefix. O(N) time, O(1) space. (I'm completely convinced by now that this is what the original problem asked for.)
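In Python, this easy (prefix) case is just a few lines (my own sketch of the counter described above):
def longest_balanced_prefix(bits):
    balance, end = 0, 0
    for i, b in enumerate(bits, 1):
        balance += 1 if b else -1     # how many more 1s than 0s so far
        if balance == 0:
            end = i                   # last position where the counter returns to zero
    return bits[:end]

print(longest_balanced_prefix([1, 1, 0, 1, 0, 0, 0]))   # [1, 1, 0, 1, 0, 0] -> "110100"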
Now let's switch to the more difficult version of the problem: we no longer require subsequences to be prefixes - they can start anywhere.
After some back and forth thought, I suspected there might be no linear algorithm for this. For example, consider the prefix "111111111111111111...". Every single 1 of those may be the start of the longest subsequence; there is no candidate start position that dominates (i.e. always gives better solutions than) any other position, so we can't throw any of them away (O(N) space), and at every step we must be able to select the best start (the one with an equal number of 1s and 0s up to the current position) out of linearly many candidates, in O(1) time. It turns out this is doable, and easily doable too, since we can select the candidate based on the running sum of 1s (+1) and 0s (-1); this sum takes at most N distinct values, and we can store the first position at which each sum is reached in 2N cells - see pmod's answer below (yellowfog's comments and geometric insight too).
Failing to spot this trick, I had replaced a fast but wrong algorithm with a slow but sure one (since correct algorithms are preferable to wrong ones!):
Build an array A with the accumulated number of 1s from the start up to each position. For example, if the bitstring is "001001001", then the array would be [0, 0, 1, 1, 1, 2, 2, 2, 3]. Using this, we can test in O(1) whether the subsequence (i,j), inclusive, is valid: isValid(i, j) = (j - i + 1 == 2 * (A[j] - A[i - 1])), i.e. it is valid if its length is double the number of 1s in it. For example, the subsequence (3,6) is valid because 6 - 3 + 1 == 2 * (A[6] - A[2]) = 4.
Plain old double loop:
maxSubsLength = 0
for i = 1 to N - 1
    for j = i + 1 to N
        if isValid(i, j) ... #maintain maxSubsLength
    end
end
This can be sped up a bit using some branch-and-bound by skipping i/j sequences which are shorter than the current maxSubsLength, but asymptotically this is still O(n^2). Slow, but with a big plus on its side: correct!
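And a sketch of the linear-time idea mentioned above (running sum plus the first occurrence of each sum, essentially what pmod's answer below implements; my own code, and it uses O(n) extra space for the dictionary):
def longest_balanced(bits):
    first = {0: -1}                    # first index at which each running sum appears
    best_len, best_range, s = 0, None, 0
    for i, b in enumerate(bits):
        s += 1 if b else -1            # running sum: +1 for a 1, -1 for a 0
        if s in first:                 # the stretch (first[s], i] is balanced
            if i - first[s] > best_len:
                best_len, best_range = i - first[s], (first[s] + 1, i)
        else:
            first[s] = i
    return best_len, best_range

print(longest_balanced([1, 1, 0, 1, 0, 0, 0]))   # (6, (0, 5)) i.e. "110100"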
Strictly speaking, the answer is that no such algorithm exists because the language of strings consisting of an equal number of zeros and ones is not regular.
Of course everyone ignores the fact that storing an integer of magnitude n is O(log n) in space and treats it as O(1) in space. :-) Pretty much all big-O's, including time ones, are full of (or rather empty of) missing log n factors, or equivalently, they assume n is bounded by the size of a machine word, which means you're really looking at a finite problem and everything is O(1).
New solution:
Suppose that for an n-bit input array we keep a 2*n-sized array to track positions. The array elements must be large enough to hold the maximum position number; for a 256-bit input, a 256x2 array of bytes is enough (a byte can hold 255, the maximum position).
Moving from the first position of the bit-array, we store positions into this array, starting from its middle (index n), using these rules:
1. Increment the index when we pass a "1" bit and decrement it when we pass a "0" bit.
2. When we meet an already initialized array element, don't change it; instead take the difference between positions (the current one minus the one stored in the array element) - this is the size of a local maximum sequence.
3. Every time we meet a local maximum, compare it with the global maximum and update the latter if it is smaller.
For example: the bit sequence is 0,0,0,1,0,1
the initial array index is n
set arr[n] = 0 (position)
bit 0 -> index--
set arr[n-1] = 1
bit 0 -> index--
set arr[n-2] = 2
bit 0 -> index--
set arr[n-3] = 3
bit 1 -> index++
arr[n-2] already contains 2 -> the local max sequence [3,2] becomes the absolute maximum
(we do not overwrite arr[n-2])
bit 0 -> index--
arr[n-3] already contains 3 -> the local max sequence [4,3] is not the absolute maximum
bit 1 -> index++
arr[n-2] already contains 2 -> the local max sequence [5,2] is the absolute maximum
Thus, we pass through the whole bit array only once.
Does this solve the task?
input:
n - number of bits
a[n] - input bit-array

track_pos[2*n] = {0,};
ind = n;
/* start from position 1 since zero has the
   meaning "track_pos[x] is not initialized" */
for (i = 1; i < n+1; i++) {
    if (track_pos[ind]) {
        seq_size = i - track_pos[ind];
        if (glob_seq_size < seq_size) {
            /* store as intermediate result */
            glob_seq_size = seq_size;
            glob_pos_from = track_pos[ind];
            glob_pos_to   = i;
        }
    } else {
        track_pos[ind] = i;
    }
    if (a[i-1])
        ind++;
    else
        ind--;
}

output:
glob_seq_size - length of maximum sequence
glob_pos_from - start position of max sequence
glob_pos_to   - end position of max sequence
In this thread ( http://discuss.techinterview.org/default.asp?interview.11.792102.31 ), poster A.F. has given an algorithm that runs in O(n) time and uses O(sqrt(n log n)) bits.
Brute force: start with the maximum length (the whole array) and count the 0's and 1's; if the counts are equal, you are finished. Otherwise reduce the search length by 1 and run the same check on every subsequence of that length, and so on. Stop when the length reaches 0.
As was pointed out by user "R..", there is no solution, strictly speaking, unless you ignore the "log n" space complexity. In the following, I will consider that the array length fits in a machine register (e.g. a 64-bit word) and that a machine register has size O(1).
The important point to notice is that if there are more 1's than 0's, then the maximum subsequence that you are looking for necessarily includes all the 0's, and that many 1's. So here is the algorithm:
Notations: the array has length n, indices are counted from 0 to n-1.
First pass: count the number of 1's (c1) and 0's (c0). If c1 = c0 then your maximal subsequence is the entire array (end of algorithm). Otherwise, let d be the digit which appears less often (d = 0 if c0 < c1, otherwise d = 1).
Compute m = min(c0, c1) * 2. This is the size of the subsequence you are looking for.
Second pass: scan the array to find the index j of the first occurrence of d.
Compute k = max(j, n - m). The subsequence starts at index k and has length m.
Note that there could be several solutions (several subsequences of maximal length which match the criterion).
In plain words: assuming that there are more 1's than 0's, I consider the smallest subsequence which contains all the 0's. By definition, that subsequence is surrounded by bunches of 1's, so I just grab enough 1's from the sides.
Edit: as was pointed out, this does not work... The "important point" is actually wrong.
Try something like this:
/* bit(n) is a macro that returns the nth bit, 0 or 1. len is number of bits */
int c[2] = {0,0};
int d, i, a, b, p;

for(i=0; i<len; i++) c[bit(i)]++;
d = c[1] < c[0];
if (c[d] == 0) return; /* all bits identical; fail */

for(i=0; bit(i)!=d; i++);
a = b = i;

for(p=0; i<len; i++) {
    p += 2*bit(i)-1;
    if (!p) b = i;
}
if (a == b) { /* account for case where we need bits before the first d */
    b = len - 1;
    a -= abs(p);
}
printf("maximal subsequence consists of bits %d through %d\n", a, b);
Completely untested but modulo stupid mistakes it should work. Based on my reply to Thomas's answer which failed in certain cases.
New Solution:
Space complexity of O(1) and time complexity O(n^2)
int iStart = 0, iEnd = 0;
int[] arrInput = { 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0 };

for (int i = 0; i < arrInput.Length; i++)
{
    int iCurrEndIndex = i;
    int iSum = 0;
    for (int j = i; j < arrInput.Length; j++)
    {
        iSum = (arrInput[j] == 1) ? iSum + 1 : iSum - 1;
        if (iSum == 0)
        {
            iCurrEndIndex = j;   // farthest balanced end for this start
        }
    }
    if ((iEnd - iStart) < (iCurrEndIndex - i))
    {
        iEnd = iCurrEndIndex;
        iStart = i;
    }
}
I am not sure whether the array you are referring to is an int array of 0s and 1s or a bit array.
If it's a bit array, here is my approach:
#include <stdio.h>

int isEvenBitCount(int n)
{
    // n ... decimal equivalent of the input binary sequence
    int cnt1 = 0, cnt0 = 0;
    while (n) {
        if (n & 0x01) { printf("1 "); cnt1++; }
        else          { printf("0 "); cnt0++; }
        n = n >> 1;
    }
    printf("\n");
    return cnt0 == cnt1;
}

int main()
{
    int i = 40, j = 25, k = 35;
    isEvenBitCount(i) ? printf("-->Yes\n") : printf("-->No\n");
    isEvenBitCount(j) ? printf("-->Yes\n") : printf("-->No\n");
    isEvenBitCount(k) ? printf("-->Yes\n") : printf("-->No\n");
    return 0;
}
With the use of bitwise operations, the time complexity is almost O(1) as well.
