One Pointer Search vs Two Pointer Search - arrays

Which algorithm is better in terms of time complexity and space complexity?
temp_list = [2,4,1,3,7,1,4,2,9,5,6,8]

def get_item_from_list1(collection, target):
    collection_length: int = len(collection)
    if collection_length == 0:
        return None
    # two pointer algorithm to find the element in array
    left_index: int = 0
    right_index: int = collection_length - 1
    while left_index <= right_index:
        if collection[left_index] == target:
            return collection[left_index]
        if collection[right_index] == target:
            return collection[right_index]
        left_index += 1
        right_index -= 1
    return None

def get_item_from_list2(collection, target):
    for item in collection:
        if item == target:
            return item
    return None

get_item_from_list1(temp_list, 9)
get_item_from_list2(temp_list, 9)
I am expecting the two-pointer search algorithm to perform better on a large list.
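One way to check that expectation empirically is to time both functions on a large input; a minimal sketch using timeit (the list size, contents, and target below are arbitrary choices, and it assumes the two functions above are already defined):
import random
import timeit

large_list = [random.randint(0, 1_000_000) for _ in range(1_000_000)]
target = large_list[len(large_list) // 2]   # an element somewhere in the middle

print(timeit.timeit(lambda: get_item_from_list1(large_list, target), number=10))
print(timeit.timeit(lambda: get_item_from_list2(large_list, target), number=10))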

Related

Algorithm for finding a combination of integers greater than a specified value

I've been trying to develop an algorithm that would take an input array and return an array such that the integers contained within are the combination of integers with the smallest sum greater than a specified value (limited to a combination of size k).
For instance, if I have the array [1,4,5,10,17,34] and I specified a minimum sum of 31, the function would return [1,4,10,17]. Or, if I wanted it limited to a max array size of 2, it would just return [34].
Is there an efficient way to do this? Any help would be appreciated!
Something like this? It returns the value, but could easily be adapted to return the sequence.
Algorithm: assuming sorted input, test the k-length combinations for the smallest sum greater than min, stop after the first array element greater than min.
JavaScript:
var roses = [1,4,5,10,17,34]

function f(index, current, k, best, min, K)
{
    if (roses.length == index)
        return best
    for (var i = index; i < roses.length; i++)
    {
        var candidate = current + roses[i]
        if (candidate == min + 1)
            return candidate
        if (candidate > min)
            best = best < 0 ? candidate : Math.min(best, candidate)
        if (roses[i] > min)
            break
        if (k + 1 < K)
        {
            var nextCandidate = f(i + 1, candidate, k + 1, best, min, K)
            if (nextCandidate > min)
                best = best < 0 ? nextCandidate : Math.min(best, nextCandidate)
            if (best == min + 1)
                return best
        }
    }
    return best
}
Output:
console.log(f(0,0,0,-1,31,3))
32
console.log(f(0,0,0,-1,31,2))
34
This is more of a hybrid solution, combining Dynamic Programming and Backtracking. We could use backtracking alone to solve this problem, but then we would have to do an exhaustive search (2^N) to find the solution. The DP part prunes the search space for the backtracking.
import sys
from collections import OrderedDict

MinimumSum = 31
MaxArraySize = 4
InputData = sorted([1,4,5,10,17,34])
# Input part is over

Target = MinimumSum + 1
Previous, Current = OrderedDict({0:0}), OrderedDict({0:0})
for Number in InputData:
    for CurrentNumber, Count in Previous.items():
        if Number + CurrentNumber in Current:
            Current[Number + CurrentNumber] = min(Current[Number + CurrentNumber], Count + 1)
        else:
            Current[Number + CurrentNumber] = Count + 1
    Previous = Current.copy()

FoundSolution = False
for Number, Count in Previous.items():
    if (Number >= Target and Count < MaxArraySize):
        MaxArraySize = Count
        Target = Number
        FoundSolution = True
        break

if not FoundSolution:
    print "Not possible"
    sys.exit(0)
else:
    print Target, MaxArraySize

FoundSolution = False
Solution = []

def Backtrack(CurrentIndex, Sum, MaxArraySizeUsed):
    global FoundSolution
    if (MaxArraySizeUsed <= MaxArraySize and Sum == Target):
        FoundSolution = True
        return
    if (CurrentIndex == len(InputData) or MaxArraySizeUsed > MaxArraySize or Sum > Target):
        return
    for i in range(CurrentIndex, len(InputData)):
        Backtrack(i + 1, Sum, MaxArraySizeUsed)
        if (FoundSolution): return
        Backtrack(i + 1, Sum + InputData[i], MaxArraySizeUsed + 1)
        if (FoundSolution):
            Solution.append(InputData[i])
            return

Backtrack(0, 0, 0)
print sorted(Solution)
Note: as per the examples given in the question, the resulting sum is strictly greater than MinimumSum and the resulting array size is strictly less than MaxArraySize.
For this input
MinimumSum = 31
MaxArraySize = 4
InputData = sorted([1,4,5,10,17,34])
Output is
[5, 10, 17]
whereas, for this input
MinimumSum = 31
MaxArraySize = 3
InputData = sorted([1,4,5,10,17,34])
Output is
[34]
Explanation
Target = MinimumSum + 1
Previous, Current = OrderedDict({0:0}), OrderedDict({0:0})
for Number in InputData:
    for CurrentNumber, Count in Previous.items():
        if Number + CurrentNumber in Current:
            Current[Number + CurrentNumber] = min(Current[Number + CurrentNumber], Count + 1)
        else:
            Current[Number + CurrentNumber] = Count + 1
    Previous = Current.copy()
This part of the program finds, for every sum that can be formed from the input data (from 1 up to the sum of all the input numbers), the minimum number of input numbers required to form it. It is a dynamic-programming solution in the style of the knapsack problem; you can read about that on the internet.
FoundSolution = False
for Number, Count in Previous.items():
    if (Number >= Target and Count < MaxArraySize):
        MaxArraySize = Count
        Target = Number
        FoundSolution = True
        break

if not FoundSolution:
    print "Not possible"
    sys.exit(0)
else:
    print Target, MaxArraySize
This part of the program finds the Target value that matches the MaxArraySize criterion.
def Backtrack(CurrentIndex, Sum, MaxArraySizeUsed):
    global FoundSolution
    if (MaxArraySizeUsed <= MaxArraySize and Sum == Target):
        FoundSolution = True
        return
    if (CurrentIndex == len(InputData) or MaxArraySizeUsed > MaxArraySize or Sum > Target):
        return
    for i in range(CurrentIndex, len(InputData)):
        Backtrack(i + 1, Sum, MaxArraySizeUsed)
        if (FoundSolution): return
        Backtrack(i + 1, Sum + InputData[i], MaxArraySizeUsed + 1)
        if (FoundSolution):
            Solution.append(InputData[i])
            return

Backtrack(0, 0, 0)
Now that we know a solution exists, we want to reconstruct it. We use the backtracking technique here. You can easily find a lot of good tutorials about this on the internet as well.

Space-efficient algorithm for finding the largest balanced subarray?

Given an array of 0s and 1s, find the maximum subarray such that the numbers of 0s and 1s are equal.
This needs to be done in O(n) time and O(1) space.
I have an algorithm which does it in O(n) time and O(n) space. It uses a prefix-sum array and exploits the fact that if the numbers of 0s and 1s are the same, then
sumOfSubarray = lengthOfSubarray/2
#include<iostream>
#define M 15
using namespace std;

void getSum(int arr[], int prefixsum[], int size) {
    int i;
    prefixsum[0] = arr[0] = 0;
    prefixsum[1] = arr[1];
    for (i = 2; i <= size; i++) {
        prefixsum[i] = prefixsum[i-1] + arr[i];
    }
}

void find(int a[], int &start, int &end) {
    while (start < end) {
        int mid = (start + end) / 2;
        if ((end - start + 1) == 2 * (a[end] - a[start-1]))
            break;
        if ((end - start + 1) > 2 * (a[end] - a[start-1])) {
            if (a[start] == 0 && a[end] == 1)
                start++;
            else
                end--;
        } else {
            if (a[start] == 1 && a[end] == 0)
                start++;
            else
                end--;
        }
    }
}

int main() {
    int size, arr[M], ps[M], start = 1, end, width;
    cin >> size;
    arr[0] = 0;
    end = size;
    for (int i = 1; i <= size; i++)
        cin >> arr[i];
    getSum(arr, ps, size);
    find(ps, start, end);
    if (start != end)
        cout << (start-1) << " " << (end-1) << endl;
    else
        cout << "No soln\n";
    return 0;
}
Now my algorithm is O(n) time and O(Dn) space, where Dn is the total imbalance in the list.
This solution doesn't modify the list.
Let D be the running difference between the number of 1s and 0s seen so far in the list.
First, let's step linearly through the list and calculate D, just to see how it works.
I'm going to use this list as an example: l=1100111100001110
Element D
null 0
1 1
1 2 <-
0 1
0 0
1 1
1 2
1 3
1 4
0 3
0 2
0 1
0 0
1 1
1 2
1 3
0 2 <-
Finding the longest balanced subarray is equivalent to finding 2 equal elements in D that are the farthest apart (in this example, the two 2s marked with arrows).
The longest balanced subarray is between the first occurrence of the element, plus one position, and the last occurrence of the element (first arrow + 1 and last arrow: 00111100001110).
Remark:
The longest subarray will always be between 2 elements of D that are
between [0,Dn], where Dn is the last element of D (Dn = 2 in the
previous example). Dn is the total imbalance between 1s and 0s in the
list. (Or [Dn,0] if Dn is negative.)
In this example it means that I don't need to "look" at 3s or 4s.
Proof:
Let Dn > 0.
Suppose there is a subarray delimited by P (P > Dn). Since 0 < Dn < P,
before reaching the first element of D which is equal to P we reach one
element equal to Dn. Thus, since the last element of the list is equal to Dn, there is a longer subarray delimited by Dn values than the one delimited by P values, and therefore we don't need to look at P values.
P cannot be less than 0 for the same reasons.
The proof is the same for Dn < 0.
Now let's work on D. D isn't random: the difference between 2 consecutive elements is always 1 or -1, and there is an easy bijection between D and the initial list. Therefore I have 2 solutions for this problem:
the first one is to keep track of the first and last appearance of each
element in D that is between 0 and Dn (cf. remark);
the second is to transform the list into D, and then work on D.
FIRST SOLUTION
For the time being I cannot find a better approach than the first one:
First calculate Dn (in O(n)). Dn = 2.
Second, instead of creating D, create a dictionary where the keys are the values of D (between 0 and Dn) and the value of each key is a pair (a,b), where a is the first occurrence of the key and b the last.
Element D DICTIONARY
null 0 {0:(0,0)}
1 1 {0:(0,0) 1:(1,1)}
1 2 {0:(0,0) 1:(1,1) 2:(2,2)}
0 1 {0:(0,0) 1:(1,3) 2:(2,2)}
0 0 {0:(0,4) 1:(1,3) 2:(2,2)}
1 1 {0:(0,4) 1:(1,5) 2:(2,2)}
1 2 {0:(0,4) 1:(1,5) 2:(2,6)}
1 3 {0:(0,4) 1:(1,5) 2:(2,6)}
1 4 {0:(0,4) 1:(1,5) 2:(2,6)}
0 3 {0:(0,4) 1:(1,5) 2:(2,6)}
0 2 {0:(0,4) 1:(1,5) 2:(2,9)}
0 1 {0:(0,4) 1:(1,10) 2:(2,9)}
0 0 {0:(0,11) 1:(1,10) 2:(2,9)}
1 1 {0:(0,11) 1:(1,12) 2:(2,9)}
1 2 {0:(0,11) 1:(1,12) 2:(2,13)}
1 3 {0:(0,11) 1:(1,12) 2:(2,13)}
0 2 {0:(0,11) 1:(1,12) 2:(2,15)}
and you choose the element with the largest difference: 2:(2,15), which is l[3:15]=00111100001110 (with l=1100111100001110).
Time complexity:
2 passes: the first one to calculate Dn, the second one to build the
dictionary.
Find the max in the dictionary.
Total is O(n).
Space complexity:
the current element in D: O(1); the dictionary: O(Dn).
I don't take 3 and 4 in the dictionary because of the remark.
The complexity is O(n) time and O(Dn) space (in the average case Dn <<
n).
I guess there may be a better way than a dictionary for this approach.
Any suggestion is welcome.
Hope it helps.
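For reference, here is a small Python sketch of this first solution as I understand it (the function and variable names are my own; first_last plays the role of the dictionary above, keyed by the values of D between 0 and Dn):
def longest_balanced_dict(l):
    # Dn: total imbalance (number of 1s minus number of 0s)
    dn = sum(1 if c == '1' else -1 for c in l)
    lo, hi = (0, dn) if dn >= 0 else (dn, 0)

    first_last = {0: (0, 0)}          # value of D -> (first index, last index)
    d = 0
    for i, c in enumerate(l, start=1):
        d += 1 if c == '1' else -1
        if lo <= d <= hi:             # per the remark, other values can be ignored
            first, _ = first_last.get(d, (i, i))
            first_last[d] = (first, i)

    start, end = max(first_last.values(), key=lambda ab: ab[1] - ab[0])
    return l[start:end]               # the balanced subarray itself

print(longest_balanced_dict("1100111100001110"))   # 00111100001110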
SECOND SOLUTION (JUST AN IDEA NOT THE REAL SOLUTION)
The second way to proceed would be to transform your list into D (since it's easy to go back from D to the list, that's OK). This takes O(n) time and O(1) space, since I transform the list in place, even though it might not be a "valid" O(1).
Then from D you need to find the 2 equal elements that are the farthest apart.
It looks like finding the longest cycle in a linked list. A modification of Richard Brent's algorithm might return the longest cycle, but I don't know how to do it, and it would take O(n) time and O(1) space.
Once you find the longest cycle, go back to the first list and print it.
This algorithm would take O(n) time and O(1) space complexity.
Different approach but still O(n) time and memory. Start with Neil's suggestion, treat 0 as -1.
Notation: A[0, …, N-1] - your array of size N, f(0)=0, f(x)=A[x-1]+f(x-1) - a function
If you plot f, you'll see that what you are looking for are points for which f(m)=f(n), m=n-2k where k is a positive natural number. More precisely, only for x such that A[x]!=A[x+1] (and for the last element in the array) must you check whether f(x) has already occurred. Unfortunately, for now I see no improvement over having an array B[-N+1…N-1] where such information would be stored.
To complete my thought: B[x]=-1 initially, and B[x]=p where p = min k: f(k)=x. And the algorithm is (double-check it, as I'm very tired):
fx = 0
B = new array[-N+1, …, N-1]
maxlen = 0
B[0] = 0
for i = 1…N-1:
    fx = fx + A[i-1]
    if B[fx] == -1:
        B[fx] = i
    else if ((i == N-1) or (A[i-1] != A[i])) and (maxlen < i - B[fx]):
        We found that A[B[fx], …, i] is better than what we found so far
        maxlen = i - B[fx]
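For reference, a runnable Python sketch of the idea above (my own translation; a dict stands in for the array B, and a 0 in the input is treated as -1 as suggested):
def longest_balanced_prefix(A):
    first_seen = {0: 0}                 # value of the running sum f -> earliest prefix length
    best_start, best_len = 0, 0
    fx = 0
    for i, x in enumerate(A):
        fx += 1 if x == 1 else -1       # treat 0 as -1
        if fx in first_seen:
            if i + 1 - first_seen[fx] > best_len:
                best_len = i + 1 - first_seen[fx]
                best_start = first_seen[fx]
        else:
            first_seen[fx] = i + 1
    return best_start, best_len         # longest balanced subarray is A[best_start : best_start + best_len]

A = [1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]
print(longest_balanced_prefix(A))       # (2, 14)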
Edit: Two bed-thoughts (= figured out while lying in bed :P ):
1) You could binary search the result by the length of the subarray, which would give an O(n log n) time and O(1) memory algorithm. Let's use the function g(x) = x - x mod 2 (because subarrays which sum to 0 are always of even length). Start by checking if the whole array sums to 0. If yes -- we're done, otherwise continue. We now assume 0 as the starting point (we know there's a subarray of that length with the "summing-to-zero property") and g(N-1) as the ending point (we know there's no such subarray). Let's do
a = 0
b = g(N-1)
while a < b:
    c = g((a+b)/2)
    check if there is such a subarray in O(n) time
    if yes:
        a = c
    if no:
        b = c
return the result: a (length of maximum subarray)
Checking for a subarray with the "summing-to-zero property" of some given length L is simple:
a = 0
b = L
fa = fb = 0
for i = 0…L-1:
    fb = fb + A[i]
while (fa != fb) and (b < N):
    fa = fa + A[a]
    fb = fb + A[b]
    a = a + 1
    b = b + 1
if b == N:
    not found
else:
    found, starts at a and stops at b
2) …can you modify the input array? If yes, and if O(1) memory means exactly that you use no additional space (except for a constant number of elements), then just store your prefix table values in your input array. No more space used (except for some variables) :D
And again, double check my algorithms as I'm veeery tired and could've done off-by-one errors.
Like Neil, I find it useful to consider the alphabet {±1} instead of {0, 1}. Assume without loss of generality that there are at least as many +1s as -1s. The following algorithm, which uses O(sqrt(n log n)) bits and runs in time O(n), is due to "A.F."
Note: this solution does not cheat by assuming the input is modifiable and/or has wasted bits. As of this edit, this solution is the only one posted that is both O(n) time and o(n) space.
An easier version, which uses O(n) bits, streams the array of prefix sums and marks the first occurrence of each value. It then scans backward, considering for each height between 0 and sum(arr) the maximal subarray at that height. Some thought reveals that the optimum is among these (remember the assumption). In Python:
sum = 0
min_so_far = 0
max_so_far = 0
is_first = [True] * (1 + len(arr))
for i, x in enumerate(arr):
    sum += x
    if sum < min_so_far:
        min_so_far = sum
    elif sum > max_so_far:
        max_so_far = sum
    else:
        is_first[1 + i] = False

sum_i = 0
i = 0
while sum_i != sum:
    sum_i += arr[i]
    i += 1
sum_j = sum
j = len(arr)
longest = j - i
for h in xrange(sum - 1, -1, -1):
    while sum_i != h or not is_first[i]:
        i -= 1
        sum_i -= arr[i]
    while sum_j != h:
        j -= 1
        sum_j -= arr[j]
    longest = max(longest, j - i)
The trick to get the space down comes from noticing that we're scanning is_first sequentially, albeit in reverse order relative to its construction. Since the loop variables fit in O(log n) bits, we'll compute, instead of is_first, a checkpoint of the loop variables after each O(√(n log n)) steps. This is O(n/√(n log n)) = O(√(n/log n)) checkpoints, for a total of O(√(n log n)) bits. By restarting the loop from a checkpoint, we compute on demand each O(√(n log n))-bit section of is_first.
(P.S.: it may or may not be my fault that the problem statement asks for O(1) space. I sincerely apologize if it was I who pulled a Fermat and suggested that I had a solution to a problem much harder than I thought it was.)
If indeed your algorithm is valid in all cases (see my comment to your question noting some corrections to it), notice that the prefix array is the only obstruction to your constant memory goal.
Examining the find function reveals that this array can be replaced with two integers, thereby eliminating the dependence on the length of the input and solving your problem. Consider the following:
You only depend on two values in the prefix array in the find function. These are a[start - 1] and a[end]. Yes, start and end change, but does this merit the array?
Look at the progression of your loop. At the end, start is incremented or end is decremented only by one.
Considering the previous statement, if you were to replace the value of a[start - 1] by an integer, how would you update its value? Put another way, for each transition in the loop that changes the value of start, what could you do to update the integer accordingly to reflect the new value of a[start - 1]?
Can this process be repeated with a[end]?
If, in fact, the values of a[start - 1] and a[end] can be reflected with two integers, doesn't the whole prefix array no longer serve a purpose? Can't it therefore be removed?
With no need for the prefix array and all storage dependencies on the length of the input removed, your algorithm will use a constant amount of memory to achieve its goal, thereby making it O(n) time and O(1) space.
I would prefer you solve this yourself based on the insights above, as this is homework. Nevertheless, I have included a solution below for reference:
#include <iostream>
#include <cstdio>
using namespace std;

void find( int *data, int &start, int &end )
{
    // reflects the prefix sum until start - 1
    int sumStart = 0;
    // reflects the prefix sum until end
    int sumEnd = 0;
    for( int i = start; i <= end; i++ )
        sumEnd += data[i];

    while( start < end )
    {
        int length = end - start + 1;
        int sum = 2 * ( sumEnd - sumStart );
        if( sum == length )
            break;
        else if( sum < length )
        {
            // sum needs to increase; get rid of the lower endpoint
            if( data[ start ] == 0 && data[ end ] == 1 )
            {
                // sumStart must be updated to reflect the new prefix sum
                sumStart += data[ start ];
                start++;
            }
            else
            {
                // sumEnd must be updated to reflect the new prefix sum
                sumEnd -= data[ end ];
                end--;
            }
        }
        else
        {
            // sum needs to decrease; get rid of the higher endpoint
            if( data[ start ] == 1 && data[ end ] == 0 )
            {
                // sumStart must be updated to reflect the new prefix sum
                sumStart += data[ start ];
                start++;
            }
            else
            {
                // sumEnd must be updated to reflect the new prefix sum
                sumEnd -= data[ end ];
                end--;
            }
        }
    }
}

int main() {
    int length;
    cin >> length;

    // get the data
    int data[length];
    for( int i = 0; i < length; i++ )
        cin >> data[i];

    // solve and print the solution
    int start = 0, end = length - 1;
    find( data, start, end );
    if( start == end )
        puts( "No soln" );
    else
        printf( "%d %d\n", start, end );

    return 0;
}
This algorithm is O(n) time and O(1) space. It may modify the source array, but it restores all the information afterwards, so it does not work with const arrays. If this puzzle has several solutions, this algorithm picks the solution nearest to the array beginning. Or it might be modified to provide all solutions.
Algorithm
Variables:
p1 - subarray start
p2 - subarray end
d - difference of 1s and 0s in the subarray
1. Calculate d; if d == 0, stop. If d < 0, invert the array and, after the balanced subarray is found, invert it back.
2. While d > 0 advance p2: if the array element is 1, just decrement both p2 and d. Otherwise p2 should pass a subarray of the form 11*0, where * is some balanced subarray. To make backtracking possible, 11*0? is changed to 0?*00 (where ? is the value next to the subarray). Then d is decremented.
3. Store p1 and p2.
4. Backtrack p2: if the array element is 1, just increment p2. Otherwise we found an element changed in step 2. Revert the changes and pass the subarray of the form 11*0.
5. Advance p1: if the array element is 1, just increment p1. Otherwise p1 should pass a subarray of the form 0*11.
6. Store p1 and p2, if p2 - p1 improved.
7. If p2 is at the end of the array, stop. Otherwise continue with step 4.
How does it work
The algorithm iterates through all possible positions of the balanced subarray in the input array. For each subarray position, p1 and p2 are kept as far from each other as possible, providing a locally longest subarray. The subarray with maximum length is chosen among all these subarrays.
To determine the next best position for p1, it is advanced to the first position where the balance between 1s and 0s is changed by one. (Step 5).
To determine the next best position for p2, it is advanced to the last position where the balance between 1s and 0s is changed by one. To make it possible, step 2 detects all such positions (starting from the array's end) and modifies the array in such a way, that it is possible to iterate through these positions with linear search. (Step 4).
While performing step 2, two possible conditions may be met. The simple one: when the value '1' is found, pointer p2 is just advanced to the next value; no special treatment is needed. But when the value '0' is found, the balance is going in the wrong direction, and it is necessary to pass through several bits until the correct balance is found. All these bits are of no interest to the algorithm: stopping p2 there would give either a balanced subarray that is too short, or an unbalanced subarray. As a result, p2 should pass a subarray of the form 11*0 (from right to left; * means any balanced subarray). There is no chance to go the same way in the other direction, but it is possible to temporarily use some bits from the pattern 11*0 to allow backtracking. If we change the first '1' to '0', the second '1' to the value next to the rightmost '0', and clear the value next to the rightmost '0': 11*0? -> 0?*00, then we get the possibility to (first) notice the pattern on the way back, since it starts with '0', and (second) find the next good position for p2.
C++ code:
#include <cstddef>
#include <bitset>

static const size_t N = 270;

void findLargestBalanced(std::bitset<N>& a, size_t& p1s, size_t& p2s)
{
    // Step 1
    size_t p1 = 0;
    size_t p2 = N;
    int d = 2 * a.count() - N;
    bool flip = false;

    if (d == 0) {
        p1s = 0;
        p2s = N;
        return;
    }

    if (d < 0) {
        flip = true;
        d = -d;
        a.flip();
    }

    // Step 2
    bool next = true;
    while (d > 0) {
        if (p2 < N) {
            next = a[p2];
        }
        --d;
        --p2;

        if (a[p2] == false) {
            if (p2+1 < N) {
                a[p2+1] = false;
            }
            int dd = 2;
            while (dd > 0) {
                dd += (a[--p2]? -1: 1);
            }
            a[p2+1] = next;
            a[p2] = false;
        }
    }

    // Step 3
    p2s = p2;
    p1s = p1;

    do {
        // Step 4
        if (a[p2] == false) {
            a[p2++] = true;
            bool nextToRestore = a[p2];
            a[p2++] = true;
            int dd = 2;
            while (dd > 0 && p2 < N) {
                dd += (a[p2++]? 1: -1);
            }
            if (dd == 0) {
                a[--p2] = nextToRestore;
            }
        }
        else {
            ++p2;
        }

        // Step 5
        if (a[p1++] == false) {
            int dd = 2;
            while (dd > 0) {
                dd += (a[p1++]? -1: 1);
            }
        }

        // Step 6
        if (p2 - p1 > p2s - p1s) {
            p2s = p2;
            p1s = p1;
        }
    } while (p2 < N);

    if (flip) {
        a.flip();
    }
}
Sum all elements in the array, then diff = (array.length - sum) will be the difference in number of 0s and 1s.
If diff is equal to array.length/2, then the maximum subarray = array.
If diff is less than array.length/2 then there are more 1s than 0s.
If diff is greater than array.length/2 then there are more 0s than 1s.
For cases 2 & 3, initialize two pointers, start & end pointing to beginning and end of array. If we have more 1s, then move the pointers inward (start++ or end--) based on whether array[start] = 1 or array[end] = 1, and update sum accordingly. At each step check if sum = (end - start) / 2. If this condition is true, then start and end represent the bounds of your maximum subarray.
Here we end up doing two passes of the array: once to calculate the sum, and once while moving the pointers inward. And we are using constant space, as we just need to store the sum and two index values.
If anyone wants to knock up some pseudocode, you're more than welcome :)
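Taking up that invitation, here is a rough Python sketch of the approach described above (my own interpretation; the Java and Python answers further down follow the same shrink-from-both-ends idea):
def balanced_bounds(arr):
    ones = sum(arr)
    zeros = len(arr) - ones
    start, end = 0, len(arr) - 1
    # Drop an element of the majority value from one of the two ends
    # until the window contains equally many 0s and 1s.
    while start < end and ones != zeros:
        if ones > zeros:
            if arr[start] == 1:
                start += 1; ones -= 1
            elif arr[end] == 1:
                end -= 1; ones -= 1
            else:                      # both ends are 0: dropping a 0 is the only move left
                start += 1; zeros -= 1
        else:
            if arr[start] == 0:
                start += 1; zeros -= 1
            elif arr[end] == 0:
                end -= 1; zeros -= 1
            else:
                start += 1; ones -= 1
    return (start, end) if ones == zeros and ones > 0 else None

print(balanced_bounds([1, 0, 1, 1, 1, 0, 0, 0, 1]))   # (1, 8)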
Here's an ActionScript solution that looked like it scales as O(n), though it might be more like O(n log n). It definitely uses only O(1) memory.
Warning: I haven't checked how complete it is. I could be missing some cases.
protected function findLongest(array:Array, start:int = 0, end:int = -1):int {
    if (end < start) {
        end = array.length-1;
    }
    var startDiff:int = 0;
    var endDiff:int = 0;
    var diff:int = 0;
    var length:int = end-start;

    for (var i:int = 0; i <= length; i++) {
        if (array[i+start] == '1') {
            startDiff++;
        } else {
            startDiff--;
        }
        if (array[end-i] == '1') {
            endDiff++;
        } else {
            endDiff--;
        }
        //We can stop when there's no chance of equalizing anymore.
        if (Math.abs(startDiff) > length - i) {
            diff = endDiff;
            start = end - i;
            break;
        } else if (Math.abs(endDiff) > length - i) {
            diff = startDiff;
            end = i+start;
            break;
        }
    }

    var bit:String = diff > 0 ? '1': '0';
    var diffAdjustment:int = diff > 0 ? -1: 1;

    //Strip off the bad vars off the ends.
    while (diff != 0 && array[start] == bit) {
        start++;
        diff += diffAdjustment;
    }
    while (diff != 0 && array[end] == bit) {
        end--;
        diff += diffAdjustment;
    }

    //If we have equalized, end. Otherwise recurse within the sub-array.
    if (diff == 0)
        return end-start+1;
    else
        return findLongest(array, start, end);
}
I would argue that it is impossible for an algorithm with O(1) space to exist, in the following way. Assume you iterate ONCE over every bit. This requires a counter, which needs O(log n) space. Possibly one could argue that n itself is part of the problem instance; then you have, as input length for a binary string of length k, k + log₂ k. However you look at it, you need an additional variable, for instance an index into that array, and that already makes it non-O(1).
Usually you don't have this problem, because for a problem of size n you have an input of n numbers of size log k, which adds up to n·log k. There a variable of length log k is just O(1). But here our log k is just 1, so we can only introduce a helper variable of constant length (and I mean really constant: it must be bounded regardless of how big n is).
Here a problem with the description of the problem becomes visible. In the theory of computation you have to be very careful about your encoding. E.g. you can make NP problems polynomial if you switch to unary encoding (because then the input size is exponentially bigger than in an n-ary (n > 1) encoding).
Since for n alone the input has just the size log₂ n, one must be careful: when you speak in this case of O(n), this is really an algorithm that is O(2^n). (This is not a point we need to argue about, because one can debate whether n itself is part of the description or not.)
I have this algorithm running in O(n) time and O(1) space.
It makes use of a simple "shrink-then-expand" trick; comments are in the code.
public static void longestSubArrayWithSameZerosAndOnes() {
    // You are given an array of 1's and 0's only.
    // Find the longest subarray which contains equal number of 1's and 0's
    int[] A = new int[] {1, 0, 1, 1, 1, 0, 0, 0, 1};

    int num0 = 0, num1 = 0;
    // First, calculate how many 0s and 1s in the array
    for(int i = 0; i < A.length; i++) {
        if(A[i] == 0) {
            num0++;
        }
        else {
            num1++;
        }
    }
    if(num0 == 0 || num1 == 0) {
        System.out.println("The length of the sub-array is 0");
        return;
    }

    // Second, check the array to find a continuous "block" that has
    // the same number of 0s and 1s, starting from the HEAD and the
    // TAIL of the array, and moving the 2 "pointers" (HEAD and TAIL)
    // towards the CENTER of the array
    int start = 0, end = A.length - 1;
    while(num0 != num1 && start < end) {
        if(num1 > num0) {
            if(A[start] == 1) {
                num1--; start++;
            }
            else if(A[end] == 1) {
                num1--; end--;
            }
            else {
                num0--; start++;
                num0--; end--;
            }
        }
        else if(num1 < num0) {
            if(A[start] == 0) {
                num0--; start++;
            }
            else if(A[end] == 0) {
                num0--; end--;
            }
            else {
                num1--; start++;
                num1--; end--;
            }
        }
    }
    if(num0 == 0 || num1 == 0) {
        start = end;
        end++;
    }

    // Third, expand the continuous "block" just found at step #2 by
    // moving "HEAD" to the head of the array and "TAIL" to the end of
    // the array, while still keeping the "block" balanced (containing
    // the same number of 0s and 1s)
    while(0 < start && end < A.length - 1) {
        if(A[start - 1] == 0 && A[end + 1] == 0 || A[start - 1] == 1 && A[end + 1] == 1) {
            break;
        }
        start--;
        end++;
    }
    System.out.println("The length of the sub-array is " + (end - start + 1) + ", starting from #" + start + " to #" + end);
}
Linear time, constant space. Let me know if there is any bug I missed.
Tested in Python 3.
def longestBalancedSubarray(A):
    lo, hi = 0, len(A) - 1
    ones = sum(A); zeros = len(A) - ones
    while lo < hi:
        if ones == zeros: break
        if ones > zeros:
            if A[lo] == 1: lo += 1; ones -= 1
            elif A[hi] == 1: hi -= 1; ones -= 1
            else: lo += 1; zeros -= 1
        else:
            if A[lo] == 0: lo += 1; zeros -= 1
            elif A[hi] == 0: hi -= 1; zeros -= 1
            else: lo += 1; ones -= 1
    return A[lo:hi+1]

Find the first element in a sorted array that is greater than the target

In a general binary search, we are looking for a value which appears in the array. Sometimes, however, we need to find the first element which is either greater or less than a target.
Here is my ugly, incomplete solution:
// Assume all elements are positive, i.e., greater than zero
int bs (int[] a, int t) {
    int s = 0, e = a.length;
    int firstlarge = 1 << 30;
    int firstlargeindex = -1;
    while (s < e) {
        int m = (s + e) / 2;
        if (a[m] > t) {
            // how can I know a[m] is the first larger than
            if (a[m] < firstlarge) {
                firstlarge = a[m];
                firstlargeindex = m;
            }
            e = m - 1;
        } else if (a[m] < /* something */) {
            // go to the right part
            // how can i know is the first less than
        }
    }
}
Is there a more elegant solution for this kind of problem?
One way of thinking about this problem is to think about doing a binary search over a transformed version of the array, where the array has been modified by applying the function
f(x) = 1 if x > target
0 else
Now, the goal is to find the very first place that this function takes on the value 1. We can do that using a binary search as follows:
int low = 0, high = numElems; // numElems is the size of the array i.e arr.size()
while (low != high) {
    int mid = (low + high) / 2; // Or a fancy way to avoid int overflow
    if (arr[mid] <= target) {
        /* This index, and everything below it, must not be the first element
         * greater than what we're looking for because this element is no greater
         * than the element.
         */
        low = mid + 1;
    }
    else {
        /* This element is at least as large as the element, so anything after it can't
         * be the first element that's at least as large.
         */
        high = mid;
    }
}
/* Now, low and high both point to the element in question. */
To see that this algorithm is correct, consider each comparison being made. If we find an element that's no greater than the target element, then it and everything below it can't possibly match, so there's no need to search that region. We can recursively search the right half. If we find an element that is larger than the element in question, then anything after it must also be larger, so they can't be the first element that's bigger and so we don't need to search them. The middle element is thus the last possible place it could be.
Note that on each iteration we drop off at least half the remaining elements from consideration. If the top branch executes, then the elements in the range [low, (low + high) / 2] are all discarded, causing us to lose floor((low + high) / 2) - low + 1 >= (low + high) / 2 - low = (high - low) / 2 elements.
If the bottom branch executes, then the elements in the range [(low + high) / 2 + 1, high] are all discarded. This loses us high - floor(low + high) / 2 + 1 >= high - (low + high) / 2 = (high - low) / 2 elements.
Consequently, we'll end up finding the first element greater than the target in O(lg n) iterations of this process.
Here's a trace of the algorithm running on the array 0 0 1 1 1 1.
Initially, we have
0 0 1 1 1 1
L = 0 H = 6
So we compute mid = (0 + 6) / 2 = 3, so we inspect the element at position 3, which has value 1. Since 1 > 0, we set high = mid = 3. We now have
0 0 1
L H
We compute mid = (0 + 3) / 2 = 1, so we inspect element 1. Since this has value 0 <= 0, we set low = mid + 1 = 2. We're now left with L = 2 and H = 3:
0 0 1
L H
Now, we compute mid = (2 + 3) / 2 = 2. The element at index 2 is 1, and since 1 > 0, we set H = mid = 2, at which point we stop, and indeed we're looking at the first element greater than 0.
You can use std::upper_bound if the array is sorted (assuming n is the size of array a[]):
int* p = std::upper_bound( a, a + n, x );
if( p == a + n )
    std::cout << "No element greater";
else
    std::cout << "The first element greater is " << *p
              << " at position " << p - a;
After many years of teaching algorithms, my approach for solving binary search problems is to set the start and the end on the elements, not outside of the array. This way I can feel what's going on and everything is under control, without feeling magic about the solution.
The key point in solving binary search problems (and many other loop-based solutions) is a set of good invariants. Choosing the right invariant makes problem-solving a piece of cake. It took me many years to grasp the invariant concept, although I had first learned it in college many years ago.
Even if you want to solve binary search problems by choosing start or end outside of the array, you can still achieve it with a proper invariant. That being said, my choice is stated above to always set a start on the first element and end on the last element of the array.
So to summarize, so far we have:
int start = 0;
int end = a.length - 1;
Now the invariant. The array right now we have is [start, end]. We don't know anything yet about the elements. All of them might be greater than the target, or all might be smaller, or some smaller and some larger. So we can't make any assumptions so far about the elements. Our goal is to find the first element greater than the target. So we choose the invariants like this:
Any element to the right of the end is greater than the target. Any
element to the left of the start is smaller than or equal to the
target.
We can easily see that our invariant is correct at the start (ie before going into any loop). All the elements to the left of the start (no elements basically) are smaller than or equal to the target, same reasoning for the end.
With this invariant, when the loop finishes, the first element after the end will be the answer (remember the invariant that the right side of the end are all greater than the target?). So answer = end + 1.
Also, we need to note that when the loop finishes, the start will be one more than the end. ie start = end + 1. So equivalently we can say start is the answer as well (invariant was that anything to the left of the start is smaller than or equal to the target, so start itself is the first element larger than the target).
So everything being said, here is the code.
public static int find(int a[], int target) {
    int st = 0;
    int end = a.length - 1;
    while(st <= end) {
        int mid = (st + end) / 2; // or elegant way of st + (end - st) / 2;
        if (a[mid] <= target) {
            st = mid + 1;
        } else { // a[mid] > target
            end = mid - 1;
        }
    }
    return st; // or return end + 1
}
A few extra notes about this way of solving binary search problems:
This type of solution always shrinks the size of the subarray by at least 1. This is obvious in the code: the new start or end is mid + 1 or mid - 1, respectively. I like this approach better than including the mid on both or one side, and then reasoning later about why the algorithm is correct. This way it's more tangible and more error-free.
The condition for the while loop is st <= end. Not st < end. That means the smallest size that enters the while loop is an array of size 1. And that totally aligns with what we expect. In other ways of solving binary search problems, sometimes the smallest size is an array of size 2 (if st < end), and honestly I find it much easier to always address all array sizes including size 1.
So hope this clarifies the solution for this problem and many other binary search problems. Treat this solution as a way to professionally understand and solve many more binary search problems without ever wobbling whether the algorithm works for edge cases or not.
How about the following recursive approach:
public static int minElementGreaterThanOrEqualToKey(int A[], int key,
                                                    int imin, int imax) {
    // Return -1 if the maximum value is less than the minimum or if the key
    // is greater than the maximum
    if (imax < imin || key > A[imax])
        return -1;

    // Return the first element of the array if that element is greater than
    // or equal to the key.
    if (key < A[imin])
        return imin;

    // When the minimum and maximum values become equal, we have located the element.
    if (imax == imin)
        return imax;
    else {
        // calculate midpoint to cut set in half, avoiding integer overflow
        int imid = imin + ((imax - imin) / 2);

        // if key is in upper subset, then recursively search in that subset
        if (A[imid] < key)
            return minElementGreaterThanOrEqualToKey(A, key, imid + 1, imax);
        // if key is in lower subset, then recursively search in that subset
        else
            return minElementGreaterThanOrEqualToKey(A, key, imin, imid);
    }
}
public static int search(int target, int[] arr) {
    if (arr == null || arr.length == 0)
        return -1;

    int lower = 0, higher = arr.length - 1, last = -1;
    while (lower <= higher) {
        int mid = lower + (higher - lower) / 2;
        if (target == arr[mid]) {
            last = mid;
            lower = mid + 1;
        } else if (target < arr[mid]) {
            higher = mid - 1;
        } else {
            lower = mid + 1;
        }
    }
    return (last > -1 && last < arr.length - 1) ? last + 1 : -1;
}
If we find target == arr[mid], then any previous element is either less than or equal to the target; hence, the lower boundary is set as lower = mid + 1. Also, last is the last index of the target. Finally, we return last + 1, taking care of boundary conditions.
My implementation uses condition bottom <= top which is different from the answer by templatetypedef.
int FirstElementGreaterThan(int n, const vector<int>& values) {
    int B = 0, T = values.size() - 1, M = 0;
    while (B <= T) { // B strictly increases, T strictly decreases
        M = B + (T - B) / 2;
        if (values[M] <= n) { // all values at or before M are not the target
            B = M + 1;
        } else {
            T = M - 1; // search for other elements before M
        }
    }
    return T + 1;
}
Here is a modified binary search code in Java with time complexity O(log n) that:
returns the index of the element to be searched if the element is present;
returns the index of the next greater element if the searched element is not present in the array;
returns -1 if an element greater than the largest element of the array is searched.
public static int search(int arr[], int key) {
    int low = 0, high = arr.length, mid = -1;
    boolean flag = false;
    while(low < high) {
        mid = (low + high) / 2;
        if(arr[mid] == key) {
            flag = true;
            break;
        } else if(arr[mid] < key) {
            low = mid + 1;
        } else {
            high = mid;
        }
    }
    if(flag) {
        return mid;
    }
    else {
        if(low >= arr.length)
            return -1;
        else
            return low;
        //high will give next smaller
    }
}

public static void main(String args[]) throws IOException {
    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    //int n = Integer.parseInt(br.readLine());
    int arr[] = {12, 15, 54, 221, 712};
    int key = 71;
    System.out.println(search(arr, key));
    br.close();
}
kind = 0: exact match
kind = 1: just greater than x
kind = -1: just smaller than x
It returns -1 if no match is found.
#include <iostream>
#include <algorithm>
using namespace std;

int g(int arr[], int l, int r, int x, int kind){
    switch(kind){
        case 0: // for exact match
            if(arr[l] == x) return l;
            else if(arr[r] == x) return r;
            else return -1;
            break;
        case 1: // for just greater than x
            if(arr[l] >= x) return l;
            else if(arr[r] >= x) return r;
            else return -1;
            break;
        case -1: // for just smaller than x
            if(arr[r] <= x) return r;
            else if(arr[l] <= x) return l;
            else return -1;
            break;
        default:
            cout << "please give kind as 0, -1, 1 only" << endl;
            return -1;
    }
}

int f(int arr[], int n, int l, int r, int x, int kind){
    if(l == r) return l;
    if(l > r) return -1;
    int m = l + (r - l) / 2;
    while(m > l){
        if(arr[m] == x) return m;
        if(arr[m] > x) r = m;
        if(arr[m] < x) l = m;
        m = l + (r - l) / 2;
    }
    int pos = g(arr, l, r, x, kind);
    return pos;
}

int main()
{
    int arr[] = {1, 2, 3, 5, 8, 14, 22, 44, 55};
    int n = sizeof(arr) / sizeof(arr[0]);
    sort(arr, arr + n);

    int tcs;
    cin >> tcs;
    while(tcs--){
        int l = 0, r = n - 1, x = 88, kind = -1; // you can modify these values
        cin >> x;
        int pos = f(arr, n, l, r, x, kind);
        // kind = 0: exact match, kind = 1: just greater than x, kind = -1: just smaller than x
        cout << "position " << pos << " Value ";
        if(pos >= 0) cout << arr[pos];
        cout << endl;
    }
    return 0;
}

Project Euler 7 Scala Problem

I was trying to solve Project Euler problem number 7 using Scala 2.8.
The first solution I implemented takes ~8 seconds:
def problem_7:Int = {
  var num = 17;
  var primes = new ArrayBuffer[Int]();
  primes += 2
  primes += 3
  primes += 5
  primes += 7
  primes += 11
  primes += 13
  while (primes.size < 10001){
    if (isPrime(num, primes)) primes += num
    if (isPrime(num+2, primes)) primes += num+2
    num += 6
  }
  return primes.last;
}

def isPrime(num:Int, primes:ArrayBuffer[Int]):Boolean = {
  // if n == 2 return false;
  // if n == 3 return false;
  var r = Math.sqrt(num)
  for (i <- primes){
    if(i <= r){
      if (num % i == 0) return false;
    }
  }
  return true;
}
Later I tried the same problem without storing prime numbers in an array buffer. This takes 0.118 seconds:
def problem_7_alt:Int = {
  var limit = 10001;
  var count = 6;
  var num:Int = 17;
  while(count < limit){
    if (isPrime2(num)) count += 1;
    if (isPrime2(num+2)) count += 1;
    num += 6;
  }
  return num;
}

def isPrime2(n:Int):Boolean = {
  // if n == 2 return false;
  // if n == 3 return false;
  var r = Math.sqrt(n)
  var f = 5;
  while (f <= r){
    if (n % f == 0) {
      return false;
    } else if (n % (f+2) == 0) {
      return false;
    }
    f += 6;
  }
  return true;
}
I tried using various mutable array/list implementations in Scala but was not able to make solution one faster. I do not think that storing an Int in an array of size 10001 can make the program slow. Is there a better way to use lists/arrays in Scala?
The problem here is that ArrayBuffer is parameterized, so what it really stores are references to Object. Any reference to an Int is automatically boxed and unboxed as needed, which makes it very slow. It is incredibly slow with Scala 2.7, which uses a Java primitive to do that; Scala 2.8 takes another approach, making it faster. But any boxing/unboxing will slow you down. Furthermore, you are first looking up the ArrayBuffer in the heap, and then looking up the java.lang.Integer containing the Int -- two memory accesses, which makes it way slower than your other solution.
When Scala collections become specialized, it should be plenty faster. Whether it should be enough to beat your second version or not, I don't know.
Now, what you may do to get around that is to use Array instead. Because Java's arrays are not erased, you avoid the boxing/unboxing.
Also, when you use for-comprehensions, your code is effectively stored in a method which is called for each element. So you are also making many method calls, which is another reason this is slower. Alas, someone wrote a plugin for Scala which optimizes at least one case of for-comprehensions to avoid that.
Using Array should make it work in about zero seconds with the right algorithm. This, for example, takes about 7 milliseconds on my system:
class Primes(bufsize: Int) {
  var n = 1
  val pbuf = new Array[Int](bufsize max 1)
  pbuf(0) = 2

  def isPrime(num: Int): Boolean = {
    var i = 0
    while (i < n && pbuf(i)*pbuf(i) <= num) {
      if (num % pbuf(i) == 0) return false
      i += 1
    }
    if (pbuf(i)*pbuf(i) < num) {
      i = pbuf(i)
      while (i*i <= num) {
        if (num % i == 0) return false
        i += 2
      }
    }
    return true;
  }

  def fillBuf {
    var i = 3
    n = 1
    while (n < bufsize) {
      if (isPrime(i)) { pbuf(n) = i; n += 1 }
      i += 2
    }
  }

  def lastPrime = { if (n < bufsize) fillBuf ; pbuf(pbuf.length-1) }
}

object Primes {
  def timedGet(num: Int) = {
    val t0 = System.nanoTime
    val p = (new Primes(num)).lastPrime
    val t1 = System.nanoTime
    (p, (t1-t0)*1e-9)
  }
}
Result (on second call; first has some overhead):
scala> Primes.timedGet(10001)
res1: (Int, Double) = (104743,0.00683394)
I think you have to think outside the box :)
Because the problem size is manageable, you can use the Sieve of Eratosthenes to solve it very efficiently.
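The sieve idea is language-agnostic; a small Python sketch for illustration (my own code, not Scala; the bound of 120000 is an arbitrary choice that is comfortably above the 10001st prime, 104743, reported in the answer above):
def nth_prime(n, limit=120000):
    # Sieve of Eratosthenes: is_prime[i] is 1 iff i is prime, for i < limit.
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i*i::i] = bytearray(len(is_prime[i*i::i]))
    count = 0
    for i in range(2, limit):
        if is_prime[i]:
            count += 1
            if count == n:
                return i
    raise ValueError("limit too small for the requested prime")

print(nth_prime(10001))  # 104743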
Here's a recursive solution (using the isPrime function from your first solution). It seems to be good Scala style to prefer immutability (i.e. to try not to use vars) so I've done that here (in fact there are no vars or vals!). I don't have a Scala installation here though so can't tell if this is actually any quicker!
def problem_7:Int = {
  def isPrime_(n: Int) = (n % 6 == 1 || n % 6 == 5) && isPrime(n)
  def process(n: Int, acc: List[Int]): Int = {
    if (acc.size == 10001) acc.head
    else process(n+1, if (isPrime_(n)) n :: acc else acc)
  }
  process(1, Nil)
}

The Most Efficient Algorithm to Find First Prefix-Match From a Sorted String Array?

Input:
1) A huge sorted array of string SA;
2) A prefix string P;
Output:
The index of the first string matching the input prefix if any.
If there is no such match, then output will be -1.
Example:
SA = {"ab", "abd", "abdf", "abz"}
P = "abd"
The output should be 1 (index starting from 0).
What's the most efficient algorithmic way to do this kind of job?
If you only want to do this once, use binary search; if, on the other hand, you need to do it for many different prefixes but on the same string array, building a radix tree can be a good idea: after you've built the tree, each lookup will be very fast.
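For illustration, a toy Python sketch of the tree idea (a plain, uncompressed trie rather than a true radix tree; the node layout and names are my own):
def build_trie(words):
    # Each node maps a character to a child node; "_min" records the smallest
    # index of any word that passes through that node.
    root = {}
    for idx, word in enumerate(words):          # words is the sorted array SA
        node = root
        for ch in word:
            node = node.setdefault(ch, {"_min": idx})
            node["_min"] = min(node["_min"], idx)
    return root

def trie_first_match(root, prefix):
    node = root
    for ch in prefix:
        if ch not in node:
            return -1
        node = node[ch]
    return node["_min"]

sa = ["ab", "abd", "abdf", "abz"]
trie = build_trie(sa)
print(trie_first_match(trie, "abd"))   # 1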
This is just a modified bisection search:
Only check as many characters in each element as are in the search string; and
If you find a match, keep searching backwards (either linearly or by further bisection searches) until you find a non-matching result and then return the index of the last matching result.
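A minimal Python sketch of this modified bisection, for reference (my own code; it truncates each element to the prefix length before comparing, and "searches backwards" by continuing the bisection to the left):
def first_prefix_match(sa, p):
    lo, hi = 0, len(sa) - 1
    found = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        head = sa[mid][:len(p)]      # only compare as many characters as p has
        if head < p:
            lo = mid + 1
        elif head > p:
            hi = mid - 1
        else:
            found = mid              # a match; keep bisecting to the left
            hi = mid - 1
    return found

print(first_prefix_match(["ab", "abd", "abdf", "abz"], "abd"))   # 1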
It can be done in linear time using a Suffix Tree. Building the suffix tree takes linear time.
The FreeBSD kernel uses a radix tree for its routing table; you should check that out.
Here is a possible solution (in Python), which has O(k·log(n)) time complexity and O(1) additional space complexity (considering n strings and a prefix of length k).
The rationale behind it is to perform a binary search which only considers a given character index of the strings. If those characters are present, continue to the next character index. If any of the prefix characters cannot be found in any string, it returns immediately.
from typing import List

def first(items: List[str], prefix: str, i: int, c: str, left: int, right: int):
    result = -1
    while left <= right:
        mid = left + ((right - left) // 2)
        if i >= len(items[mid]):
            left = mid + 1
        elif c < items[mid][i]:
            right = mid - 1
        elif c > items[mid][i]:
            left = mid + 1
        else:
            result = mid
            right = mid - 1
    return result

def last(items: List[str], prefix: str, i: int, c: str, left: int, right: int):
    result = -1
    while left <= right:
        mid = left + ((right - left) // 2)
        if i >= len(items[mid]):
            left = mid + 1
        elif c < items[mid][i]:
            right = mid - 1
        elif c > items[mid][i]:
            left = mid + 1
        else:
            result = mid
            left = mid + 1
    return result

def is_prefix(items: List[str], prefix: str):
    left = 0
    right = len(items) - 1
    for i in range(len(prefix)):
        c = prefix[i]
        left = first(items, prefix, i, c, left, right)
        right = last(items, prefix, i, c, left, right)
        if left == -1 or right == -1:
            return False
    return True

# Test cases
a = ['ab', 'abjsiohjd', 'abikshdiu', 'ashdi', 'abcde Aasioudhf', 'abcdefgOAJ', 'aa', 'aaap', 'aas', 'asd', 'bbbbb', 'bsadiojh', 'iod', '0asdn', 'asdjd', 'bqw', 'ba']
a.sort()
print(a)

print(is_prefix(a, 'abcdf'))
print(is_prefix(a, 'abcde'))
print(is_prefix(a, 'abcdef'))
print(is_prefix(a, 'abcdefg'))
print(is_prefix(a, 'abcdefgh'))
print(is_prefix(a, 'abcde Aa'))
print(is_prefix(a, 'iod'))
print(is_prefix(a, 'ZZZZZZiod'))
This gist is available at https://gist.github.com/lopespm/9790d60492aff25ea0960fe9ed389c0f
My current solution in mind is, instead of finding the "prefix", to try to find a "virtual prefix".
For example, if the prefix is "abd", try to find a virtual prefix "abc(255)", where (255) just represents the maximum char value. After locating "abc(255)", the next word should be the first word matching "abd", if any.
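A rough Python sketch of that idea (my own code; it assumes a non-empty prefix whose last character is not chr(0), and uses the largest Unicode code point in place of "(255)"):
import bisect

def first_prefix_match_virtual(sa, p):
    # "Virtual prefix": decrement the last character of p and append the max char,
    # e.g. "abd" -> "abc" + chr(0x10FFFF); everything matching p sorts after it.
    virtual = p[:-1] + chr(ord(p[-1]) - 1) + chr(0x10FFFF)
    i = bisect.bisect_right(sa, virtual)
    if i < len(sa) and sa[i].startswith(p):
        return i
    return -1

sa = ["ab", "abd", "abdf", "abz"]
print(first_prefix_match_virtual(sa, "abd"))   # 1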
Are you in the position to precalculate all possible prefixes?
If so, you can do that, then use a binary search to find the prefix in the precalculated table. Store the subscript to the desired value with the prefix.
My solution:
Used binary search.
private static int search(String[] words, String searchPrefix) {
    if (words == null || words.length == 0) {
        return -1;
    }
    int low = 0;
    int high = words.length - 1;
    int searchPrefixLength = searchPrefix.length();
    while (low <= high) {
        int mid = low + (high - low) / 2;
        String word = words[mid];

        int compare = -1;
        if (searchPrefixLength <= word.length()) {
            compare = word.substring(0, searchPrefixLength).compareTo(searchPrefix);
        }

        if (compare == 0) {
            return mid;
        } else if (compare > 0) {
            high = mid - 1;
        } else {
            low = mid + 1;
        }
    }
    return -1;
}

Resources