Inserting an unknown number of elements into a dynamic array in linear time

(This question is inspired by deque::insert() at index?; I was surprised that it wasn't covered in my algorithms lecture, that I didn't find it mentioned in any other question here, and that it's not even on Wikipedia :). I think it might be of general interest, so I will answer it myself ...)
Dynamic arrays are data structures that allow the addition of elements at the end in amortized constant time O(1) (by doubling the size of the allocated memory each time the array needs to grow; see Amortized time of dynamic array for a short analysis).
However, inserting a single element in the middle of the array takes linear time O(n), since in the worst case (i.e. insertion at the first position) all other elements need to be shifted by one.
If I want to insert k elements at a specific index in the array, the naive approach of performing the insert operation k times would thus lead to a complexity of O(n*k) and, if k=O(n), to a quadratic complexity of O(n²).
If I know k in advance, the solution is quite easy: expand the array if necessary (possibly reallocating space), shift the elements starting at the insertion point by k, and simply copy the new elements, as sketched below.
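A minimal Python sketch of this known-k case (the helper name insert_known_k is mine; note that Python's slice assignment a[pos:pos] = new_elems already performs exactly this grow-shift-copy internally):

def insert_known_k(a, pos, new_elems):
    # known k: grow once, shift the tail right by k, copy the block into the gap
    k = len(new_elems)
    a.extend([None] * k)              # grow by k (amortized O(k))
    a[pos + k:] = a[pos:len(a) - k]   # shift the tail right by k
    a[pos:pos + k] = new_elems        # copy the new elements into the gap
    return a

For example, insert_known_k([1, 2, 3, 4], 1, [9, 9]) gives [1, 9, 9, 2, 3, 4].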
But there might be situations where I do not know the number of elements I want to insert in advance: for example, I might get the elements from a stream-like interface, so I only get a flag when the last element has been read.
Is there a way to insert multiple (k) elements, where k is not known in advance, into a dynamic array at consecutive positions in linear time?

In fact there is a way and it is quite simple:
First, append all k elements at the end of the array. Since appending one element takes amortized O(1) time, this will be done in O(k) time.
Second, rotate the elements into place: if you want to insert the elements at position pos, rotate the subarray A[pos..n-1] (where n is the length after appending) by k positions to the right (or by n-pos-k positions to the left, which is equivalent). Rotation can be done in linear time by use of a reverse operation, as explained in Algorithm to rotate an array in linear time. Thus the time needed for the rotation is O(n).
Therefore the total time for the algorithm is O(k)+O(n)=O(n+k). If the number of elements to be inserted is in the order of n (k=O(n)), you'll get O(n+n)=O(2n)=O(n) and thus linear time.
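A minimal Python sketch of this append-then-rotate scheme (function and helper names are mine; the rotation uses the three-reversal trick from the linked question):

def insert_stream(a, pos, stream):
    # insert all items from 'stream' (length unknown in advance) at index pos
    n = len(a)
    k = 0
    for item in stream:              # step 1: append each item, amortized O(1)
        a.append(item)
        k += 1

    def reverse(lo, hi):             # reverse a[lo:hi] in place
        while lo < hi - 1:
            a[lo], a[hi - 1] = a[hi - 1], a[lo]
            lo += 1
            hi -= 1

    reverse(pos, n)                  # step 2: rotate a[pos:] right by k
    reverse(n, n + k)                # via three reversals, O(n - pos + k)
    reverse(pos, n + k)
    return a

For example, insert_stream([1, 2, 3, 4], 1, iter([9, 8])) yields [1, 9, 8, 2, 3, 4].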

You could simply allocate a new array of length k+n and insert the desired elements linearly.
newArr = new T[k + n];
for (int i = 0; i < k + n; i++)
    newArr[i] = i <= insertionIndex     ? oldArr[i]
              : i <= insertionIndex + k ? toInsert[i - insertionIndex - 1]
              : oldArr[i - k];
return newArr;
Each iteration takes constant time, and it runs k+n times, thus O(k+n) (or, O(n) if you so like).

Related

Testing whether or not an array is distinct in O(N) time and O(1) extra space - is it possible?

So I found this purported interview question(1), which looks something like this:
Given an array of length n of integers with unknown range, find in O(n) time and O(1) extra space whether or not it contains any duplicate terms.
There are no additional conditions and restrictions given. Assume that you can modify the original array. If it helps, you can restrict the datatype of the integers to ints (the original wording was a bit ambiguous) - although try not to use a variable with 2^(2^32) bits to represent a hash map.
I know there is a solution for a similar problem, where the maximum integer in the array is restricted to n-1. I am aware that problems like
1. Count frequencies of all elements in array in O(1) extra space and O(n) time
2. Find the maximum repeating number in O(n) time and O(1) extra space
3. Algorithm to determine if array contains n…n+m?
exist and either have solutions, or answers saying that it is impossible. However, for 1. and 2. the problems are stronger than this one, and for 3. I'm fairly sure the solution offered there would require the additional n-1 constraint to be adapted for the task here.
So is there any solution to this, or is this problem unsolvable? If so, is there a proof that it is not solvable in O(n) time and O(1) extra space?
(1) I say purported - I can't confirm whether or not it is an actual interview question, so I can't confirm that anyone thought it was solvable in the first place.
We can sort integer arrays in O(N) time! Therefore, sort first and then run the well-known scan for equal adjacent elements.
bool distinct(int array[], size_t n) // returns true iff a duplicate exists
{
    if (n > 0xFFFFFFFF)
        return true; // pigeonhole: more elements than distinct 32-bit values
    else if (n > 0x7FFFFFFF)
        radix_sort(array, n); // yup, O(N) sort
    else
        heapsort(array, n); // N is small enough that heapsort's O(N log N) beats radix_sort's O(32 N) after constant adjustment
    for (size_t i = 1; i < n; i++)
        if (array[i] == array[i - 1])
            return true;
    return false;
}
You can do this in expected linear time by using the original array like a hash table...
Iterate through the array, and for each item, let item, index be the item and its index, and let hash(item) be a value in [0,n). Then:
If hash(item) == index, then just leave the item there and move on. Otherwise,
If item == array[hash(item)] then you've found a duplicate and you're all done. Otherwise,
If item < array[hash(item)] or hash(array[hash(item)]) != hash(item), then swap those and repeat with the new item at array[index]. Otherwise,
Leave the item and move on.
Now you can discard all the array elements where hash(item) == index. These are guaranteed to be the smallest items that hash to their target indexes, and they are guaranteed not to be duplicates.
Move all the remaining items to the front of the array and repeat with the new, smaller, subarray.
Each step takes O(N) time, and on average will remove some significant proportion of the remaining elements, leading to O(N) time overall. We could speed things up by taking advantage of all the free slots we're creating in the array, but that doesn't improve the overall complexity.
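Here is a rough Python sketch of that procedure (my own transcription of the steps above, not the answerer's code; the salted hash is one way to get a fresh random hash per round, and the list comprehension copies for clarity, whereas a true O(1)-space version would compact in place):

import random

def contains_duplicate(arr):
    sub = list(arr)
    while len(sub) > 1:
        m = len(sub)
        salt = random.randrange(1 << 30)       # fresh random hash each round
        hh = lambda x: hash((x, salt)) % m     # maps items into [0, m)
        i = 0
        while i < m:
            item = sub[i]
            t = hh(item)
            if t == i:
                i += 1                           # item rests in its home slot
            elif item == sub[t]:
                return True                      # collided with an equal item
            elif item < sub[t] or hh(sub[t]) != t:
                sub[i], sub[t] = sub[t], sub[i]  # evict the occupant, retry sub[i]
            else:
                i += 1                           # occupant is a smaller rightful owner
        # items in their home slot are the smallest of their bucket and
        # duplicate-free; keep only the rest and repeat on the smaller array
        sub = [x for j, x in enumerate(sub) if hh(x) != j]
    return False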

Greatest element present on the right side of every element in an array

I have been given an array (of n elements), and for each element I have to find the smallest element on its right side which is greater than it (the current element).
For example :
Array = {8,20,9,6,15,31}
Output Array = {9,31,15,15,31,-1}
Is it possible to solve this in O(n)? I thought of traversing the array from the right side (starting from n-2) and building a balanced binary search tree of the elements already seen, since searching it for the element immediately greater than the current one would be O(log n).
Hence the time complexity would come out to be O(n*log(n)).
Is there a better approach to this problem?
The problem as presented is impossible to solve in O(n) time for comparison-based algorithms, since you could reduce sorting to it and thereby sort in O(n) time.
Say there exists an algorithm which solves the problem in O(n).
Let there be an element a.
The algorithm can also be used to find the smallest element to the left of and larger than a (by reversing the array before running the algorithm).
It can also be used to find the largest element to the right (or left) of and smaller than a (by negating the elements before running the algorithm).
So, after running the algorithm four times (in linear time), you know which elements should be to the right and to the left of each element. In order to construct the sorted array in linear time, you'd need to keep the indices of the elements instead of the values. You first find the smallest element by following your "larger-than pointers" in linear time, and then make another pass in the other direction to actually build the array.
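To make the reduction concrete, here is a sketch (solver stands for the hypothetical O(n) routine, assumed here to return, for each index, the index of the smallest strictly greater element to its right, or -1; elements are assumed distinct, and two of the four runs described above already suffice for this direction):

def sort_via_solver(arr, solver):
    n = len(arr)
    right = solver(arr)                  # smallest greater element to the right
    left_rev = solver(arr[::-1])         # same, run on the reversed array
    left = [-1] * n
    for i, j in enumerate(left_rev):     # map reversed indices back
        left[n - 1 - i] = -1 if j == -1 else n - 1 - j
    # the successor of arr[i] in sorted order is the smaller of the two candidates
    succ = [-1] * n
    for i in range(n):
        cands = [j for j in (right[i], left[i]) if j != -1]
        if cands:
            succ[i] = min(cands, key=lambda j: arr[j])
    out = []
    i = min(range(n), key=lambda i: arr[i])   # overall minimum, O(n)
    while i != -1:                            # follow the successor pointers
        out.append(arr[i])
        i = succ[i]
    return out

If solver really ran in O(n), sort_via_solver would sort in O(n), contradicting the Omega(n log n) bound for comparison sorting.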
Others have proved that it is impossible in general to solve in O(n).
However, it is possible to do in O(m) where m is the size of your largest element.
This means that in certain cases (e.g. if your input array is known to be a permutation of the integers 1 up to n) it is possible to do it in O(n).
The code below shows the approach, built upon a standard method for computing the next greater element. (There is a good explanation of this method on GeeksforGeeks.)
def next_greater_element(A):
    """Return an array of indices to the next strictly greater element, -1 if none exists"""
    i = 0
    NGE = [-1] * len(A)
    stack = []
    while i < len(A) - 1:
        stack.append(i)
        while stack and A[stack[-1]] < A[i + 1]:
            x = stack.pop()
            NGE[x] = i + 1
        i += 1
    return NGE

def smallest_greater_element(A):
    """Return an array of the smallest element on the right side of each element"""
    top = max(A) + 1
    M = [-1] * top  # M will contain the index of each element, arranged by value
    for i, a in enumerate(A):
        M[a] = i
    N = next_greater_element(M)  # for each value, the next value with a higher index (-1 if none)
    return [N[a] for a in A]

A = [8, 20, 9, 6, 15, 31]
print(smallest_greater_element(A))  # [9, 31, 15, 15, 31, -1]
The idea is to find, in value order, the next element with a greater index. That element is therefore the smallest one appearing to the right.
This cannot be done in O(n) in general, since we can reduce the Element Distinctness Problem (which is known to require Omega(n log n) in the comparison-based model) to it.
First, let's expand the problem slightly, in a way that does not affect its hardness:
I have been given an array (of n elements) and I have to find the
smallest element on the right side of each element which is greater than
or equal to it (the current element).
The addition is that we allow the element to be equal to the current one (and to its right), not only strictly greater(1).
Now, given an instance arr of element distinctness, run the algorithm for this problem and check whether there is any index i such that arr[i] == res[i]. If there isn't, answer "all distinct"; otherwise answer "not all distinct".
Since Element Distinctness requires Omega(n log n) comparisons, so does this problem.
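In code, the reduction is a single pass (solver_geq stands for the hypothetical algorithm for the expanded problem):

def all_distinct(arr, solver_geq):
    # solver_geq(arr)[i]: smallest element to the right of i that is >= arr[i]
    res = solver_geq(arr)
    # a duplicate exists iff some element sees an equal element to its right
    return all(arr[i] != res[i] for i in range(len(arr)))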
(1) One possible justification why adding equality does not make the problem harder: assuming the elements are integers, we can add i/(n+1) to each element arr[i]. Now for any two elements, if arr[i] < arr[j], then also arr[i] + i/(n+1) < arr[j] + j/(n+1); and if arr[i] == arr[j] with i < j, then arr[i] + i/(n+1) < arr[j] + j/(n+1). So the same algorithm solves the problem with equalities as well.

What is the lower bound for the algorithm?

Consider an algorithm which gets an unsorted array of size n and a number k <= n, and prints the k smallest numbers in ascending order. What is the lower bound for the algorithm (for every k)?
1. Omega(n)
2. Omega(k*logn)
3. Omega(n*logk)
4. Omega(n*logn)
5. #1 and #2 are both correct.
Now, from my understanding, if we want to find a lower bound for an algorithm we need to look at the worst case. If that's so, then obviously the worst case is when k=n. We know that sorting an array is bounded below by Omega(n log n), so the right answer should be #4.
Unfortunately, I am wrong and the right answer is #5.
Why?
It can be done in O(n + k log k):
Run a selection algorithm to find the k-th smallest element - O(n)
Iterate and collect the elements lower than or equal to it - O(n)
Another iteration might be needed in case the array allows
duplicates, but it is still done in O(n)
Lastly, sort these k elements in O(k log k)
It is easy to see this solution is optimal: you cannot do better than the O(k log k) factor, because otherwise, by setting k=n, you could sort any array in better than Omega(n log n); and at least a linear scan is needed to find the elements to be printed.
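A sketch of this in Python (quickselect is written out because the standard library has no linear-time selection; expected O(n + k log k), assuming 1 <= k <= len(arr); names are mine):

import random

def k_smallest_sorted(arr, k):
    a = list(arr)

    def quickselect(lo, hi, j):
        # place the j-th smallest (0-based) of a[lo:hi] into sorted position
        while True:
            p = a[random.randrange(lo, hi)]
            lt, i, gt = lo, lo, hi             # 3-way partition around p
            while i < gt:
                if a[i] < p:
                    a[lt], a[i] = a[i], a[lt]
                    lt += 1
                    i += 1
                elif a[i] > p:
                    gt -= 1
                    a[gt], a[i] = a[i], a[gt]
                else:
                    i += 1
            if j < lt:
                hi = lt                        # answer lies in the '< p' block
            elif j >= gt:
                lo = gt                        # answer lies in the '> p' block
            else:
                return                         # a[j] == p, done

    quickselect(0, len(a), k - 1)              # expected O(n)
    return sorted(a[:k])                       # O(k log k)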
Let's try for linear time:
To find the k-th smallest element, use "Randomized-Select", which has an expected running time of O(n), and use that element as the pivot.
Use the quicksort partition step to split the array into array[i] <= K and array[i] > K, where K is the value of the k-th smallest element. This takes O(n) time.
Take the unsorted left part array[i] <= K (which has k elements) and counting-sort it, which (for non-negative integers) takes O(k + K).
Finally, the print operation takes O(k).
Total time = O(n) + O(k + K) + O(k) = O(n + k + K).
Here, K is the value of the k-th smallest element, and k is the number of elements smaller than or equal to K.

Can we find the mode of an unsorted array in O(n) time without a hashmap?

Can we find the mode of an array in O(n) time, without using additional O(n) space or a hash table, given that the data is not sorted?
The problem is not easier than the Element Distinctness Problem(1), so basically, without the additional space, the problem's complexity is Theta(n log n) at best in the comparison-based model (and since it can be done in Theta(n log n), that bound is indeed tight).
So basically, if you cannot use extra space for a hash table, the best you can do is sort and iterate, which is Theta(n log n).
(1) Given an algorithm A that runs in O(f(n)) for this problem, one can run A and then verify with one extra iteration whether the resulting element repeats more than once, thereby solving the element distinctness problem in O(f(n) + n).
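A short sketch of sort-and-iterate (assumes a non-empty list; for a rigorous O(1) extra-space bound use an in-place sort such as heapsort, since Python's built-in sort may allocate auxiliary memory):

def mode_sorted(a):
    a.sort()                          # in place; equal values become adjacent
    best, best_cnt = a[0], 1
    run_cnt, prev = 0, None
    for x in a:                       # one pass over runs of equal values
        run_cnt = run_cnt + 1 if x == prev else 1
        prev = x
        if run_cnt > best_cnt:
            best, best_cnt = x, run_cnt
    return best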
Under the right circumstances, yes. Just for example, if your data is amenable to a radix sort, then you can sort with only constant extra space in linear time, followed by a linear scan through the sorted data to find the mode.
If your data requires comparison-based sorting, then I'm pretty sure O(N log N) is about as well as you can do in the general case.
Just count the frequencies. This is not O(n) space; it is O(k), with k being the number of possible values in the range, which for a fixed integer type is actually constant space.
Time is clearly linear, O(n):
//init
counts = array[k]
for i = 0 to k - 1
    counts[i] = 0
maxCnt = 0
maxVal = vals[0]

for val in vals
    counts[val]++
    if (counts[val] > maxCnt)
        maxCnt = counts[val]
        maxVal = val
The main problem here is that while k may be a constant, it may also be very large. Then again, k could also be small. Regardless, this does properly answer your question, even if it isn't always practical.

Find the minimum number of elements required so that their sum equals or exceeds S

I know this can be done by sorting the array and taking the largest numbers until the required condition is met. That would take at least O(n log n) sorting time.
Is there any improvement over O(n log n)?
We can assume all numbers are positive.
Here is an algorithm that is O(n + size(smallest subset) * log(n)). If the smallest subset is much smaller than the array, this will be O(n).
Read http://en.wikipedia.org/wiki/Heap_%28data_structure%29 if my description of the algorithm is unclear (it is light on details, but the details are all there).
Turn the array into a heap arranged such that the biggest element is available in time O(n).
Repeatedly extract the biggest element from the heap until their sum is large enough. This takes O(size(smallest subset) * log(n)).
This is almost certainly the answer they were hoping for, though not getting it shouldn't be a deal breaker.
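A direct transcription using Python's heapq (a min-heap, so values are negated to get max-extraction; the error case is my addition for when the total sum never reaches S):

import heapq

def min_elements_to_reach(arr, S):
    heap = [-x for x in arr]            # negate to simulate a max-heap
    heapq.heapify(heap)                 # step 1: O(n)
    total, count = 0, 0
    while heap and total < S:           # step 2: take the biggest until >= S
        total += -heapq.heappop(heap)   # O(log n) per extraction
        count += 1
    if total < S:
        raise ValueError("sum of all elements is below S")
    return count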
Edit: Here is another variant that is often faster, but can be slower.
Walk through the elements until the sum of the first few reaches or exceeds S. Store this as current_sum.
Copy those elements into an array.
Heapify that array so that the minimum is easy to find, then shed any unneeded minima (while current_sum - min >= S, remove the min and subtract it from current_sum).
For each remaining element in the main array:
    if min(in our heap) < element:
        insert element into heap
        increase current_sum by element
        while current_sum - min(in our heap) >= S:
            current_sum -= min(in our heap)
            remove min from heap
If we get to reject most of the array without manipulating our heap, this can be up to twice as fast as the previous solution. But it is also possible to be slower, such as when the last element in the array happens to be bigger than S.
Assuming the numbers are integers, you can improve upon the usual n lg(n) complexity of sorting because in this case we have the extra information that the values are between 0 and S (for our purposes, integers larger than S are the same as S).
Because the range of values is finite, you can use a non-comparative sorting algorithm such as Pigeonhole Sort or Radix Sort to go below n lg(n).
Note that these methods are dependent on some function of S, so if S gets large enough (and n stays small enough) you may be better off reverting to a comparative sort.
Here is an O(n) expected time solution to the problem. It's somewhat like Moron's idea but we don't throw out the work that our selection algorithm did in each step, and we start trying from an item potentially in the middle rather than using the repeated doubling approach.
Alternatively, it's really just quickselect with a little additional bookkeeping for the remaining sum.
First, it's clear that if you had the elements in sorted order, you could just pick the largest items first until you exceed the desired sum. Our solution is going to be like that, except we'll try as hard as we can not to discover ordering information, because sorting is slow.
You want to be able to determine whether a given value is the cutoff: if we include that value and everything greater than it, we meet or exceed S, but when we remove it, we fall below S. If so, we are golden.
Here is the pseudocode; I didn't test every edge case, but it gets the idea across.
import random

def solve(arr, s):
    # Returns the minimum number of elements of arr whose sum is >= s.
    # Expected O(n); assumes positive numbers and sum(arr) >= s.
    # We could get rid of the worst-case O(n^2) behavior that basically never
    # happens by selecting the median here deterministically, but in practice,
    # the constant factor of the algorithm would be much worse.
    p = random.choice(arr)
    left_arr = [x for x in arr if x < p]
    right_arr = [x for x in arr if x > p]
    # p is in neither left_arr nor right_arr; extra copies of p count as "small"
    left_arr += [p] * (len(arr) - len(left_arr) - len(right_arr) - 1)
    right_sum = sum(right_arr)
    if right_sum + p >= s:
        if right_sum < s:
            # solved it, p forms the cutoff
            return len(right_arr) + 1
        # took too much; at least we eliminated left_arr and p
        return solve(right_arr, s)
    else:
        # didn't take enough yet: take all of right_arr and p, recurse on left_arr
        return len(right_arr) + 1 + solve(left_arr, s - right_sum - p)
One improvement (asymptotically) over Theta(nlogn) you can do is to get an O(n log K) time algorithm, where K is the required minimum number of elements.
Thus if K is constant, or say log n, this is better (asymptotically) than sorting. Of course if K is n^epsilon, then this is not better than Theta(n logn).
The way to do this is to use selection algorithms, which can tell you the ith largest element in O(n) time.
Now do a doubling (exponential) search for K: start with i=1 (just the largest element) and double i at each turn.
You find the ith largest, and find the sum of the i largest elements and check if it is greater than S or not.
This way, you would run O(log K) runs of the selection algorithm (which is O(n)) for a total running time of O(n log K).
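A sketch of the doubling-plus-selection idea (np.partition is introselect, linear on average, standing in for a worst-case linear selection; assumes positive numbers, and the final sort of the at most 2K kept elements stays within the O(n log K) budget):

import numpy as np

def min_count(arr, S):
    a = np.asarray(arr)
    n = len(a)
    i = 1
    while True:
        top = np.partition(a, n - i)[n - i:]   # the i largest elements, O(n)
        if top.sum() >= S:
            break                               # the answer K is at most i
        if i == n:
            raise ValueError("total sum is below S")
        i = min(2 * i, n)                       # O(log K) doublings in total
    top = np.sort(top)[::-1]                    # descending; i < 2K, so O(K log K)
    csum = 0
    for cnt, x in enumerate(top, 1):            # take largest first until >= S
        csum += x
        if csum >= S:
            return cnt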
If some number is >= S, then a single element suffices and we are done.
Otherwise, pigeonhole-sort the numbers (all of them are now < S).
Sum elements from highest to lowest in the sorted order until you reach or exceed S.
