Maximum sum subarray with length smaller than k in O(n) - arrays

I encountered this problem in an olympiad and I'm really struggling to find the solution, because the hint is not descriptive enough. Please help me.
Thank you!

Well, first we need an array of cumulative sums. Finding the subarray we want then reduces to finding two points, no more than k apart, whose values have the largest difference.
To solve that second problem we will need a helper array of the minimum cumulative sums within the last k positions, or more precisely a helper array that holds (value, position) tuples. Here is pseudocode:
calculate cumulative sums array
initialize best subarray to be first choice
initialize minimum helper array to have just the first choice
for each position in cumulative sums array:
    if first in helper array to here is better than best subarray:
        update best subarray
    while last in helper array >= this value:
        pop last from helper array
    insert this at end of helper array
    if first in helper array is k before this:
        remove first in helper array
This algorithm is O(n). To see that, first note that all operations on the helper array are amortized O(1). And for each position we have at most the following operations (executed slightly out of order):
add value to running total for cumulative sum array
potentially update best subarray discovered so far
be compared with some later element and removed
(that happens on a later loop, but only once for this position)
be compared with an earlier element that is smaller than this
be inserted into helper array
potentially be removed from the start of the helper array.
Every one of those operations is O(1). Since there are n positions, the whole algorithm is O(n).
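The steps above can be sketched in Python with a prefix-sum array and a monotonic deque (a sketch; the function and variable names are my own). It returns the maximum sum over non-empty subarrays of length at most k:

```python
from collections import deque

def max_sum_at_most_k(a, k):
    """Max sum of a non-empty subarray of a with length at most k."""
    n = len(a)
    prefix = [0] * (n + 1)                 # prefix[i] = sum(a[:i])
    for i, x in enumerate(a):
        prefix[i + 1] = prefix[i] + x
    dq = deque([0])                        # indices of increasing prefix values
    best = a[0]
    for j in range(1, n + 1):
        # valid start indices i satisfy j - k <= i <= j - 1
        while dq[0] < j - k:
            dq.popleft()
        best = max(best, prefix[j] - prefix[dq[0]])
        # keep the deque's prefix values strictly increasing
        while dq and prefix[dq[-1]] >= prefix[j]:
            dq.pop()
        dq.append(j)
    return best
```

Each index enters and leaves the deque at most once, which is exactly the amortized-O(1) argument above.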

Finding the median in an unsorted array (restricted to using a subroutine that finds the quartile element in linear time)

This is like any other median-selection problem for an unsorted array, but with an extra restriction: we are required to use a provided subroutine/helper function, Quart(A,p,r), that finds the element of rank n/4 in a given subarray in linear time. How can we use this helper function to find the median of an array?
Further restriction:
1. Your solution must be performed in-place (no new array can be created). In particular, one alternative solution would be to extend the array to size m, with m > 2n, so that all the entries in A[n+1, ... ,m] = 1. After this, you would be able to solve the median problem in the original array with just one call to the quartile subroutine on the extended array. With this restriction, that is not possible.
2. While running the algorithm you may temporarily change elements in the array; e.g., a SWAP changes elements. But after your algorithm concludes, all elements in the array must be the same as they were in the beginning (though, just as in the randomized selection algorithm taught in class, they may be in a different order than they were originally).
Since you are not allowed to create new arrays, this means that you are only allowed to modify a small (constant) number of items.
Do a pass through the array and find the min and max.
Call Quart to find the quartile value.
Iterate through the array and add (max - min) + 1 to all values below the quartile. This will move the bottom quarter of the values to the top.
Call Quart again to find the quartile of the new values (which will be the median of the original values).
Iterate through the array and subtract (max - min) + 1 from all values greater than the max to return the array to its original state.
You might need some additional rules to handle special cases e.g. if there are multiple values equal to the quartile.
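A minimal Python sketch of these steps, under some labeled assumptions (not stated in the original problem): n is divisible by 4, elements are distinct, Quart returns the element of rank n/4 (1-based), and values at or below the quartile are the ones shifted (one of the "additional rules" mentioned above). The nested quart() is only a sorting-based stand-in for the provided linear-time Quart(A, p, r):

```python
def median_via_quart(a):
    """Find the (lower) median using only a quartile-selection helper.

    Assumptions (mine, for the sketch): len(a) divisible by 4,
    distinct elements, quart() returns the rank-n/4 element.
    The array is modified in place but restored before returning.
    """
    def quart(arr):
        # stand-in for the linear-time Quart; rank n/4, via sorting
        return sorted(arr)[len(arr) // 4 - 1]

    lo, hi = min(a), max(a)
    shift = (hi - lo) + 1
    q1 = quart(a)
    # move the bottom quarter above the current maximum
    for i in range(len(a)):
        if a[i] <= q1:
            a[i] += shift
    med = quart(a)  # rank n/4 of shifted array == rank n/2 of original
    # undo the shift to restore the original multiset of values
    for i in range(len(a)):
        if a[i] > hi:
            a[i] -= shift
    return med
```

Note the restore pass, which satisfies restriction 2: after the call the array holds the same values it started with.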

Time Complexity in Sorted Array

Given a sorted array in descending order, what will be the time complexity of deleting the minimum element from this array?
My take: the minimum element will be at the last position, so is it O(n) to find it? Or should I apply binary search since the array is sorted, or is it simply O(1) to reach the end?
It really depends on what you mean by "delete from the array." You have an array, sorted in descending order, and you want to delete the minimum element.
Let's say that the array is [5, 4, 3, 2, 1]. You know how large the array is, so finding the minimum element is O(1). You just index a[a.length-1].
But what does it mean to delete the last element? You can easily replace the value there with a sentinel value to say that the position is no longer used, but then the next time you tried to get the minimal element, you'd have to visit a[a.length-1], and then do a reverse scan to find the first used element. That approaches O(n) as you remove more items.
Or do you keep a counter that tells you the index of the last used element in the array? So you'd have a variable, count, and when you wanted to get the minimal element you'd have:
smallest = a[count];
count = count-1;
That's still O(1), but it leaves unused items in the array.
But if you want to ensure that the array length always reflects the number of items in it, then you have to re-allocate the array to be smaller. That is O(n).
So the answer to your question is "It depends."
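A small sketch of the counter variant (the class and method names are illustrative; count here is the number of used slots rather than the index of the last one):

```python
class DescendingArray:
    """Descending-sorted array with O(1) delete-min via a logical size."""

    def __init__(self, values):
        self.a = sorted(values, reverse=True)
        self.count = len(self.a)  # number of slots in use

    def delete_min(self):
        if self.count == 0:
            raise IndexError("delete_min from empty array")
        smallest = self.a[self.count - 1]  # min sits in the last used slot
        self.count -= 1                    # O(1): shrink the logical size
        return smallest
```

The physical array keeps its stale tail entries, which is exactly the trade-off described above.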
Since the array is sorted in descending order, the minimum element is always guaranteed to be in the last location of the array, assuming there are no duplicates. Deletion can be done in O(1).
If you need to do a series of deletions on the array, then you may want to track the deleted indices and keep a pointer to the current end of the array. This can be done in constant time.
Hence, to sum it up, the total time complexity would be O(1).

Find the median of an unsorted array without sorting [duplicate]

This question already has answers here:
O(n) algorithm to find the median of n² implicit numbers
Is there a way to find the median of an unsorted array:
1- without sorting it.
2- without using the select algorithm or the median of medians?
I found a lot of other questions similar to mine, but the solutions, most if not all of them, discussed the select problem and the median of medians.
You can certainly find the median of an array without sorting it. What is not easy is doing that efficiently.
For example, you could just iterate over the elements of the array; for each element, count the number of elements less than and equal to it, until you find a value with the correct count. That will be O(n²) time but only O(1) space.
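That counting approach might look like this (a sketch assuming distinct elements and odd length, so the median is the value with exactly n//2 smaller elements):

```python
def median_by_counting(a):
    """O(n^2)-time, O(1)-extra-space median via counting.

    Assumes distinct elements and odd length (my simplification).
    """
    n = len(a)
    for x in a:
        # count elements strictly smaller than x
        less = sum(1 for y in a if y < x)
        if less == n // 2:
            return x
```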
Or you could use a min heap whose size is just over half the size of the array. (That is, if the array has 2k or 2k+1 elements, then the heap should have k+1 elements.) Build the heap using the first k+1 array elements, using the standard heap-building algorithm (which is O(N)). Then, for each remaining element x, if x is greater than the heap's minimum, replace the min element with x and do a SiftDown operation (which is O(log N)). At the end, the median is either the heap's minimum element (if the original array's size was odd) or is the average of the two smallest elements in the heap. So that's a total of O(n log n) time, and O(n) space if you cannot rearrange array elements. (If you can rearrange array elements, you can do this in-place.)
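A sketch of that heap approach using Python's heapq, where heapreplace does the replace-min-and-sift-down step:

```python
import heapq

def median_with_heap(a):
    """Median via a min-heap holding the largest half of the array."""
    n = len(a)
    k = n // 2
    heap = a[:k + 1]
    heapq.heapify(heap)                 # O(k) heap build
    for x in a[k + 1:]:
        if x > heap[0]:
            heapq.heapreplace(heap, x)  # replace min, sift down: O(log k)
    if n % 2 == 1:
        return heap[0]                  # heap min is the median
    two = heapq.nsmallest(2, heap)
    return (two[0] + two[1]) / 2        # average of the two smallest
```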
There is a randomized algorithm able to accomplish this task in O(n) steps (average case scenario), but it does involve sorting some subsets of the array. And, because of its random nature, there is no guarantee it will actually ever finish (though this unfortunate event should happen with vanishing probability).
I will leave the main idea here. For a more detailed description and for the proof of why this algorithm works, check here.
Let A be your array and let n = |A|. Let's assume all elements of A are distinct. The algorithm goes like this:
Randomly select t = n^(3/4) elements from A.
Let T be the "set" of the selected elements. Sort T.
Set pl = T[t/2-sqrt(n)] and pr = T[t/2+sqrt(n)].
Iterate through the elements of A and determine how many elements are less than pl (denoted by l) and how many are greater than pr (denoted by r). If l > n/2 or r > n/2, go back to step 1.
Let M be the set of elements in A in between pl and pr. M can be determined in step 4, just in case we reach step 5. If the size of M is no more than 4t, sort M. Otherwise, go back to step 1.
Return m = M[n/2-l] as the median element.
The main idea behind the algorithm is to obtain two elements (pl and pr) that enclose the median element (i.e. pl < m < pr) such that these two are very close to each other in the ordered version of the array (and to do this without actually sorting the array). With high probability, all six steps only need to execute once (i.e. you will get pl and pr with these "good" properties from the first and only pass through steps 1-5, so no going back to step 1). Once you find two such elements, you can simply sort the elements in between them and find the median element of A.
Steps 2 and 5 do involve some sorting (which might be against the "rules" you've mysteriously established :p). If sorting a sub-array is on the table, you should use some sorting method that does this in O(s log s) steps, where s is the size of the array you are sorting. Since T and M are significantly smaller than A, the sorting steps take "less than" O(n) steps. If it is also against the rules to sort a sub-array, then take into consideration that in both cases the sorting is not really needed. You only need to find a way to determine pl, pr and m, which is just another selection problem (with the respective indices). While sorting T and M does accomplish this, you could use any other selection method (perhaps something rici suggested earlier).
A non-destructive routine selip() is described at http://www.aip.de/groups/soe/local/numres/bookfpdf/f8-5.pdf. It makes multiple passes through the data, at each stage making a random choice of items within the current range of values and then counting the number of items to establish the ranks of the random selection.

Finding Median in Three Sorted Arrays in O(logn)

From googling for a few minutes, I know the basic idea.
1. Let A, B, and C be sorted arrays, each containing n elements.
2. Pick the median in each array and call them medA, medB, and medC.
3. Without loss of generality, suppose that medA > medB > medC.
4. The elements bigger than medA in array A cannot become the median of the three arrays. Likewise, the elements smaller than medC in array C cannot, so such elements will be ignored.
5. Repeat steps 2-4 recursively.
My question is, what is the base case?
Assuming a lot of base cases, I tested the algorithm by hand for hours, but I was not able to find a correct base case.
Also, the lengths of the three arrays will become different at every recursive step. Does step 4 work even if the lengths of the three arrays are different?
This algorithm works for two sorted arrays of the same size, but not for three. After one iteration, you eliminate half of the elements in A and C but leave B unchanged, so the numbers of elements in these arrays are no longer the same, and the method no longer applies. For arrays of different sizes, if you apply the same method, you will be removing different numbers of elements from the lower half and the upper half, so the median of the remaining elements is not the same as the median of the original arrays.
That being said, you can modify the algorithm to eliminate the same number of elements at both ends in each iteration, though this can be inefficient when some of the arrays are very small and some are very large. You can also turn this into the problem of finding the k-th element: track the number of elements being thrown away and change the value of k at each iteration. Either way, this is much trickier than the two-array situation.
There is another post talking about a general case: Median of 5 sorted arrays
I think you can use the selection algorithm, slightly modified to handle more arrays.
You're looking for the median, which is the p=[n/2]th element.
Pick the median of the largest array, and find the splitting point for that value in the other two arrays (binary search, O(log n)). Now you know that the selected number is the k-th (k = sum of the positions).
If k > p, discard the elements in the 3 arrays above it; if k < p, discard those below it (discarding can be implemented by maintaining lower and upper indexes for each array, separately). If it was smaller, also update p = p - k.
Repeat until k=p.
Oops, I think this is log(n)^2, let me think about it...
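If the arrays hold integers, a simpler way to realize the "find the k-th element" reformulation is to binary search on the value range and count elements with bisect. This is O(m log(range) log n) for m arrays rather than the bound aimed for above, but it is easy to get right (a sketch; names are my own):

```python
import bisect

def kth_of_sorted_arrays(arrays, k):
    """k-th smallest (1-based) across several sorted integer arrays.

    Binary search on the value: count elements <= candidate via bisect.
    """
    lo = min(a[0] for a in arrays if a)
    hi = max(a[-1] for a in arrays if a)
    while lo < hi:
        mid = (lo + hi) // 2
        # how many elements across all arrays are <= mid
        count = sum(bisect.bisect_right(a, mid) for a in arrays)
        if count >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo

def median_of_three(a, b, c):
    total = len(a) + len(b) + len(c)
    return kth_of_sorted_arrays([a, b, c], (total + 1) // 2)
```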

Quickest way to find 5 largest values in an array of structs

I have an array of structs called struct Test testArray[25].
The Test struct contains a member called int size.
What is the fastest way to get another array of Test structs that contains all structs from the original excluding the 5 largest, based on the member size, WITHOUT modifying the original array?
NOTE: The number of items in the array can be much larger; I was just using this size for testing, and the values could be dynamic. I just wanted a smaller set for testing.
I was thinking of making a copy of the original testArray, sorting that copy, and then returning an array of Test structs that did not contain the top 5 or bottom 5 (depending on ascending or descending order).
OR
Iterating through the testArray looking for the largest 5 and then making a copy of the original array excluding those 5. This way seems like it would iterate through the array too many times, comparing each element against the array of the 5 largest found so far.
Follow up question:
Here is what I am doing now; let me know what you think.
Considering that the number of largest elements I am interested in is going to remain the same, I iterate through the array, find the largest element, and swap it to the front of the array. Then I skip the first element, look for the largest after that, and swap it into the second index, and so on, until I have the first 5 largest. Then I stop sorting and just copy everything from the sixth index onward into a new array.
This way, no matter what, I only iterate through the array 5 times, and I do not have to sort the whole thing.
Partial sorting with a linear-time selection algorithm will do this in O(n) time, where full sorting would be O(n log n).
To quote the Partial Sorting page:
The linear-time selection algorithm described above can be used to find the k smallest or the k largest elements in worst-case linear time O(n). To find the k smallest elements, find the kth smallest element using the linear-time median-of-medians selection algorithm. After that, partition the array with the kth smallest element as pivot. The k smallest elements will be the first k elements.
You can find the k largest items in O(n), although making a copy of the array, or an array of pointers to each element (smarter), will cost you some time as well; you have to do that regardless.
If you'd like me to give a complete explanation of the algorithm involved, just comment.
Update:
Regarding your follow up question, which basically suggests iterating over the list five times... that will work. But it iterates over the list more times than you need to. Finding the k largest elements in one pass (using an O(n) selection algorithm) is much better than that. That way you iterate once to make your new array, and once more to do the selection (if you use median-of-medians, you will not need to iterate a third time to remove the five largest items as you can just split the working array into two parts based on where the 5th largest item is), rather than iterating once to make your new array and then an additional five times.
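A sketch of the selection idea in Python, with a randomized quickselect partition standing in for median-of-medians (so average-case linear rather than worst-case linear). It works on a copy, so the input is not modified:

```python
import random

def k_largest_by_selection(a, k):
    """Average-O(n) selection of the k largest via quickselect partitioning."""
    b = list(a)                      # work on a copy; input is untouched
    target = len(b) - k              # after selection, b[target:] hold the k largest
    lo, hi = 0, len(b) - 1
    while lo < hi:
        pivot = b[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                # Hoare-style partition around pivot
            while b[i] < pivot:
                i += 1
            while b[j] > pivot:
                j -= 1
            if i <= j:
                b[i], b[j] = b[j], b[i]
                i += 1
                j -= 1
        if target <= j:
            hi = j
        elif target >= i:
            lo = i
        else:
            break                    # target lies among pivot-equal elements
    return b[target:]
```

Everything before index target ends up no larger than everything from target onward, which is the "partition around the k-th element" step from the quote.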
As stated, sorting is O(n log n) and iterating is O(5n). In the general case, finding the m largest numbers is O(n log n) using the sort approach and O(mn) using the iteration approach. Which algorithm is better depends on the values of m and n. For m = 5, iterating wins only up to about 2^5 = 32 numbers. However, in terms of constant factors, sorting does more work per operation than iterating, so n will have to be quite a bit larger before sorting is actually faster.
You can do better theoretically by keeping a sorted array of the largest numbers seen so far and using binary search to maintain its order, which gives you O(n log m), but that again depends on the values of n and m.
Maybe an array isn't the best structure for what you want, especially since you need to sort it every time a new value is added. Maybe a linked list is better, with a sort on insert (which is O(N) in the worst case and O(1) in the best); then just discard the last five elements. Also, consider that just switching a pointer is considerably faster than reallocating the entire array just to get another element in there.
Why not an AVL tree? Lookup time is O(log2 N), but you have to consider the time spent rebalancing the tree, and whether the time spent coding it is worth it.
Using a min-heap data structure with its size capped at 5, you can traverse the array and, whenever an element is greater than the heap's minimum, replace the minimum with it.
getMin takes O(1) time and insertion takes O(log k) time, where k is the size of the heap (in our case 5). So in the worst case we have complexity O(n log k) to find the 5 largest elements. Another O(n) pass gives the list that excludes them.
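A sketch of this heap-of-5 approach in Python; the Test dataclass here just mirrors the struct from the question, and the function name is my own:

```python
import heapq
from dataclasses import dataclass

@dataclass
class Test:
    size: int

def all_but_k_largest(tests, k=5):
    """New list excluding the k structs with the largest .size.

    The input list is not modified. Min-heap of (size, index) pairs
    tracks the k largest seen so far.
    """
    heap = [(t.size, i) for i, t in enumerate(tests[:k])]
    heapq.heapify(heap)
    for i in range(k, len(tests)):
        if tests[i].size > heap[0][0]:
            heapq.heapreplace(heap, (tests[i].size, i))  # evict current min
    excluded = {i for _, i in heap}          # indices of the k largest
    return [t for i, t in enumerate(tests) if i not in excluded]
```

Storing indices alongside sizes keeps the exclusion unambiguous even when sizes repeat.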