I have a 0-indexed array, say A, of size 1.
I want to find the minimum in A between indexes k1 (k1 >= 0) and A.size()-1 (i.e. the last element).
Then I insert the value (minimum element in that range + some "random" constant) at the end of the array. Then I have another query: find the minimum between indexes k2 and A.size()-1, and again insert (that minimum + another "random" constant) at the end. I have to do many such queries.
Say I have N queries. The naive approach would take O(N^2).
A segment tree cannot be used directly because the array is not static. But a clever workaround is to build a segment tree for an array of size N+1 beforehand and fill the unknown positions with infinity. This gives O(N log N) complexity.
Is there any other method with O(N log N) complexity, or even O(N)?
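For reference, a minimal sketch of the segment-tree workaround I described (the initial value 42 and the query list are made up for illustration):

```cpp
// Min segment tree built over N+1 slots; slots not yet filled hold "infinity".
#include <bits/stdc++.h>
using namespace std;

struct MinSegTree {
    int n;
    vector<long long> t;
    MinSegTree(int n) : n(n), t(2 * n, LLONG_MAX) {}
    void set(int i, long long v) {                      // assign position i
        for (t[i += n] = v; i > 1; i >>= 1)
            t[i >> 1] = min(t[i], t[i ^ 1]);
    }
    long long query(int l, int r) {                     // minimum over [l, r]
        long long res = LLONG_MAX;
        for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = min(res, t[l++]);
            if (r & 1) res = min(res, t[--r]);
        }
        return res;
    }
};

int main() {
    vector<pair<int, long long>> queries = {{0, 5}, {1, -2}, {0, 7}};  // (k, constant)
    int N = queries.size();
    MinSegTree st(N + 1);        // room for the initial element plus one append per query
    st.set(0, 42);               // the single initial element
    int size = 1;
    for (auto [k, c] : queries) {
        long long m = st.query(k, size - 1);   // range-minimum query over [k, size-1]
        cout << m << '\n';
        st.set(size++, m + c);                 // append (minimum + constant)
    }
}
```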
There is absolutely no need to use an advanced data structure like a tree here. A simple local variable and a list will do it all:
Create an empty list (say minList).
Start from the last index of the initially given array and go down to the first index, pushing the minimum value seen so far (i.e. the minimum from that index to the end) onto the front of the list (i.e. do push_front).
Let's say the provided array is:
70 10 50 40 60 90 20 30
So the resultant minList will be:
10 10 20 20 20 20 20 30
After doing that, you only need to keep track of the minimum among the newly appended elements of the continuously growing array (say, minElemAppended).
Let's say you get k = 5 and randomConstant = -10. The answer to the query is the smaller of the precomputed minimum and the appended minimum, and the value appended to the array is that answer plus the constant:
queryAnswer = minimum(minList[k], minElemAppended)
minElemAppended = minimum(minElemAppended, queryAnswer + randomConstant)
By adopting this approach:
You don't need to traverse the appended part or even the initially given array.
You have the option of not appending the elements at all.
Time Complexity: O(N) to process N queries.
Space Complexity: O(N) to store the minList
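Here is a minimal sketch of this approach in C++, assuming each query is a (k, randomConstant) pair with k inside the original array (the query values are made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> a = {70, 10, 50, 40, 60, 90, 20, 30};
    int n = a.size();

    // minList[i] = minimum of a[i..n-1], built from the back in O(n).
    vector<long long> minList(n);
    minList[n - 1] = a[n - 1];
    for (int i = n - 2; i >= 0; --i)
        minList[i] = min(a[i], minList[i + 1]);

    long long minElemAppended = LLONG_MAX;  // minimum among appended values so far

    vector<pair<int, long long>> queries = {{5, -10}, {2, 3}};  // (k, randomConstant)
    for (auto [k, c] : queries) {
        long long answer = min(minList[k], minElemAppended);    // range minimum for this query
        cout << answer << '\n';
        minElemAppended = min(minElemAppended, answer + c);     // value that gets appended
    }
}
```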
Let there be an array of size n used to store some data. If there is an overflow, we make a new array, copy all the elements from the old array into the new one, and make the new array bigger by just 1 extra slot.
If we have an array of size n whose first √n spots are occupied, what is the time complexity of inserting n new elements?
Since we already have √n elements in the size-n array, there are (n - √n) free spots for the group of n new elements. We insert (n - √n) of them into the array and are left with √n elements to insert, at which point the array is full. So we have to make a new array √n times, each time with one extra spot.
Time complexity:
Inserting (n - √n) elements into the original array is just O(n).
Copying the array of size n into a new array of size n+1 is O(n).
Insert the extra element. - O(1)
We have to do steps 2 and 3 √n times.
Total: O(n) + √n·O(n) + √n·O(1) = Θ(n√n)
Is my answer correct?
Yes. It is correct.
One aspect that wasn't mentioned is that the n+1 in step 2 will only be exactly n+1 on the first execution; it will be n+2, n+3, ..., n+√n in the subsequent executions.
But still, if we create an upper bound for the complexity by assuming that this step is O(n+√n) for each execution, the outcome is the same (I exclude the O(n) of the first phase here):
O(√n·(n+√n)) = O(n√n + n) = O(n√n)
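For completeness, the same bound can be written as an exact sum: the j-th rebuild copies n + j - 1 elements and then inserts one, so the cost of the copy-and-insert phase is

∑_{j=1}^{√n} (n + j) = n√n + √n(√n + 1)/2 = Θ(n√n)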
I recently came across an interesting coding problem, which is as follows:
There are n boxes, let us assume this is an array of n boxes.
For each index i of this array, three values are given -
1.) Weight(i)
2.) Left(i)
3.) Right(i)
Left[i] means: if Weight[i] is chosen, we are not allowed to choose the Left[i] elements immediately to the left of the ith element.
Similarly, Right[i] means: if the ith element is chosen, we are not allowed to choose the Right[i] elements immediately to its right.
Example :
Weight[2] = 5
Left[2] = 1
Right[2] = 3
Then, if I pick the element at position 2, I get a weight of 5 units. But I cannot pick the element at position {1} (due to the left constraint), and I cannot pick the elements at positions {3,4,5} (due to the right constraint).
Objective - We need to calculate the maximum sum of the weights we can pick.
Sample Test Case :-
**Input:**
5
2 0 3
4 0 0
3 2 0
7 2 1
9 2 0
**Output:**
13
Note - First column is weights, Second column is left constraints, Third column is right constraints
I used a dynamic programming approach (similar to Longest Increasing Subsequence) to reach an O(n^2) solution, but I am not able to come up with an O(n log n) solution. (n can be up to 10^5.)
I also tried to use a priority queue, in which elements with a lower value of (right[i] + i) are given higher priority (ties broken by giving higher priority to the element with the lower value of i). But it is also giving a timeout error.
Is there any other approach for this, or any optimization of the priority queue method? I can post both of my codes if needed.
Thanks.
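For reference, here is a minimal sketch of the kind of O(n^2) DP I mean (a simplified sketch, not my exact code), where dp[i] is the best total weight of a valid selection whose rightmost chosen box is i:

```cpp
#include <bits/stdc++.h>
using namespace std;

long long maxWeightQuadratic(const vector<long long>& w,
                             const vector<int>& L, const vector<int>& R) {
    int n = w.size();
    vector<long long> dp(n);
    long long answer = 0;
    for (int i = 0; i < n; ++i) {
        dp[i] = w[i];                              // choose box i alone
        for (int j = 0; j < i; ++j) {
            bool leftOk  = j <= i - L[i] - 1;      // j is outside box i's blocked left window
            bool rightOk = j + R[j] < i;           // box j's right constraint has expired at i
            if (leftOk && rightOk) dp[i] = max(dp[i], dp[j] + w[i]);
        }
        answer = max(answer, dp[i]);
    }
    return answer;
}

int main() {
    // The sample test case from above; expected output is 13.
    vector<long long> w = {2, 4, 3, 7, 9};
    vector<int> L = {0, 0, 2, 2, 2}, R = {3, 0, 0, 1, 0};
    cout << maxWeightQuadratic(w, L, R) << '\n';
}
```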
One approach is to use a binary indexed tree to create a data structure that makes it easy to do two operations in O(log n) time each:
Insert number into an array
Find maximum in a given range
We will use this data structure to hold the maximum weight that can be achieved by selecting box i along with an optimal selection of boxes to the left.
The key is that we will only insert values into this data structure when we reach a point where the right constraint has been met.
To find the best value for box i, we need to find the maximum value in the data structure for all points up to location i-left[i]-1 (the closest position not blocked by box i's left constraint), which can be done in O(log n).
The final algorithm is to loop over i=0..n-1 and for each i:
Compute the result for box i by finding the maximum in the range 0..(i-left[i]-1)
Schedule the result to be added when we reach location i+right[i]
Add any previously scheduled results into our data structure
The final result is the maximum value in the whole data structure.
Overall, the complexity is O(n log n) because each value of i results in one lookup and one update operation.
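Here is a minimal sketch of this approach. The structure and names (MaxBIT, pending) are my own shorthand, and the off-by-one handling follows the example in the question:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Binary indexed tree over prefix maxima (each position is written at most once).
struct MaxBIT {
    int n;
    vector<long long> t;
    MaxBIT(int n) : n(n), t(n + 1, LLONG_MIN) {}
    void update(int i, long long v) {          // record value v at 0-based position i
        for (++i; i <= n; i += i & -i) t[i] = max(t[i], v);
    }
    long long query(int i) {                   // maximum over positions [0, i]
        long long r = LLONG_MIN;
        for (++i; i > 0; i -= i & -i) r = max(r, t[i]);
        return r;
    }
};

long long maxWeight(const vector<long long>& w, const vector<int>& L, const vector<int>& R) {
    int n = w.size();
    MaxBIT bit(n);
    // pending[j]: results that become usable when we start processing box j.
    vector<vector<pair<int, long long>>> pending(n + 1);
    long long answer = 0;
    for (int i = 0; i < n; ++i) {
        for (auto [pos, val] : pending[i])     // add previously scheduled results
            bit.update(pos, val);
        int lim = i - L[i] - 1;                // closest position not blocked on the left
        long long prev = (lim >= 0) ? bit.query(lim) : LLONG_MIN;
        long long best = w[i] + (prev > 0 ? prev : 0);
        answer = max(answer, best);
        // This result may only be used once box i's right constraint has expired.
        long long ready = min<long long>(n, (long long)i + R[i] + 1);
        pending[ready].push_back({i, best});
    }
    return answer;
}

int main() {
    // Sample test case from the question; expected output is 13.
    vector<long long> w = {2, 4, 3, 7, 9};
    vector<int> L = {0, 0, 2, 2, 2}, R = {3, 0, 0, 1, 0};
    cout << maxWeight(w, L, R) << '\n';
}
```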
Let S be a set of n integers stored in an array (not necessarily sorted). Design an algorithm to find the 10 largest integers in S (by creating a separate array of length 10 storing those integers). Your algorithm must finish in O(n) time.
I thought I could maybe answer this by using counting sort and then adding the last 10 elements into the new array, but apparently this is wrong. Does anyone know a better way?
Method 1:
You can use a FindMax() routine that finds the maximum number in O(N); if you use it 10 times:
10 * O(N) = O(N)
Each time you find the max number you put it in the new array and ignore it the next time you use FindMax().
Method 2:
You can use Bubble Sort passes 10 times:
1) Modify Bubble Sort to run the outer loop at most 10 times.
2) Save the last 10 elements of the array obtained in step 1 to the new array.
10 * O(N) = O(N)
Method 3:
You can use MAX Heap:
1) Build a Max Heap in O(n)
2) Use Extract-Max 10 times to get the 10 maximum elements from the Max Heap: 10 * O(log N)
O(N) + 10 * O(log N) = O(N)
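For example, a minimal sketch of Method 3 using the standard library heap routines (the function name tenLargest is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> tenLargest(vector<int> a) {
    int k = (int)min<size_t>(10, a.size());
    make_heap(a.begin(), a.end());              // 1) build a max heap in O(n)
    vector<int> result;
    for (int i = 0; i < k; ++i) {
        pop_heap(a.begin(), a.end() - i);       // 2) extract-max, O(log n) each
        result.push_back(a[a.size() - 1 - i]);  // the extracted maximum
    }
    return result;                              // overall O(n + 10 log n) = O(n)
}
```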
Visit:
http://www.geeksforgeeks.org/k-largestor-smallest-elements-in-an-array/
They have mentioned six methods for this.
Use an order-statistic (selection) algorithm to find the 10th biggest element.
Next, iterate over the array to find all elements that are greater than or equal to it.
Time complexity: O(n) for the order statistic + O(n) for iterating over the array once => O(n)
Insert them in a balanced binary tree. O(N) + O(lg2 N).
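A minimal sketch of the order-statistic idea, using std::nth_element (average linear time), which selects the 10th biggest element and partitions the larger ones to the front in a single pass (the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> tenLargestBySelection(vector<int> a) {
    size_t k = min<size_t>(10, a.size());
    // Partition so that the 10 largest elements occupy a[0..k-1] (in arbitrary order).
    nth_element(a.begin(), a.begin() + k, a.end(), greater<int>());
    return vector<int>(a.begin(), a.begin() + k);   // O(n) overall
}
```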
Given a large unsorted array, I need to find out the number of occurrences of a given number in a particular range. (There can be many queries)
e.g. if arr[] = {6,7,8,3,4,1,2,4,6,7,8,9} and left_range = 3, right_range = 7 and number = 4, then the output will be 2 (considering a 0-indexed array).
arr[i] can be in the range of 1 to 100000. The array can have up to 100000 numbers.
Can you guide me about which data structure or algorithm I should use here?
PS: Pre-processing the array is allowed.
Here's a solution that doesn't require a segment tree.
Preprocessing:
For each number arr[i], push i into the 2D vector (or ArrayList) at index arr[i].
Answering Queries:
For any query, do a binary search on vector[num] to find the position of the largest stored index that is less than or equal to the right range; call it R. Then find the position of the smallest stored index that is greater than or equal to the left range; call it L. Print R - L + 1.
Runtime:
Preprocessing in O(1) per item, taking total O(N) time.
Per Query answer: O(lg(N))
Space: O(N) overall, since each array index is stored exactly once across the vectors (or ArrayLists).
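A minimal sketch of this, assuming 0-indexed queries and values in 1..100000 as stated in the question (the helper name countInRange is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> arr = {6, 7, 8, 3, 4, 1, 2, 4, 6, 7, 8, 9};
    const int MAXV = 100000;

    // Preprocessing: pos[v] holds the sorted indices where value v occurs.
    vector<vector<int>> pos(MAXV + 1);
    for (int i = 0; i < (int)arr.size(); ++i)
        pos[arr[i]].push_back(i);

    // Query: count occurrences of num in [l, r] with two binary searches.
    auto countInRange = [&](int num, int l, int r) {
        const vector<int>& p = pos[num];
        return upper_bound(p.begin(), p.end(), r) - lower_bound(p.begin(), p.end(), l);
    };

    cout << countInRange(4, 3, 7) << '\n';   // prints 2, matching the example
}
```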
Given an array in which each element is one more or one less than its preceding element, find a given element in it (looking for something better than the O(n) approach).
I have a solution for this but I have no way to tell formally whether it is correct:
Let us assume we have to find n.
From the current index, find the distance to n: initially d = |a[0] - n|.
The desired element is at least d positions away, so jump d elements ahead.
Repeat the above until d = 0.
Yes, your approach will work.
If values can only increase or decrease by one from one index to the next, a value at an index less than d positions away cannot differ from the current value by d, so there is no way you can skip over the target value. And, unless the value is found, the distance will always be greater than 0, so you'll keep moving right. Thus, if the value exists, you'll find it.
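For illustration, a minimal sketch of that jump approach (the array, target, and function name are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns an index of `target` in `a`, or -1 if it is not present.
int findInStepArray(const vector<int>& a, int target) {
    size_t i = 0;
    while (i < a.size()) {
        int d = abs(a[i] - target);   // values change by at most 1 per index,
        if (d == 0) return (int)i;    // so the target is at least d positions away
        i += d;                       // safe jump of d positions
    }
    return -1;
}

int main() {
    vector<int> a = {4, 5, 6, 5, 6, 7, 8, 7, 6};   // adjacent values differ by exactly 1
    cout << findInStepArray(a, 7) << '\n';          // prints 5
}
```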
No, you can't do better than O(n) in the worst case.
Consider an array 1,2,1,2,1,2,1,2,1,2,1,2 and you're looking for 0. Any of the 2's can be changed to a 0 without having to change any of the other values, thus we have to look at all the 2's and there are n/2 = O(n) 2's.
Preprocessing can help here.
Find the minimum and maximum elements of the array in O(n) time.
Since consecutive elements differ by exactly one, every value between the minimum and the maximum must occur somewhere in the array (a discrete intermediate-value argument). So if the queried element lies between the minimum and the maximum of the array, it is present; otherwise it is not. Any query then takes O(1) time, and if the array is queried many times, the amortized time per query is much less than O(n).
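A minimal sketch of this idea (names are mine); note it only reports whether the value is present, not where:

```cpp
#include <bits/stdc++.h>
using namespace std;

struct PresenceChecker {
    int lo, hi;
    explicit PresenceChecker(const vector<int>& a) {        // assumes a non-empty array
        auto [mn, mx] = minmax_element(a.begin(), a.end()); // O(n) preprocessing
        lo = *mn;
        hi = *mx;
    }
    bool contains(int x) const { return lo <= x && x <= hi; } // O(1) per query
};

int main() {
    PresenceChecker pc({4, 5, 6, 5, 6, 7, 8, 7, 6});  // neighbours differ by exactly 1
    cout << pc.contains(7) << ' ' << pc.contains(2) << '\n';  // prints "1 0"
}
```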