You have an array of integers, and you have to find the number of subarrays whose mean (the sum of the elements divided by their count) rounds to zero.
I have solved this in O(n^2) time, but that is not efficient enough. Is there a faster way?
example:
[-1, 1, 5, -4]
subarrays whose mean rounds to zero are:
[-1, 1] = 0, and [-1, 1, 5, -4] = 1/4, which rounds to zero
Define a new array composed of pairs (prefix sum, cnt), where the first element is the prefix sum and the second element is the number of elements. For example, for
int[] arr = [-1, 1, 5, -4]:
narr = [(0, 0), (-1, 1), (0, 2), (5, 3), (1, 4)]
The question is then converted to counting the pairs (i, j) in narr with i < j and Math.abs(narr[j][0] - narr[i][0]) < narr[j][1] - narr[i][1] = j - i, which further boils down to:
narr[j][0] - j < narr[i][0] - i and narr[i][0] + i < narr[j][0] + j
i.e. the interval [narr[i][0] - i, narr[i][0] + i] lies strictly inside (narr[j][0] - j, narr[j][0] + j).
So the question is further converted to the following one:
Maintain a set of intervals, e.g. [[1, 2], [-1, 0], ...] (initially empty). Given a query interval [x, y], count how many stored intervals lie entirely within [x, y], then add [x, y] to the set, and repeat this procedure N times in total. (How to manage the data structure of intervals becomes the key problem.)
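For concreteness, here is a minimal brute-force sketch of the reduced counting problem over the prefix-sum pairs, written in Java; this is just the O(N^2) baseline that the options below try to beat, and the class/method names are illustrative:

class ZeroMeanSubarrays {
    // Count pairs (i, j), i < j, with |prefix[j] - prefix[i]| < j - i,
    // i.e. subarrays whose mean lies strictly between -1 and 1.
    static long count(int[] arr) {
        int n = arr.length;
        long[] prefix = new long[n + 1];
        for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + arr[i];
        long pairs = 0;
        for (int j = 1; j <= n; j++)
            for (int i = 0; i < j; i++)
                if (Math.abs(prefix[j] - prefix[i]) < j - i) pairs++;
        return pairs;
    }
}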
If we just brute-force iterate over every stored interval and do the validation (essentially the sketch above), the query time complexity is O(N) and the insertion time complexity is O(1), for O(N^2) in total.
If we use square-root decomposition, the query time complexity is O(sqrt(N)) and the insertion time complexity is O(1), for O(N*sqrt(N)) in total.
If we use a treap (using the first or second coordinate as the priority and the other as the key), the average total time complexity we can achieve is O(N lg N).
If you don't know the techniques of square-root decomposition or treaps, I suggest reading a couple of articles first.
Update:
After thinking it over carefully for 30 minutes, I find that a treap cannot achieve O(N lg N) average time complexity.
Instead, we can use a 2D segment tree to achieve O(N lg N lg N):
Please read this article instead:
2d segment tree
Given an array A of size N, for each element A[i] I want to find all j < i such that A[j] > A[i]. Currently, I cannot think of a method better than O(i) (traverse all 0 <= j < i and check). Is there an algorithm with better time complexity? The space complexity can be O(N).
Update 1
We can assume the array has distinct elements
Update 2
Let us consider the array A = [4, 6, 7, 1, 2, 3, 5]. Let dom(i) = {j | 0 <= j < i and A[j] > A[i]}.
dom(0) = empty
dom(1) = empty
dom(2) = empty
dom(3) = {0, 1, 2}
dom(4) = {0, 1, 2}
dom(5) = {0, 1, 2}
dom(6) = {1, 2}
Also, the O(N) space bound is meant per iteration i.
A lower time complexity cannot be achieved: if, for example, the array is entirely descending, the lists have quadratic total length, so if the task is to output the lists themselves, O(N^2) is as fast as it can get. Your solution already achieves O(N^2), so it is optimal.
There are faster ways to calculate some things related to this, though. For example, if you are actually looking for just the total count of all such pairs, it can be done in O(n log n) time, see this post.
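As a hedged illustration of that O(n log n) count (the total over all i is exactly the number of inversions), here is a Fenwick-tree sketch in Java; the class and method names are illustrative, and it relies on the distinct-elements assumption from Update 1:

import java.util.Arrays;

class InversionCount {
    // Count all pairs (j, i) with j < i and A[j] > A[i] in O(n log n).
    static long countInversions(int[] a) {
        int n = a.length;
        // Coordinate-compress values to ranks 1..n (distinct elements assumed).
        int[] sorted = a.clone();
        Arrays.sort(sorted);
        int[] rank = new int[n];
        for (int i = 0; i < n; i++)
            rank[i] = Arrays.binarySearch(sorted, a[i]) + 1;
        long[] bit = new long[n + 1];   // Fenwick tree over ranks
        long inversions = 0;
        for (int i = 0; i < n; i++) {
            // Among the i elements already inserted, count those with rank <= rank[i].
            long notGreater = 0;
            for (int k = rank[i]; k > 0; k -= k & -k) notGreater += bit[k];
            inversions += i - notGreater;            // the rest are strictly greater
            // Insert the current element into the Fenwick tree.
            for (int k = rank[i]; k <= n; k += k & -k) bit[k]++;
        }
        return inversions;
    }
}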
So I have an array a_0 of size, let's say, 10^5, and now I have to make some changes to this array. The i-th change can be computed with a function f(a_{i-1}) that gives a_i in O(1) time, where a_j denotes the array a after the j-th change has been made to it. In other words, a_i can be computed in constant time if we know a_{i-1}. I know beforehand that I have to make 10^5 changes.
Now the problem asks me to answer a large number of queries of the form a_i[p] - a_j[q], where a_x[y] represents the y-th element of the array after the x-th change has been made to a_0.
If I had space on the order of 10^10, I could easily solve this in O(1) per query by storing all 10^5 arrays beforehand, but I don't (generally) have that kind of space. I could also answer these queries by generating a_i and a_j from scratch each time, but I can't afford that kind of time complexity either, so I was wondering if I could handle this problem with some data structure.
EDIT: Example:
We define an array B = {1, 3, 1, 4, 2, 6}, and we define a_j as the array storing the frequency of each number after the j-th element has been added to B. That is, a_0 = {0,0,0,0,0,0}, then a_1 = {1,0,0,0,0,0}, a_2 = {1,0,1,0,0,0}, a_3 = {2,0,1,0,0,0}, a_4 = {2,0,1,1,0,0}, a_5 = {2,1,1,1,0,0} and a_6 = {2,1,1,1,0,1}.
f just adds the next element to B and updates a_{j-1} to give a_j.
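A minimal Java sketch of what f does in this example (assuming the values of B are 1..6, so value v maps to slot v-1; the names are illustrative):

class FrequencyStep {
    // Given a_{j-1} and the j-th element of B, produce a_j by bumping one counter.
    static int[] next(int[] prev, int newElementOfB) {
        int[] result = prev.clone();
        result[newElementOfB - 1]++;   // e.g. adding 3 increments slot 2
        return result;
    }
}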
Assume the number of changed elements per iteration is much smaller than the total number of elements. Store an array of lists, where the list elements are (i, new_value). For example if the full view is like this:
a0 = [3, 5, 1, 9]
a1 = [3, 5, 1, 8]
a2 = [1, 5, 1, 0]
We will store this:
c0 = [(0, 3), (2, 1)]
c1 = [(0, 5)]
c2 = [(0, 1)]
c3 = [(0, 9), (1, 8), (2, 0)]
Then for the query a_2[0] - a_1[3], we only need to consult c0 and c3 (the two columns involved in the query). We can use binary search to locate the necessary versions 2 and 1 (the keys for the binary search being the first elements of the tuples).
The query time is then O(log N) for the two binary searches, where N is the maximum number of changes to a single position in the array. The space is O(L + M), where L is the length of the original array and M is the total number of changes made.
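A hedged Java sketch of this scheme (class and method names are illustrative, not from the answer): each column keeps its (version, newValue) pairs sorted by version, and a query a_x[y] binary-searches column y for the latest change with version <= x.

import java.util.ArrayList;
import java.util.List;

class VersionedArray {
    static class Change {
        final int version, value;
        Change(int version, int value) { this.version = version; this.value = value; }
    }

    private final List<List<Change>> columns = new ArrayList<>();

    VersionedArray(int[] initial) {
        for (int y = 0; y < initial.length; y++) {
            List<Change> col = new ArrayList<>();
            col.add(new Change(0, initial[y]));   // version 0 is the initial array a_0
            columns.add(col);
        }
    }

    // Record that column y changed to 'value' at the given version (versions increase).
    void recordChange(int version, int y, int value) {
        columns.get(y).add(new Change(version, value));
    }

    // Value of a_x[y]: binary search column y for the last change with version <= x.
    int get(int x, int y) {
        List<Change> col = columns.get(y);
        int lo = 0, hi = col.size() - 1, ans = 0;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (col.get(mid).version <= x) { ans = mid; lo = mid + 1; } else { hi = mid - 1; }
        }
        return col.get(ans).value;
    }
}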
If there is some maximum number of states N, then checkpoints are a good way to go. For instance, if N = 100,000, you might have:
c0 = [3, 5, 7, 1, ...]
c100 = [1, 4, 9, 8, ...]
c200 = [9, 7, 1, 2, ...]
...
c99900 = [1, 1, 4, 6, ...]
Now you have 1000 checkpoints. You can find the nearest checkpoint at or before an arbitrary state x in O(1) time and reconstruct x in at most 99 operations.
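A hedged sketch of the checkpoint idea in Java (names and the shape of the change function are assumptions, not from the question): store every step-th state and replay the change function at most step-1 times to reach an arbitrary state x.

import java.util.function.BiFunction;

class Checkpoints {
    // checkpoints[c] holds state number c * step; next(prev, i) computes state i from state i-1.
    static int[] reconstruct(int x, int step, int[][] checkpoints,
                             BiFunction<int[], Integer, int[]> next) {
        int base = x / step;                        // nearest checkpoint at or before x
        int[] state = checkpoints[base].clone();
        for (int i = base * step + 1; i <= x; i++) {
            state = next.apply(state, i);           // replay at most step-1 changes
        }
        return state;
    }
}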
Riffing off of my comment on your question and John Zwinck's answer, if your mutating function f(*) is expensive and its effects are limited to only a few elements, then you could store the incremental changes. Doing so won't decrease the time complexity of the algorithm, but may reduce the run-time.
If you had unlimited space, you would just store all of the checkpoints. Since you do not, you'll have to balance the number of checkpoints against the incrementals appropriately. That will require some experimentation, probably centered around determining how expensive f(*) is and the extent of its effects.
Another option is to look at query behavior. If users tend to query the same or nearby locations repeatedly, you may be able to leverage an LRU (least-recently used) cache.
I have an unsorted array of size n and I need to find the k-1 dividers so that every subset has the same size (as it would after the array is sorted).
I have seen this question with k-1 = 3. I guess I need median of medians, and that takes O(n). But I think we would have to do it k times, so O(nk).
I would like to understand why it would take O(n log k).
For example: I have an unsorted array of integers and I want to find the k-1 dividers, i.e. the k-1 integers that split the array into k (same-sized) subarrays according to their values.
If I have [1, 13, 6, 7, 81, 9, 10, 11] and k = 3, the dividers are [7, 11], splitting it into [1, 6], [9, 10], [13, 81], where every subset has size 2.
You can use a divide-and-conquer approach. First, find the (k-1)/2th divider using the median-of-medians algorithm. Next, use the selected element to partition the list into two sub-lists. Repeat the algorithm on each sub-list to find the remaining dividers.
The maximum recursion depth is O(log k) and the total cost across all sub-lists at each level is O(n), so this is an O(n log k) algorithm.
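A hedged Java sketch of this divide-and-conquer (all names are illustrative). It assumes, as in the example, that k divides n+1, so the t-th divider is the element of 0-indexed sorted rank t*(n+1)/k - 1. For brevity the selection step is randomized quickselect; the O(n log k) bound in the answer assumes a worst-case linear selection such as median of medians.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class KDividers {
    static final Random RNG = new Random();

    static List<Integer> dividers(int[] a, int k) {
        int n = a.length;
        List<Integer> out = new ArrayList<>();
        solve(a, 0, n - 1, 1, k - 1, (n + 1) / k, out);
        out.sort(null);
        return out;
    }

    // Find dividers tLo..tHi inside a[lo..hi]; block = (n+1)/k.
    static void solve(int[] a, int lo, int hi, int tLo, int tHi, int block, List<Integer> out) {
        if (tLo > tHi || lo > hi) return;
        int tMid = (tLo + tHi) / 2;
        int rank = tMid * block - 1;              // global sorted rank of this divider
        int pos = quickselect(a, lo, hi, rank);   // puts that element at index 'rank'
        out.add(a[pos]);
        solve(a, lo, pos - 1, tLo, tMid - 1, block, out);   // dividers to the left
        solve(a, pos + 1, hi, tMid + 1, tHi, block, out);   // dividers to the right
    }

    // Randomized quickselect: afterwards, a[rank] holds the element of that sorted rank.
    static int quickselect(int[] a, int lo, int hi, int rank) {
        while (lo < hi) {
            int p = partition(a, lo, hi, lo + RNG.nextInt(hi - lo + 1));
            if (rank == p) return p;
            if (rank < p) hi = p - 1; else lo = p + 1;
        }
        return lo;
    }

    // Lomuto partition around a[pivotIndex]; returns the pivot's final index.
    static int partition(int[] a, int lo, int hi, int pivotIndex) {
        int pivot = a[pivotIndex];
        swap(a, pivotIndex, hi);
        int store = lo;
        for (int i = lo; i < hi; i++)
            if (a[i] < pivot) swap(a, i, store++);
        swap(a, store, hi);
        return store;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

For the example, dividers([1, 13, 6, 7, 81, 9, 10, 11], 3) first selects rank 2 (the element 7), then recurses on the right part to select rank 5 (the element 11), giving [7, 11].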
Say I have an array of N integers all set to the value 0, and I want to pick a random element of that array that has the value 0 and set it to 1.
How do I do this efficiently?
I came up with 2 solutions, but they look quite inefficient.
First solution
int array[N]   // init to 0s
int n          // number of 1s we want to add to the array
int i = 0
while i < n
    int a = random(0, N)
    if array[a] == 0
        array[a] = 1
        i++
    end if
end while
This would be extremely inefficient for large arrays, because as the array fills up most random picks collide with positions that are already 1.
The second involves keeping a list of all the remaining 0 positions: we choose a random number between 0 and the number of remaining zeros and use it as an index into that list to find the corresponding position in the array.
It's a lot more reliable than the first solution, since the number of operations is bounded, but it still has a worst-case complexity of N² if we want to fill the array completely.
Your second solution is actually a good start. I assume that it involves rebuilding the list of positions after every change, which makes it O(N²) if you want to fill the whole array. However, you don't need to rebuild the list every time. Since you want to fill the array anyway, you can just use a random order and choose the remaining positions accordingly.
As an example, take the following array (size 7 and not initially full of zeroes): [0, 0, 1, 0, 1, 1, 0]
Once you have built the list of zero positions, here [0, 1, 3, 6], just shuffle it to get a random ordering. Then fill in the array in the order given by the positions.
For example, if the shuffle gives [3, 1, 6, 0], then you can fill the array like so:
[0, 0, 1, 0, 1, 1, 0] <- initial configuration
[0, 0, 1, 1, 1, 1, 0] <- First, position 3
[0, 1, 1, 1, 1, 1, 0] <- Second, position 1
[0, 1, 1, 1, 1, 1, 1] <- etc.
[1, 1, 1, 1, 1, 1, 1]
If the array is initially filled with zeros, then it's even easier: your initial list is the list of integers from 0 to N-1 (the indices of the array). Shuffle it and apply the same process.
If you do not want to fill the whole array, you still need to build the whole list, but you can truncate it after shuffling it (which just means to stop filling the array after some point).
Of course, this solution requires that the array does not change between each step.
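A minimal Java sketch of this approach, assuming the array does not change between steps (the names are illustrative):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class RandomZeroFiller {
    // Collect the indices holding 0, shuffle them once, then flip the first n of them to 1.
    static void fillRandomZeros(int[] array, int n) {
        List<Integer> zeroPositions = new ArrayList<>();
        for (int i = 0; i < array.length; i++)
            if (array[i] == 0) zeroPositions.add(i);
        Collections.shuffle(zeroPositions);             // random ordering of the zero positions
        for (int i = 0; i < n && i < zeroPositions.size(); i++)
            array[zeroPositions.get(i)] = 1;            // truncate: stop after n ones
    }
}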
You can fill the array with n ones and N-n zeros and then shuffle it randomly.
The Fisher-Yates shuffle has linear complexity:
for i from N−1 downto 1 do
    j ← random integer such that 0 ≤ j ≤ i
    exchange a[j] and a[i]
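The same shuffle written out in Java (a small sketch; the method name is illustrative):

import java.util.Random;

class Shuffle {
    // In-place Fisher-Yates shuffle: O(N) time, O(1) extra space.
    static void fisherYates(int[] a, Random rng) {
        for (int i = a.length - 1; i >= 1; i--) {
            int j = rng.nextInt(i + 1);   // uniform in 0..i
            int tmp = a[j];
            a[j] = a[i];
            a[i] = tmp;
        }
    }
}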
Does anyone know an algorithm that k-approximately sorts an array?
We were asked to find an algorithm for k-approximate sorting that runs in O(n log(n/k)), but I can't seem to find one.
K-approximate sorting means that for every 1 <= i <= n-k the array satisfies sum_{j=i..i+k-1} a[j] <= sum_{j=i+1..i+k} a[j], i.e. each window of k consecutive elements has a sum no larger than the window shifted one position to the right (equivalently, a[i] <= a[i+k], since the two windows share every term except a[i] and a[i+k]).
I know I'm very late to the question... But under the assumption that k is some approximation value between 0 and 1 (where 0 is completely unsorted and 1 is perfectly sorted), surely the answer to this is quicksort (or mergesort).
Consider the following array:
[4, 6, 9, 1, 10, 8, 2, 7, 5, 3]
Let's say this array is 'unsorted' - now apply one iteration of quicksort to this array with the (length[array]/2)th element as the pivot: length[array]/2 = 5, so the element at index 5 (i.e. 8) is our pivot. Partitioning around it (using, say, Lomuto partitioning) gives:
[4, 6, 1, 3, 2, 7, 5, 8, 10, 9]
Now this array is not sorted - but it is more sorted than it was one iteration ago, i.e. it is approximately sorted, but for a low approximation, i.e. a low value of k. Repeat this step again on the two halves of the array and it becomes more sorted. As k increases towards 1 - i.e. perfectly sorted - the complexity becomes O(N log(N/1)) = O(N log(N)).
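Under the question's original reading of k (a window size rather than a fraction between 0 and 1), the same early-stopping idea can be made concrete: run quicksort but stop recursing once a segment has at most k elements. A hedged Java sketch (all names illustrative; with random pivots the bound holds in expectation, and a median-of-medians pivot would make it worst-case):

import java.util.Random;

class ApproxSort {
    static final Random RNG = new Random();

    // Quicksort that stops once a segment has at most k elements.
    static void kApproxSort(int[] a, int k) {
        sort(a, 0, a.length - 1, Math.max(k, 1));
    }

    static void sort(int[] a, int lo, int hi, int k) {
        if (hi - lo + 1 <= k) return;   // small segment: leave it unsorted
        int p = partition(a, lo, hi, lo + RNG.nextInt(hi - lo + 1));
        sort(a, lo, p - 1, k);
        sort(a, p + 1, hi, k);
    }

    // Lomuto partition around a[pivotIndex]; returns the pivot's final position.
    static int partition(int[] a, int lo, int hi, int pivotIndex) {
        int pivot = a[pivotIndex];
        swap(a, pivotIndex, hi);
        int store = lo;
        for (int i = lo; i < hi; i++)
            if (a[i] < pivot) swap(a, i, store++);
        swap(a, store, hi);
        return store;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

Each recursion level does O(n) partitioning work and segment sizes shrink from n to k over roughly log(n/k) levels, giving O(n log(n/k)). The leftover blocks of at most k elements stay unsorted internally, but every element of a block is no larger than any element of a later block, which yields a[i] <= a[i+k].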