When we implement a dynamic array via repeated doubling, we simply create a new array that is double the current size, copy over the previous elements, and then add the new one, correct?
So to compute the complexity we have 1 + 2 + 4 + 8 + ... steps, correct?
But
1 + 2^1 + 2^2 + ... + 2^n = 2^(n+1) - 1 ~ O(2^n).
However it is given that
1 + 2 + 4 + ... + n/4 + n/2 + n ~ O(n).
Which one is correct? And why? Thanks
You're on the right track with your sum, but you have too many terms in it. :-)
The array will double in size whenever it reaches a size that's a power of two. Therefore, if the largest power of two encountered is 2^k, the work done is
2^0 + 2^1 + 2^2 + ... + 2^k
This is the sum of a geometric series, which works out to
2^0 + 2^1 + 2^2 + ... + 2^k = 2^(k+1) - 1 = 2 · 2^k - 1
In your analysis, you wrote out this summation as having n terms in it, ranging up to 2^n. That would be the right summation if your array had 2^n elements in it, but that's exponentially too many. Rather, since your array has n total elements in it, the maximum term in this sum is 2^(lg n). Plugging that in gives
2 · 2^(lg n) - 1 = 2n - 1 = Θ(n)
Therefore, the total work done comes out to Θ(n), not Θ(2^n).
Hope this helps!
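As a sanity check, here's a small simulation (not from the thread) that counts the element copies made while appending n items to a doubling array; the total stays below 2n:

```python
def doubling_copies(n):
    """Count element copies made while appending n items to a dynamic
    array that doubles its capacity whenever it is full."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:   # full: allocate double, copy everything over
            copies += size
            capacity *= 2
        size += 1
    return copies

# 1 + 2 + 4 + ... + 2^k stays below 2n, so appends are O(1) amortized
for n in (10, 1000, 10**6):
    assert doubling_copies(n) < 2 * n
```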
Hello Everyone,
I am Abhiroop Singh, new to the world of competitive programming. Recently I came across a question that I didn't have the faintest idea how to approach. I have been working on it for the past two days, so please point me in the right direction.
So, the question was:
You are given an array A of N integers. You need to find two integers x and y such that the sum of the absolute difference between each element of the array to one of the two chosen integers is minimal
Task
Determine the minimum value of the expression function if the chosen numbers are x and y
Example
Assumptions
N=4
A = [2,3,6,7]
Approach
•You can choose the two integers, 3 and 7
•The required sum |2-3| + |3-3| + |6-7| + |7-7| = 1 + 0 + 1 + 0 = 2
Constraints
1<= T <= 100
2<= N <=5*10^3
1<= A[i] <=10^6
The sum of N over all test cases does not exceed 5*10^3
Sample input
2
3
1 3 5
4
3 2 5 11
Output
2
3
Explanation
The first line contains the number of test cases, T = 2.
The first test case
Given
• N = 3
• A = [1,3,5]
Approach
• You can choose the two integers 1 and 4.
• The required sum = |1-1| + |3-4| + |5-4| = 0 + 1 + 1 = 2.
The second test case
Given
• N = 4
• A = [3, 2, 5, 11]
Approach
• You can choose the two integers, 3 and 11.
• The required sum = |2-3| + |3-3| + |5-3| + |11-11| = 1 + 0 + 2 + 0 = 3.
My approach:
• First I tried finding the median of the array.
• Secondly I tried applying binary search to find the two numbers.
Neither worked, so please help me.
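Not an official solution, just a sketch of one workable direction: after sorting, each element is served by the closer of x and y, so some sorted prefix goes to x and the remaining suffix goes to y, and the best center for a sorted group is its median. Trying every split point gives an O(N^2) sketch, which fits N <= 5*10^3 (prefix sums would bring it to O(N log N)):

```python
def min_two_center_cost(a):
    """Try every split of the sorted array into a prefix (served by x)
    and a suffix (served by y); the best x or y for a sorted group is
    its median.  O(N^2) overall."""
    a = sorted(a)

    def cost(seg):
        m = seg[len(seg) // 2]          # a median minimizes sum |v - m|
        return sum(abs(v - m) for v in seg)

    return min(cost(a[:k]) + cost(a[k:]) for k in range(1, len(a)))

print(min_two_center_cost([2, 3, 6, 7]))   # 2
print(min_two_center_cost([1, 3, 5]))      # 2
print(min_two_center_cost([3, 2, 5, 11]))  # 3
```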
Given an array of N elements, compute the sum of (min * max) across all the subarrays of the array.
e.g.
N = 5
Array: 5 7 2 3 9
output: 346
(5*5 + 7*7 + 2*2 + 3*3 + 9*9 + 5*7 + 2*7 + 2*3 + 3*9 + 2*7+2*7 + 2*9 + 2*7 + 2*9 + 2*9)
Here is the complete question.
I cannot think of anything better than O(n^2). The editorial solution uses segment trees, which I couldn't understand.
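For reference, the O(n^2) baseline is straightforward: grow each subarray to the right while maintaining its running min and max.

```python
def sum_min_times_max(a):
    """O(n^2): for each start index i, extend the subarray rightward,
    updating the running min and max incrementally."""
    total = 0
    for i in range(len(a)):
        lo = hi = a[i]
        for j in range(i, len(a)):
            lo = min(lo, a[j])
            hi = max(hi, a[j])
            total += lo * hi
    return total

print(sum_min_times_max([5, 7, 2, 3, 9]))  # 346, matching the example
```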
Hint regarding the editorial (the details of which I am uncertain about): if we can solve the problem in O(n) time for all the intervals that include both A[i] and A[i+1], where i divides A in half, then we can solve the whole problem in O(n log n) time using divide and conquer: solve the left and right halves separately, and add to that the intervals that overlap both halves.
input:
5 7 2|3 9
i (divides input in half)
Task: find solution in O(n) for all intervals that include 2 and 3.
5 7 2 3 9
xxx
2 2 -> prefix min
2 2 2 <- prefix min
2 4 -> prefix sum min
6 4 2 <- prefix sum min
3 9 -> prefix max
7 7 3 <- prefix max
Notice that because the prefix maxima are monotonically increasing, each max can be counted as extending back to the next higher element in the opposite prefix. For example, we can find that the 7 extends back to the 9 by moving pointers in either direction as we mark the current max. We then want to multiply each max by the relevant min prefix sum.
Relevant contributions as we extend pointers marking current max, and multiply max by the prefix sum min (remembering that intervals must span both 2 and 3):
3 * 2
7 * 2
7 * 2
9 * 6
These account for the following intervals:
5 7 2 3 9
---
-----
-------
-----
-------
---------
3*2 + 7*2 + 7*2 + 9*2 + 9*2 + 9*2
Now solve the problem for left and right separately and add. Doing this recursively is divide and conquer.
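A brute-force check of the crossing contribution for the example (0-based indices assumed; the split puts the 2 and the 3 at positions 2 and 3):

```python
a = [5, 7, 2, 3, 9]

# sum of min * max over every interval containing both a[2] = 2 and a[3] = 3
crossing = sum(min(a[i:j]) * max(a[i:j])
               for i in range(0, 3)       # start at or before the 2
               for j in range(4, 6))      # end at or after the 3
assert crossing == 3*2 + 7*2 + 7*2 + 9*6  # the contributions listed above
```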
Suppose I have a universe of values ranging up to n, say n = 11, that is,
U = 1 2 3 4 5 6 7 8 9 10 11
Now, I have an array-A (a sub-array of U):
1 3 4 9
and an array-B (another sub-array of U with nothing in common with A):
2 5 6 10
Note that all these 3 sets are sorted.
I have to calculate x(x+1)/2 for every gap x = a[i+1] - a[i] - 1, where i is an index into the array and a is whichever array we are processing.
Also consider the corner cases at both ends: subtract 1 from the first element and calculate x(x+1)/2 for the result, and subtract the last element from 11 and calculate x(x+1)/2 for that.
For example, for set A we get
(3-1-1)* + (4-3-1)* + (9-4-1)* + corner cases
Here the corner cases are: (1-1)* + (11-9)*
x* means x(x+1)/2
Similarly for set B : We have (5-2-1)* + (6-5-1)* + (10-6-1)* + (2-1)* + (11-10)*
Now I have to calculate the solution for (A U B) using set A and set B in O(1) time. Is there a way to do this?
For O(N) complexity, I can just merge the two arrays and apply the above formula.
A U B : 1,2,3,4,5,6,9,10
Therefore solution = (9-6-1)* + (11-10)*
Maybe you can use a telescoping sum
(here's an example).
I thought of it because at each step (each index of the array) you subtract a number you added in the previous step:
u[2] - u[1] - 1 + u[3] - u[2] - 1 + u[4] - u[3] - 1 + ... + u[n] - u[n-1] - 1
which, since there are n-1 of the "-1" terms, gives you:
u[n] - u[1] - (n - 1)
Moreover, A and B have nothing in common, which is why:
n = length(A) + length(B)
Additionally, since your numbers are sorted:
u[1] = min(A[1], B[1])
u[n] = max(A[length(A)], B[length(B)])
So we have:
Solution = max(A[length(A)], B[length(B)]) - min(A[1], B[1]) - (length(A) + length(B) - 1)
I hope I helped you with this problem. (Sorry if I made some mistakes in English.)
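A quick check of the telescoped gap sum on the example sets (note this covers only the linear sum of gaps, not the x(x+1)/2 terms of the original problem):

```python
A = [1, 3, 4, 9]
B = [2, 5, 6, 10]

u = sorted(A + B)                 # A and B are disjoint, so n = |A| + |B|
n = len(u)
gap_sum = sum(u[i + 1] - u[i] - 1 for i in range(n - 1))

# telescoped closed form: u[n] - u[1] - (n - 1), in 1-based notation
closed = max(A[-1], B[-1]) - min(A[0], B[0]) - (len(A) + len(B) - 1)
assert gap_sum == closed
```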
What will be the time complexity of the relation T(n) = nT(n-1) + n?
In my program it is something like this:
int c = 0;

void f(int n)
{
    c++;
    if (n > 0)
        for (int i = 1; i <= n; i++)
            f(n - 1);
}
I used the counter c to count how many times the function is called; it gives an answer between n and n!.
Thanks.
Your code lacks the +n part of the recursion, so I assume that the code is wrong and the recursion
T(n) = n*T(n-1) + n
is correct.
Let f(n)=T(n)/n!, then
f(n) = T(n)/n! = n(T(n-1)+1)/n!
= T(n-1)/(n-1)! + 1/(n-1)!
= f(n-1) + 1/(n-1)!
= sum(k = 0 .. n-1, 1/k!)
~ e
Thus T(n) ~ e*n!.
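Numerically, the ratio T(n)/n! does settle at e quickly; a small check (assuming T(0) = 0):

```python
import math

def T(n):
    """The recurrence T(n) = n*T(n-1) + n, with T(0) = 0."""
    return 0 if n == 0 else n * T(n - 1) + n

# T(n)/n! = sum of 1/k! for k = 0..n-1, which converges to e
assert abs(T(15) / math.factorial(15) - math.e) < 1e-9
```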
We can list out a few terms
T(0) = 0
T(1) = 1 * 0 + 1 = 1
T(2) = 2 * 1 + 2 = 4
T(3) = 3 * 4 + 3 = 15
T(4) = 4 * 15 + 4 = 64
...
We can note a couple of things. First, the function grows more quickly than n!: it starts out smaller (at n=0), catches up (at n=1), and surpasses it (for n >= 2). So we know that n! is a lower bound.
Now we need an upper bound. Notice that T(n) = nT(n-1) + n < nT(n-1) + nT(n-1) for all sufficiently large n (n >= 2, I think). But we can easily show that the recurrence S(n) = nS(n-1) is solved by n!, so T(n) = nT(n-1) + nT(n-1) = 2nT(n-1) is solved by (n!)(2^n). Can we do better?
I propose that we can. We can show that for any c > 0, T(n) = nT(n-1) + n < nT(n-1) + cnT(n-1) for sufficiently large values of n. We already know that T(n-1) is bounded below by (n-1)!; so, if we take c = n/(n-1)! we recover exactly our expression and we know that an upper bound is (c^n)(n!). What is the limit of c as n goes to infinity? 0. What is the maximum value assumed by [n/(n-1)!]^n?
Good luck computing that. Wolfram Alpha makes it fairly clear that the maximum value assumed by this function is around 5 or 6 for n ~ 2.5. Assuming you are convinced by that, what's the takeaway?
n! < T(n) < ~6n! for all n. n! is the Theta-bound for your recurrence.
The function is called roughly
n * f(n-1)
times, where f(n) denotes the number of calls made for input n. Replacing f(n-1) with this definition gives
n * ((n-1) * f(n-2))
Replacing again gives:
n * ((n-1) * ((n-2) * f(n-3)))
Removing brackets:
n * (n-1) * (n-2) * ... * 1
This gives:
n= 3: 3*2*1 = 6
n= 4: 4*3*2*1 = 24
n= 5: 5*4*3*2*1 = 120
which is n!.
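Counting the calls from the posted code directly (the counter c, i.e. calls(n) = 1 + n * calls(n-1)) confirms the factorial growth; in fact the count sits between n! and e * n!:

```python
import math

def calls(n):
    """Times f() runs for input n: this call plus n recursive calls."""
    if n <= 0:
        return 1
    return 1 + n * calls(n - 1)

# calls(n) / n! = sum of 1/k! for k = 0..n, so n! < calls(n) < e * n!
for n in range(1, 8):
    assert math.factorial(n) < calls(n) < math.e * math.factorial(n)
```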
So when a dynamic array is doubled in size each time it fills up, I understand how the time complexity for expanding is O(n), n being the number of elements. What about if the array is copied and moved to a new array that is only one slot bigger when it is full (instead of doubling)? More generally, when we resize by some constant C, is the time complexity always O(n)?
If you grow by some fixed constant C, then no, the runtime will not be O(n). Instead, it will be Θ(n^2).
To see this, think about what happens if you do a sequence of C consecutive operations. Of those operations, C - 1 of them will take time O(1) because space already exists. The last operation will take time O(n) because it needs to reallocate the array, add space, and copy everything over. Therefore, any sequence of C operations will take time O(n + C).
So now consider what happens if you perform a sequence of n operations. Break those operations up into blocks of size C; there will be n / C of them. The total work required to perform those operations will be
(C + C) + (2C + C) + (3C + C) + ... + (n + C)
= C · (n / C) + (C + 2C + 3C + ... + (n / C) · C)
= n + C(1 + 2 + 3 + ... + n / C)
= n + C · (n / C)(n / C + 1) / 2
= n + n(n / C + 1) / 2
= n + n^2 / (2C) + n / 2
= Θ(n^2)
Contrast this with the math for when you double the array size whenever you need more space: the total work done is
1 + 2 + 4 + 8 + 16 + 32 + ... + n
= 1 + 2 + 4 + 8 + ... + 2^(log n)
= 2^(log n + 1) - 1
= 2n - 1
= Θ(n)
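A simulation (counting only element copies, with an assumed growth step of C = 16 for the constant policy) makes the contrast concrete:

```python
def total_copies(n, grow):
    """Element copies made over n appends; grow(cap) returns the new
    capacity chosen when the array is full."""
    cap, size, copies = 1, 0, 0
    for _ in range(n):
        if size == cap:       # full: reallocate and copy `size` elements
            copies += size
            cap = grow(cap)
        size += 1
    return copies

n = 4096
doubling = total_copies(n, lambda cap: cap * 2)   # Θ(n) total copies
constant = total_copies(n, lambda cap: cap + 16)  # Θ(n^2) total copies
assert doubling < 2 * n
assert constant > 100 * doubling                  # quadratic blow-up
```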
Transplanted from SO Documentation.
Sums of powers of 2 — 1 + 2 + 4 + 8 + 16 + …
The sum
2^0 + 2^1 + 2^2 + ... + 2^(n-1)
simplifies to 2^n - 1. This explains why the maximum value that can be stored in an unsigned 32-bit integer is 2^32 - 1.
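The identity (and its 32-bit consequence) can be checked directly:

```python
# 2^0 + 2^1 + ... + 2^(n-1) == 2^n - 1
for n in (1, 8, 16, 32):
    assert sum(2**k for k in range(n)) == 2**n - 1

# a 32-bit word of all ones is the unsigned maximum, 2^32 - 1
assert sum(2**k for k in range(32)) == 0xFFFFFFFF
```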