void foo(int n) {
    int i = 1, sum = 1;
    while (sum <= n) {
        i++;
        sum += i;
    }
}
As I understand it, the loop terminates only once sum becomes greater than the argument n.
The sum at the j-th iteration is S(j) = S(j-1) + j:
S(1) = S(0) + 1
S(2) = S(1) + 2
S(3) = S(2) + 3
...
S(n) = S(n-1) + n
How should I go further? I am stuck at this recurrence relation.
The loop will terminate when 1 + 2 + 3 + ... + j becomes greater than n, but I am not convinced that this reasoning is correct.
The classic way to prove this is to write the series out twice, in both orders, i.e.:
S(n) = 1 + 2 + 3 + ...+ n-2 + n-1 + n
S(n) = n + (n-1) + (n-2) + ...+ 3 + 2 + 1
If you sum those term by term you get
2S(n) = (n+1) + (n+1) + (n+1) + ... + (n+1)
with n terms,
therefore
S(n) = n*(n+1)/2
(A result allegedly discovered by Gauss as a schoolboy)
From this it follows that it takes O(sqrt(n)) iterations.
You have almost done it. The last thing to note is that 1 + 2 + 3 + ... + j is (1 + j) * j / 2, which is O(j^2). This is the well-known formula for the sum of an arithmetic progression.
It will break after k iterations when
1 + 2 + .... + k > n
k*(k+1)/2 > n
k*k + k - 2*n > 0
Solving the quadratic and discarding the negative root gives k = (-1 + sqrt(1 + 8n))/2.
Hence, the time complexity is O(sqrt(n)).
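As a quick sanity check of the derivation above, here is a small C++ sketch (my own, not from the original posts) that instruments the loop from the question and compares the observed iteration count with the bound (-1 + sqrt(1 + 8n))/2:

#include <cmath>
#include <iostream>

// Count how many times the loop body from the question executes.
int iterations(int n) {
    int i = 1, sum = 1, count = 0;
    while (sum <= n) {
        i++;
        sum += i;
        count++;
    }
    return count;
}

int main() {
    for (int n : {10, 100, 1000000}) {
        double k = (-1 + std::sqrt(1.0 + 8.0 * n)) / 2; // bound derived above
        std::cout << "n = " << n << ": " << iterations(n)
                  << " iterations, bound " << k << "\n";
    }
}

For n = 10 this prints 4 iterations against a bound of 4.0, and the agreement stays within one iteration for larger n, consistent with O(sqrt(n)).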
I do not understand how this piece of code has a complexity of O(log n). How many times will the statements inside the loop execute?
int a = 0, i = N;
while (i > 0) {
    a += i;
    i /= 2;
}
Let N = 16; the number of iterations is
i | loop
--------
16 | 1
8 | 2
4 | 3
2 | 4
1 | 5
You can see that log2(16) = 4 and the number of iterations is log2(16) + 1. Alternatively, you can set up a recurrence for it:
F(N) = F(N/2) + 1. Suppose N = 2^K; then we have:
F(N) = F(N/2) + 1
F(N) = F(N/4) + 1 + 1
F(N) = F(N/8) + 1 + 1 + 1 = F(N/(2^3)) + 3
...
F(N) = F(N/2^K) + K => F(2^K/2^K) + K => F(1) + K => 1 + K
So if N = 2^K, then K = log2(N).
The number of iterations of the loop is the number of significant bits in N, because i is halved at the end of each iteration. Hence the loop iterates floor(log2(N)) + 1 times for N > 0 and not at all otherwise. Each iteration performs three simple operations (an addition, a division, and a test), so the time complexity is O(log(N)).
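To make the counting argument concrete, here is a short C++ check (my own addition, not from the original answer) that runs the halving loop and compares the iteration count with floor(log2(N)) + 1:

#include <cmath>
#include <iostream>

int main() {
    for (int N : {1, 16, 1000000}) {
        int a = 0, i = N, count = 0;
        while (i > 0) {        // the loop from the question
            a += i;
            i /= 2;
            count++;
        }
        std::cout << "N = " << N << ": " << count << " iterations, expected "
                  << (int)std::log2(N) + 1 << "\n";
    }
}

For N = 16 this prints 5 iterations, matching the table above.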
Given an array, find all subarrays, multiply the sum of each subarray by its last element, and compute the total.
E.g.: [1,2,3]
ans = 1*1 + 2*2 + 3*3 + (1+2)*2 + (2+3)*3 + (1+2+3)*3 = 53
I've tried the following logic:
init_sum = sum(array)
prev = array[0]
ans = 0
for i = 1 to n:
    ans += array[i] * array[i] * (init_sum - prev)
    init_sum -= prev
    prev = array[i]
end loop
ans += sum(array)
print(ans)
But this does not work for arrays with repeated elements.
E.g.: for [1,3,3] I get ans = 88 where it should be 70.
Any hint would be appreciated.
There is a recursive formula for each element of the array.
For the i-th element, notice that its contribution to the overall sum equals the sum of all subarrays that end at this element, times the value of the element itself.
I.e., for the array data = [a, b, c, d], the result will be:
result = a * a
+ ((a + b) + b) * b
+ ((a + b + c) + (b + c) + c) * c
+ ((a + b + c + d) + (b + c + d) + (c + d) + d) * d
Call the factor multiplied with the value of the i-th element xi. We make the observation that
xi = x(i - 1) + i*data[i]
So, in case of the above array
x1 = a
x2 = ((a + b) + b) = x1 + 2*b
x3 = ((a + b + c) + (b + c) + c) = x2 + 3*c
x4 = ((a + b + c + d) + (b + c + d) + (c + d) + d) = x3 + 4*d
Thus, we have our solution
init_sum = 0
ans = 0
for i = 0 to n - 1:
    init_sum = init_sum + (i + 1)*array[i]
    ans += init_sum*array[i]
end loop
print(ans)
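Here is a direct C++ translation of that pseudocode (a sketch of my own; solve is a hypothetical name), checked against both examples from the question:

#include <iostream>
#include <vector>

long long solve(const std::vector<long long>& a) {
    long long running = 0, ans = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        running += (long long)(i + 1) * a[i]; // x_i = x_{i-1} + (i+1)*a[i], 0-indexed
        ans += running * a[i];                // subarrays ending at i, times a[i]
    }
    return ans;
}

int main() {
    std::cout << solve({1, 2, 3}) << "\n"; // 53
    std::cout << solve({1, 3, 3}) << "\n"; // 70
}

This runs in O(n), and the repeated-elements case [1,3,3] now gives 70 as required.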
The algorithm you are after can be divided into the following steps:
1- Generate the power set of the given array.
2- For each subset, sum its elements, multiply that sum by the last element of the subset, and add the product to the total sum.
Let's see this in action for [1 3 3]. I use the algorithm below to generate the power set and store it in res.
Step 1:
void generateSubsets(int i, vector<int>& nums, vector<int>& holder, vector<vector<int>>& res)
{
    if (i >= (int)nums.size())
    {
        if (!holder.empty())       // skip the empty subset; it has no last element
            res.push_back(holder);
        return;
    }
    holder.push_back(nums[i]);     // branch 1: include nums[i]
    generateSubsets(i + 1, nums, holder, res);
    holder.pop_back();             // branch 2: exclude nums[i]
    generateSubsets(i + 1, nums, holder, res);
}
Where I call this function as:
vector<vector<int>> res;
vector<int> holder;
vector<int> nums = {1,3,3};
generateSubsets(0,nums,holder,res);
The generated subsets in res are:
[1 3 3],[1 3],[1 3],[1],[3 3],[3],[3]
Step 2: Sum the elements in each set and multiply by the last element of the set, so we have:
[1+3+3]*3 + [1+3]*3 + [1+3]*3 + [1]*1 + [3+3]*3 + [3]*3 + [3]*3
Cumulative totalsum will then be:
totalsum = 21 + 12 + 12 + 1 + 18 + 3 + 3 = 70
I just came across this question in a programming contest and was unable to solve it within the required time constraints, so I am curious about the right approach. Any suggestions would be helpful.
Input
Given an array a[] with n elements, where n < 1000, and
an integer k, where k < 10^9.
Construct a new matrix b where b[i][j]=a[i]*a[j].
Output
The number of possible submatrices with sum k.
Test case
a[]={1,-1}
k=0
output=5
Explanation
b = { 1, -1,
     -1,  1 }
so there are 2 row submatrices and 2 column submatrices with sum 0, plus the complete matrix, for a total of 5.
I tried to solve using something like this https://math.stackexchange.com/questions/639172/submatrix-with-sum-k
For a submatrix spanning rows i..i+m and columns j..j+n of b, the sum equals
X = a[i]*(a[j] + a[j + 1] + ... + a[j + n])
  + a[i + 1]*(a[j] + a[j + 1] + ... + a[j + n])
  + ...
  + a[i + m]*(a[j] + a[j + 1] + ... + a[j + n])
  = (a[i] + ... + a[i + m]) * (a[j] + ... + a[j + n])
So the task is reduced to finding two segments of array a whose sums multiply to k.
All segment sums of a can be found in O(n^2).
Store each sum with its multiplicity in a hash map (a plain HashSet loses the counts), and the matching pairs can then be counted, for O(n^2) time complexity overall.
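A minimal C++ sketch of this reduction (my own; countSubmatrices is a hypothetical name), assuming k != 0 for simplicity; the k == 0 case needs separate handling, since any segment can then pair with a zero-sum segment:

#include <iostream>
#include <unordered_map>
#include <vector>

long long countSubmatrices(const std::vector<long long>& a, long long k) {
    std::unordered_map<long long, long long> sumCount;
    int n = (int)a.size();
    for (int i = 0; i < n; ++i) {              // all O(n^2) segment sums of a
        long long s = 0;
        for (int j = i; j < n; ++j) {
            s += a[j];
            ++sumCount[s];
        }
    }
    long long pairs = 0;
    for (const auto& entry : sumCount) {       // row-segment sum s ...
        long long s = entry.first;
        if (s != 0 && k % s == 0) {            // ... needs column-segment sum k/s
            auto it = sumCount.find(k / s);
            if (it != sumCount.end()) pairs += entry.second * it->second;
        }
    }
    return pairs;
}

int main() {
    // For a = {1, -1}, the two submatrices of b with sum 1 are the single
    // cells b[0][0] and b[1][1].
    std::cout << countSubmatrices({1, -1}, 1) << "\n"; // 2
}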
I've just found out how to solve this in O(n^2 log n) time (assuming each array has the same length):
for each A[i]:
    for each B[j]:
        if A[i] + B[j] + C.binarySearch(S - A[i] - B[j]) == S:
            return (i, j, k)
Is there any way to solve this in O(n^2) time or to improve the above algorithm?
The algorithm you have ain't bad. Relative to n^2, log(n) grows so slowly that it can practically be considered a constant. For example, for n = 1000000, n^2 = 1000000000000 and log(n) = 20. Once n becomes large enough for log(n) to have any significant influence, n^2 will already be so big that the result cannot be computed anyway.
A solution, inspired by @YvesDaoust, though I'm not sure if it's exactly the same:
1. For every A[i], calculate the remainder R = S - A[i], which should be a combination of some B[j] and C[k];
2. Let j = 0 and k = |C| - 1 (the last index in C);
3. If B[j] + C[k] < R, increase j;
4. If B[j] + C[k] > R, decrease k;
5. Repeat the two previous steps until B[j] + C[k] = R, or j >= |B|, or k < 0 (see the sketch below).
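Here is a compact C++ sketch of that scan (my own; threeSumExists is a hypothetical name). It assumes B and C may be sorted up front, which the two-pointer movement requires:

#include <algorithm>
#include <vector>

bool threeSumExists(std::vector<int> A, std::vector<int> B, std::vector<int> C, int S) {
    std::sort(B.begin(), B.end());
    std::sort(C.begin(), C.end());
    for (int a : A) {
        int R = S - a;                        // remainder to form from B and C
        int j = 0, k = (int)C.size() - 1;
        while (j < (int)B.size() && k >= 0) {
            int t = B[j] + C[k];
            if (t < R) ++j;                   // too small: move forward in B
            else if (t > R) --k;              // too big: move back in C
            else return true;                 // B[j] + C[k] == R
        }
    }
    return false;
}

Each A[i] costs O(|B| + |C|), so the whole scan is O(n^2) after the O(n log n) sorts.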
I suggest not complicating the algorithm too much with micro-optimizations. For any reasonably small set of numbers it will be fast enough. If the arrays become too large for this approach, your problem would make a good candidate for Machine Learning approaches such as Hill Climbing.
If the arrays contain only non-negative numbers:
* you can trim all 3 arrays at S, i.e. discard every element greater than S
* similarly, don't bother checking array C if A[aIdx] + B[bIdx] > S
prepare:
    sort each array ascending                 +O(N.log(N))
    implement binary search on each array     ?O(log(N))
compute:
    i = bin_search(smallest i such that A[i]+B[0]+C[0] >= S);
    for (; i < Na; i++) {
        if (A[i]+B[0]+C[0] > S) break;
        j = bin_search(smallest j such that A[i]+B[j]+C[0] >= S);
        for (; j < Nb; j++) {
            if (A[i]+B[j]+C[0] > S) break;
            ss = S - A[i] - B[j];
            if (ss < 0) break;
            k = bin_search(ss in C);
            if (k found) return;              // found solution: i, j, k
        }
    }
If I see it right, with N = max(Na, Nb, Nc) and M = max(sizes of the valid intervals of A, B, C), so M <= N,
this is 3*N*log(N) + log(N) + M*(log(N) + M*log(N)) -> O((M^2)*log(N)).
The j binary search can be called just once and then incremented as needed;
the complexity class is the same, but the constant factors shrink.
Under average conditions this is much, much faster, because M << N.
The O(N²) solution is very simple.
First consider the case of two arrays, finding A[i] + B[j] = S'.
This can be rewritten as A[i] = S' - B[j] = B'[j]: you need to find equal values in two sorted arrays. This is readily done in linear time with a merging process. (You can explicitly compute the array B' but this is unnecessary, just do it on the fly: instead of fetching B'[j], get S' - B[NB-1-j]).
Having established this procedure, it suffices to use it for all elements of C, in search of S - C[k].
Here is Python code that does that and reports all solutions. (It has been rewritten to be compact and symmetric.)
for k in range(NC):
    # Find S - C[k] in top-to-tail merged A and B
    i, j = 0, NB - 1
    while i < NA and 0 <= j:
        if A[i] + B[j] + C[k] < S:
            # Move forward in A
            i += 1
        elif A[i] + B[j] + C[k] > S:
            # Move back in B
            j -= 1
        else:
            # Found
            print(A[i] + B[j] + C[k], "=", A[i], "+", B[j], "+", C[k])
            i += 1; j -= 1
Execution with
A = [1, 2, 3, 4, 5, 6, 7]; NA = len(A)
B = [2, 3, 5, 7, 11]; NB = len(B)
C = [1, 1, 2, 3, 5, 7]; NC = len(C)
S = 15
gives
15 = 3 + 11 + 1
15 = 7 + 7 + 1
15 = 3 + 11 + 1
15 = 7 + 7 + 1
15 = 2 + 11 + 2
15 = 6 + 7 + 2
15 = 1 + 11 + 3
15 = 5 + 7 + 3
15 = 7 + 5 + 3
15 = 3 + 7 + 5
15 = 5 + 5 + 5
15 = 7 + 3 + 5
15 = 1 + 7 + 7
15 = 3 + 5 + 7
15 = 5 + 3 + 7
15 = 6 + 2 + 7
So when a dynamic array is doubled in size each time it fills up, I understand how the time complexity for expanding is O(n), n being the number of elements. But what if the array is instead copied and moved to a new array that is only 1 size bigger when it is full (instead of doubling)? When we resize by some constant C, is the time complexity always O(n)?
If you grow by some fixed constant C, then no, the runtime will not be O(n). Instead, it will be Θ(n^2).
To see this, think about what happens if you do a sequence of C consecutive operations. Of those operations, C - 1 of them will take time O(1) because space already exists. The last operation will take time O(n) because it needs to reallocate the array, add space, and copy everything over. Therefore, any sequence of C operations will take time O(n + C).
So now consider what happens if you perform a sequence of n operations. Break those operations up into blocks of size C; there will be n / C of them. The total work required to perform those operations will be
(C + C) + (2C + C) + (3C + C) + ... + (n + C)
= C*(n/C) + (C + 2C + 3C + ... + (n/C)*C)
= n + C*(1 + 2 + 3 + ... + n/C)
= n + C*(n/C)*(n/C + 1)/2
= n + n*(n/C + 1)/2
= n + n^2/(2C) + n/2
= Θ(n^2)
Contrast this with the math for when you double the array size whenever you need more space: the total work done is
1 + 2 + 4 + 8 + 16 + 32 + ... + n
= 1 + 2 + 4 + 8 + ... + 2^(log n)
= 2^(log n + 1) - 1
= 2n - 1
= Θ(n)
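A rough empirical check of the two policies (my own sketch; the function names are made up): count the element copies performed while appending n items under grow-by-C and under doubling:

#include <iostream>

long long copiesGrowByC(int n, int C) {
    long long copies = 0;
    int cap = 0, size = 0;
    for (int i = 0; i < n; ++i) {
        if (size == cap) { copies += size; cap += C; } // reallocate, copy all
        ++size;
    }
    return copies;
}

long long copiesDoubling(int n) {
    long long copies = 0;
    int cap = 1, size = 0;
    for (int i = 0; i < n; ++i) {
        if (size == cap) { copies += size; cap *= 2; } // reallocate, copy all
        ++size;
    }
    return copies;
}

int main() {
    std::cout << copiesGrowByC(100000, 10) << "\n"; // ~n^2/(2C): 499950000
    std::cout << copiesDoubling(100000) << "\n";    // ~2n: 131071
}

The quadratic blow-up of the constant-growth policy is plainly visible even at n = 100000.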
Sums of powers of 2 — 1 + 2 + 4 + 8 + 16 + …
The sum
2^0 + 2^1 + 2^2 + ... + 2^(n-1)
simplifies to 2^n - 1. This explains why the maximum value that can be stored in an unsigned 32-bit integer is 2^32 - 1.
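A quick numeric check of this identity (my own snippet, not part of the original answer):

#include <cstdint>
#include <iostream>

int main() {
    // Sum 2^0 + 2^1 + ... + 2^31 and compare with 2^32 - 1.
    std::uint64_t sum = 0;
    for (int i = 0; i < 32; ++i)
        sum += std::uint64_t(1) << i;
    std::cout << sum << " == " << ((std::uint64_t(1) << 32) - 1) << "\n";
    // Prints: 4294967295 == 4294967295
}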