Sum of subarray multiplied by last element of subarray - arrays

Given an array, find all of its (contiguous) sub-arrays, multiply the sum of each sub-array by the last element of that sub-array, and add everything up.
E.g.: [1,2,3]
ans = 1*1 + 2*2 + 3*3 + (1+2)*2 + (2+3)*3 + (1+2+3)*3 = 53
I've tried the following logic:
init_sum = sum(array)
prev = array[0]
ans = 0
for i = 1 to n:
    ans += array[i]*array[i]*(init_sum-prev)
    init_sum -= prev
    prev = array[i]
end loop
ans += sum(array)
print(ans)
But this does not work for arrays with repeated elements, e.g. [1,3,3]: I'm getting ans = 88 where it should be 70.
Any hint would be appreciated.

There is a recursive formula for each element of the array.
For the i-th element, notice that its contribution to the overall result equals the sum of the sums of all sub-arrays that end at this element, multiplied by the value of the element itself.
I.e.
For array data[a, b, c, d], the result will be:
result = a * a
+ ((a + b) + b) * b
+ ((a + b + c) + (b + c) + c) * c
+ ((a + b + c + d) + (b + c + d) + (c + d) + d) * d
Calling the factor that multiplies the value of the i-th element x(i), we make the observation that
x(i) = x(i - 1) + i*data[i]
So, in case of the above array
x1 = a
x2 = ((a + b) + b) = x1 + 2*b
x3 = ((a + b + c) + (b + c) + c) = x2 + 3*c
x4 = ((a + b + c + d) + (b + c + d) + (c + d) + d) = x3 + 4*d
Thus, we have our solution
init_sum = 0
ans = 0
for i = 0 to n-1:
    init_sum = init_sum + (i + 1)*array[i]
    ans += init_sum*array[i]
end loop
print(ans)
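For example, a short Python version of this recurrence (the function name is just for illustration), checked against both examples from the question:
def subarray_last_sum(array):
    # x is the running factor x(i); ans accumulates x(i) * array[i]
    x = 0
    ans = 0
    for i, value in enumerate(array):
        x += (i + 1) * value      # x(i) = x(i-1) + (i+1)*array[i] with 0-based i
        ans += x * value
    return ans

print(subarray_last_sum([1, 2, 3]))  # 53
print(subarray_last_sum([1, 3, 3]))  # 70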

The algorithm you are after can be divided into the following steps:
1- Generate every (contiguous) sub-array of the given array.
2- For each sub-array, add up its elements, multiply that sum by the last element of the sub-array, and add the product to the running totalsum.
Let's see this in action for [1 3 3]. I use the function below to generate the sub-arrays and store them in res.
Step 1:
void generateSubarrays(vector<int>& nums, vector<vector<int>>& res)
{
    for(size_t i = 0; i < nums.size(); i++)
    {
        vector<int> holder;
        for(size_t j = i; j < nums.size(); j++)
        {
            holder.push_back(nums[j]);
            res.push_back(holder); // sub-array nums[i..j]
        }
    }
}
Where I call this function as:
vector<vector<int>> res;
vector<int> nums = {1,3,3};
generateSubarrays(nums, res);
The generated sub-arrays in res are:
[1],[1 3],[1 3 3],[3],[3 3],[3]
Step 2: add the elements of each sub-array and multiply by the last element of that sub-array, so we have
[1]*1 + [1+3]*3 + [1+3+3]*3 + [3]*3 + [3+3]*3 + [3]*3
The cumulative totalsum will then be:
totalsum = 1 + 12 + 21 + 9 + 18 + 9 = 70


Exponentiation complexity

Based on this code:
int poow(int x,int y)
{
    if(y==0)
        return 1;
    if(y%2 != 0)
        return poow(x,y-1)*x;
    return poow(x,y/2)*poow(x,y/2); //this line
}
I tried to see the complexity: We suppose that we have n=2^k
we have T(0)=1
T(n)=2*T(n/2)+C
T(n)=2^i * T(n/2^i)+i*c
for i=k we have T(n)=2^k * T(n/2^k) + k * c
T(n)=2^k * T(1) + k*c
T(n)=2^k * c2 + k * c
I am stuck here. How can I continue the computation of the complexity, and what is the difference, in terms of complexity, when changing this line:
return poow(x,y/2)*poow(x,y/2); //this line
to
int p=poow(x,y/2);
return p*p;
Start off with a proper recurrence. The complexity is solely based on y, so we can write the recurrence as
T(0) = 1
T(y) = 2 * T(y / 2)     if y is even
T(y) = T(y - 1) + 1     if y is odd
Worst-case would be that every division by 2 leaves us with an odd number, which leads to a complexity of
T(2^n-1) = 1 + 2 * (1 + 2 * (1 + 2 * ( ... * T(1)))) =
= 2 ^ 0 + 2 ^ 1 + 2 ^ 2 + 2 ^ 3 + ... + 2 ^ (n - 1) + 2 ^ (n - 1) =
= 2 ^ n - 1 + 2 ^(n - 1) = 3 * 2 ^ (n - 1) - 1
T(y) = O(y)
Best-case would be a power of 2:
T(2^n) = 2 * 2 * ... * 2 * T(1) = 2 ^ n * (1 + 1) = 2 ^ (n + 1) = 2 * 2 ^ n
T(y) = O(y)
Now what if we optimized the whole function?
T'(0) = 1
T'(y) = T'(y / 2) + 1     if y is even
T'(y) = T'(y - 1) + 1     if y is odd
Worst case:
T'(2^n - 1) = T'(2^n - 2) + 1 = T'(2^(n - 1) - 1) + 1 + 1 = ... =
= T'(1) + 1 + 1 + 1 + ... =
= 2 + 1 + 1 + 1 + ... =
= 1 + ln(2^n) / ln(2) * 2 =
= 1 + 2 * n
T'(y) = O(log y)
Best case:
T'(2 ^ n) = T'(1) + 1 + 1 + ... =
= 2 + 1 + 1 + ... =
= 2 + ln(2^n) / ln(2)
= n + 2
T'(y) = O(log y)
So the optimized version is definitely faster (linear vs logarithmic complexity).
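Here is a small Python sketch (mine, not part of the original posts) that counts recursive calls for both variants; the call count grows roughly linearly in y for the double-recursion version and roughly logarithmically once the half-result is reused:

def pow_naive(x, y, calls):
    # mirrors the original poow: the even case recurses twice
    calls[0] += 1
    if y == 0:
        return 1
    if y % 2 != 0:
        return pow_naive(x, y - 1, calls) * x
    return pow_naive(x, y // 2, calls) * pow_naive(x, y // 2, calls)

def pow_fast(x, y, calls):
    # optimized version: compute the half once and square it
    calls[0] += 1
    if y == 0:
        return 1
    if y % 2 != 0:
        return pow_fast(x, y - 1, calls) * x
    p = pow_fast(x, y // 2, calls)
    return p * p

for y in (15, 255, 4095):
    c1, c2 = [0], [0]
    pow_naive(2, y, c1)
    pow_fast(2, y, c2)
    print(y, c1[0], c2[0])   # e.g. y = 15 gives 30 calls vs 8 calls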
Sometimes code analysis focuses on the theory and not on the actual code.
The code has a bug:
if(y%2 != 0)
    return poow(x,y-1)*x;
// should be
if(y%2 != 0)
    return poow(x,y-y%2)*x;
// better alternative: use
unsigned poow(unsigned x, unsigned y)
Without the fix, calling poow(1,-1) causes infinite recursion and most likely a stack overflow. So the complexity is unbounded regardless of whether the last line is return poow(x,y/2)*poow(x,y/2); or int p=poow(x,y/2); return p*p;.

Sum of all subparts of an array of integers

Given an array {1,3,5,7}, its subparts are defined as {1357,135,137,157,357,13,15,17,35,37,57,1,3,5,7}.
I have to find the sum of all these numbers. In this case the sum comes out to be 2333.
Please help me find a solution in O(n); my O(n^2) solution times out.
link to the problem is here or here.
My current attempt (at finding a pattern) is
for(i = 0 to len)    // len is the length of the array
{
    for(j = 0 to len-i)
    {
        sum += arr[i]*pow(10,j)*((len-i) C i)*pow(2,i)
    }
}
In words: (len-i) C i = (number of integers to the right) C weight (a binomial coefficient, as in permutations and combinations),
and 2^i = 2 to the power of (number of integers to the left).
Thanks
You can easily solve this problem with a simple recursion.
def F(arr):
    if len(arr) == 1:
        return (arr[0], 1)
    else:
        r = F(arr[:-1])
        return (11 * r[0] + (r[1] + 1) * arr[-1], 2 * r[1] + 1)
So, how does it work? It is simple. Let's say we want to compute the sum of all subparts of {1,3,5,7}. Assume that we know the number of subparts of {1,3,5} and the sum of the subparts of {1,3,5}; then we can easily compute the result for {1,3,5,7} using the following formula:
SUM_SUBPART({1,3,5,7}) = 11 * SUM_SUBPART({1,3,5}) + NUMBER_COMBINATION({1,3,5}) * 7 + 7
This formula can easily be derived by observation. Let's say we have all subparts of {1,3,5}:
A = [135, 13, 15, 35, 1, 3, 5]
We can easily create a list of {1,3,5,7} by
A = [135, 13, 15, 35, 1, 3, 5] +
[135 * 10 + 7,
13 * 10 + 7,
15 * 10 + 7,
35 * 10 + 7,
1 * 10 + 7,
3 * 10 + 7,
5 * 10 + 7] + [7]
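For example, calling it on the array from the question:
print(F([1, 3, 5, 7]))   # prints (2333, 15): the subpart sum and the number of subparts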
Well, you could look at the subparts as sums of numbers:
1357 = 1000*1 + 100*3 + 10*5 + 1*7
135 = 100*1 + 10*3 + 1*5
137 = 100*1 + 10*3 + 1*7
etc..
So, all you need to do is work out, according to the number of items, which multiplier each position contributes, and multiply each number by its position's multiplier:
Two numbers [x, y]:
[x, y, 10x+y]
=> the multiplier of x is 1 + 10 = 11 and the multiplier of y is 1 + 1 = 2
Three numbers [x, y, z]:
[x, y, z,
10x+y, 10x+z, 10y+z,
100x+10y+z]
=> the multipliers are 1+10+10+100 = 121 for x, 1+1+10+10 = 22 for y and 1+1+1+1 = 4 for z
You can easily work out the multipliers for n numbers....
If you expand invisal's recursive solution you get this explicit formula:
subpart sum = sum for k=0 to N-1 of: 11^(N-1-k) * 2^k * a[k]
This suggests the following O(n) algorithm:
multiplier = 1
for k from 0 to N-1:
    a[k] = a[k]*multiplier
    multiplier = multiplier*2
multiplier = 1
sum = 0
for k from N-1 down to 0:
    sum = sum + a[k]*multiplier
    multiplier = multiplier*11
Multiplication and addition should be done modulo M of course.
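A short Python sketch of this algorithm (my code; M = 10^9 + 7 is just a stand-in for whatever modulus the original problem requires):
def subpart_sum(a, M=10**9 + 7):
    n = len(a)
    # scale each element by 2^k: a[k] is the last digit of 2^k subparts
    # (any subset of the k earlier elements can precede it)
    weighted = []
    multiplier = 1
    for k in range(n):
        weighted.append(a[k] * multiplier % M)
        multiplier = multiplier * 2 % M
    # accumulate right-to-left: each later element contributes a factor 11
    # (every subpart is kept once as-is and once shifted with the new digit appended)
    total = 0
    multiplier = 1
    for k in range(n - 1, -1, -1):
        total = (total + weighted[k] * multiplier) % M
        multiplier = multiplier * 11 % M
    return total

print(subpart_sum([1, 3, 5, 7]))   # 2333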

Given 3 sorted arrays A, B, C and number S, find i, j, k such that A[i] + B[j] + C[k] = S

I've just found out how to solve this in O(n^2 log n) time (assuming each array has the same length):
for each A[i]:
    for each B[j]:
        k = C.binarySearch(S - A[i] - B[j])
        if k found:   // i.e. A[i] + B[j] + C[k] == S
            return (i, j, k)
Is there any way to solve this in O(n^2) time or to improve the above algorithm?
The algorithm you have ain't bad. Relative to n^2, log(n) grows so slowly that it can practically be considered a constant. For example, for n = 1000000, n^2 = 1000000000000 and log(n) = 20. Once n becomes large enough for log(n) to have any significant influence, n^2 will already be so big that the result cannot be computed anyway.
A solution, inspired by @YvesDaoust's answer, though I'm not sure if it's exactly the same (see the sketch after this list):
For every A[i], calculate the remainder R = S - A[i], which has to be the sum of some B[j] and C[k];
Let j = 0 and k = |C|-1 (the last index in C);
If B[j] + C[k] < R, increase j;
If B[j] + C[k] > R, decrease k;
Repeat the two previous steps until B[j] + C[k] = R, or j >= |B|, or k < 0.
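A minimal Python sketch of those steps (my code, assuming 0-indexed lists and that any single matching triple is acceptable):
def find_triple(A, B, C, S):
    # for each A[i], scan B from the left and C from the right looking for the remainder
    for i, a in enumerate(A):
        R = S - a
        j, k = 0, len(C) - 1
        while j < len(B) and k >= 0:
            t = B[j] + C[k]
            if t < R:
                j += 1
            elif t > R:
                k -= 1
            else:
                return (i, j, k)
    return None

A = [1, 2, 3, 4, 5, 6, 7]
B = [2, 3, 5, 7, 11]
C = [1, 1, 2, 3, 5, 7]
print(find_triple(A, B, C, 15))   # (0, 3, 5): A[0] + B[3] + C[5] = 1 + 7 + 7 = 15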
I suggest not to complicate the algorithm too much with micro-optimizations. For any reasonably small set of numbers it will be fast enough. If the arrays become too large for this approach, your problem would be a good candidate for heuristic search approaches such as hill climbing.
If the arrays contain only non-negative numbers:
* you can trim each array at the first element that is greater than S (such elements can never be part of a solution)
* similarly, don't bother checking array C if A[aIdx] + B[bIdx] > S
prepare:
* sort each array ascending: +O(N*log(N))
* implement binary search on each array: O(log(N)) per query
compute:
    i = bin_search(smallest i such that A[i]+B[0]+C[0] >= S)
    for (; i < Na; i++) {
        if (A[i]+B[0]+C[0] > S) break;
        j = bin_search(smallest j such that A[i]+B[j]+C[0] >= S)
        for (; j < Nb; j++) {
            if (A[i]+B[j]+C[0] > S) break;
            ss = S - A[i] - B[j];
            if (ss < 0) break;
            k = bin_search(ss);
            if (k found) return; // found solution is: i,j,k
        }
    }
If I see it right, with N = max(Na,Nb,Nc) and M = max(lengths of the valid intervals of A,B,C), M <= N,
the cost is 3*N*log(N) + log(N) + M*(log(N) + M*log(N)) -> O((M^2)*log(N)).
The j binary search can be called just once and the index then incremented as needed;
the complexity class is the same, but N has effectively been replaced by M,
and for average inputs this is much, much faster because M << N.
The O(N²) solution is very simple.
First consider the case of two arrays, finding A[i] + B[j] = S'.
This can be rewritten as A[i] = S' - B[j] = B'[j]: you need to find equal values in two sorted arrays. This is readily done in linear time with a merging process. (You can explicitly compute the array B' but this is unnecessary, just do it on the fly: instead of fetching B'[j], get S' - B[NB-1-j]).
Having established this procedure, it suffices to use it for all elements of C, in search of S - C[k].
Here is Python code that does that and reports all solutions. (It has been rewritten to be compact and symmetric.)
for k in range(NC):
    # Find S - C[k] in top-to-tail merged A and B
    i, j = 0, NB - 1
    while i < NA and 0 <= j:
        if A[i] + B[j] + C[k] < S:
            # Move forward in A
            i += 1
        elif A[i] + B[j] + C[k] > S:
            # Move back in B
            j -= 1
        else:
            # Found
            print(A[i] + B[j] + C[k], "=", A[i], "+", B[j], "+", C[k])
            i += 1; j -= 1
Execution with
A= [1, 2, 3, 4, 5, 6, 7]; NA= len(A)
B= [2, 3, 5, 7, 11]; NB= len(B)
C= [1, 1, 2, 3, 5, 7]; NC= len(C)
S= 15
gives
15 = 3 + 11 + 1
15 = 7 + 7 + 1
15 = 3 + 11 + 1
15 = 7 + 7 + 1
15 = 2 + 11 + 2
15 = 6 + 7 + 2
15 = 1 + 11 + 3
15 = 5 + 7 + 3
15 = 7 + 5 + 3
15 = 3 + 7 + 5
15 = 5 + 5 + 5
15 = 7 + 3 + 5
15 = 1 + 7 + 7
15 = 3 + 5 + 7
15 = 5 + 3 + 7
15 = 6 + 2 + 7

What is the running time complexity of the following piece of code?

void foo(int n){
    int i = 1, sum = 1;
    while (sum <= n) {
        i++;
        sum += i;
    }
}
What I feel is that the loop will terminate only when the sum becomes greater than the argument n.
And the sum at the j-th iteration is: S(j) = S(j-1) + j
S(1) = S(0) + 1
S(2) = S(1) + 2
S(3) = S(2) + 3
...
S(n) = S(n-1) + n
How should I proceed? I am stuck at this recurrence relation.
The loop will terminate when 1 + 2 + 3 + ... + j becomes greater than n, but I am not sure whether this reasoning is correct.
The classic way to prove this is to write the series out twice, in both orders ie:
S(n) = 1 + 2 + 3 + ...+ n-2 + n-1 + n
S(n) = n + (n-1) + (n-2) + ...+ 3 + 2 + 1
If you sum those term by term you get
2S(n) = (n+1) + (n+1) + (n+1) + ... + (n+1)
with n terms, therefore
S(n) = n*(n+1)/2
(A result allegedly discovered by Gauss as a schoolboy)
From this it follows that it takes O(sqrt(n)) iterations.
You have almost done it. The last thing you have to note is that 1 + 2 + 3 + ... + j is (1 + j) * j / 2, a well-known formula from math (the sum of an arithmetic progression). Since this grows like j^2, the sum exceeds n after about sqrt(2n) iterations, i.e. the loop runs O(sqrt(n)) times.
It will break after k iterations, when
1 + 2 + .... + k > n
k*(k+1)/2 > n
k*k + k - 2*n > 0
which gives k = (-1 + sqrt(1+8n))/2 (discarding the negative root).
Hence, the time complexity is O(sqrt(n)).
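A quick empirical check in Python (my sketch) that the number of iterations grows like sqrt(n):
import math

def foo_count(n):
    # count how many times the loop body runs
    i, total, iterations = 1, 1, 0
    while total <= n:
        i += 1
        total += i
        iterations += 1
    return iterations

for n in (100, 10000, 1000000):
    print(n, foo_count(n), round(math.sqrt(2 * n)))   # iteration count vs sqrt(2n)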

Differentiating sums with Maxima

I have the following sum:
sum((R[i]-(a*X[i]+b)*t + 1/2*(c*X[i]+d)^2*t)^2/((c*X[i]+d)^2*t), i, 1, N);
which I want to differentiate with respect to a:
diff(%, a);
but Maxima (wxMaxima, to be precise) just prints the unevaluated d/da of the sum. Can I
make it actually differentiate the sum (since N is finite, it
should differentiate every element of the sum separately)?
If I set N to some constant, e.g.:
sum((R[i]-(a*X[i]+b)*t + 1/2*(c*X[i]+d)^2*t)^2/((c*X[i]+d)^2*t), i, 1, 100);
then I get an explicit sum of 100 elements (about 2 pages long), and
then the differentiation works (but again I get 2 pages instead of a small
sum). Can I get the result displayed as a sum?
Which version of Maxima do you use?
Here is my Maxima session with your equation differentiated with respect to a and then N substituted by 100.
~$ maxima
Maxima 5.24.0 http://maxima.sourceforge.net
using Lisp SBCL 1.0.51
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
The function bug_report() provides bug reporting information.
(%i1) sum((R[i]-(a*X[i]+b)*t + 1/2*(c*X[i]+d)^2*t)^2/((c*X[i]+d)^2*t), i, 1, N);
(%o1) sum(((c*X[i]+d)^2*t/2 - (a*X[i]+b)*t + R[i])^2/(c*X[i]+d)^2, i, 1, N)/t
(%i2) diff(%, a);
(%o2) -2*sum(X[i]*((c*X[i]+d)^2*t/2 - (a*X[i]+b)*t + R[i])/(c*X[i]+d)^2, i, 1, N)
(%i3) %, N=100;
(%o3) -2*sum(X[i]*((c*X[i]+d)^2*t/2 - (a*X[i]+b)*t + R[i])/(c*X[i]+d)^2, i, 1, 100)
