Time Complexity of Singular Value Decomposition in C

I've been trying to implement SVD in C for the past few weeks, and I'm currently using Algorithm 6 found here. From my understanding, this algorithm runs in O(n^5) time, because there are two loops (one of the loops does not go from 0 to n, I know, but n^5 works as a crude bound), and inside the inner loop a matrix multiplication has to be done, which is an O(n^3) process.
However, according to this website, for an n by n matrix, SVD can be calculated in O(2n^3). Does anyone know where I can find an algorithm with that time complexity?

In the event anyone is looking for an answer to this in the future, the algorithm to calculate SVD in O(n^3), if the matrix is a square matrix, is the method of Jacobi Rotations.
For more information on the specific algorithm you can look at Algorithm 7 on this website.
The notation on the website is a little confusing because of the typos, but in the step where the values of d1, d2, c, s, ĉ, and ŝ are determined, what they mean is that c = cos(theta), s = sin(theta), ĉ = cos(phi), and ŝ = sin(phi).
You can calculate these values of theta and phi by elimination and substitution or you can check out this StackExchange post to see how to calculate them.
After that it is just following that algorithm.
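For illustration only, here is a rough NumPy sketch of a closely related variant, the one-sided (Hestenes) Jacobi SVD for a square (or tall) matrix, rather than the exact two-sided Algorithm 7 from the linked site. Each sweep costs O(n^3) and only a handful of sweeps are usually needed. The function name, tolerance, and sweep limit are my own choices, not from the linked material:

import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    # One-sided Jacobi: rotate pairs of columns of U until all pairs are orthogonal.
    # The column norms of the rotated matrix are then the singular values.
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                if zeta >= 0.0:
                    t = 1.0 / (zeta + np.sqrt(1.0 + zeta * zeta))
                else:
                    t = -1.0 / (-zeta + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                R = np.array([[c, s], [-s, c]])      # plane rotation
                U[:, [p, q]] = U[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
        if converged:
            break
    sigma = np.linalg.norm(U, axis=0)                # singular values = column norms
    U = U / sigma                                    # normalize columns (assumes nonzero sigma)
    return U, sigma, V.T                             # A is approximately U @ np.diag(sigma) @ V.T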

Computing the SVD of an m × n matrix has complexity O(mn min(n, m)). Since this is super-linear in the size of the data, it becomes computationally expensive for large data sets. However, if we have a low rank matrix, we would need only k basis vectors, where k << m, n. One way of computing the rank k approximation is to compute the SVD of the full matrix and retain only the k largest singular values and vectors.
https://arxiv.org/pdf/1710.02812.pdf
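As a quick illustration of that last point (compute the full SVD, then keep only the k largest singular values and vectors), here is a short NumPy sketch; the matrix sizes and k are arbitrary examples, not from the paper:

import numpy as np

A = np.random.rand(100, 40)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # full SVD: O(mn min(m, n))
k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best rank-k approximation (Eckart-Young)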

Related

Binomial coefficients using dynamic programming and one dimensional array

Most implementations of binomial coefficient computation using dynamic programming make use of 2-dimensional arrays, as in these examples:
http://www.csl.mtu.edu/cs4321/www/Lectures/Lecture%2015%20-%20Dynamic%20Programming%20Binomial%20Coefficients.htm
http://www.geeksforgeeks.org/dynamic-programming-set-9-binomial-coefficient/
My question is, why not just compute it using a single dimensional array like this:
def C(n, r):
    memo = list()
    if r > int(n / 2):
        r = n - r
    memo.append(1.0)
    for i in range(1, r + 1):
        now = ((n - i + 1) * memo[i - 1]) / i
        memo.append(now)
    return memo[r]
Basically using the recursive formula:
C(n,r) = ((n-r+1)/r) * C(n,r-1)
This has O(r) complexity, while the 2-dimensional logic has O(nr) complexity.
Am I missing something here?
If you want all of the values, then the 2D logic is certainly more efficient. The 2D logic may be more efficient for some parameters on some hardware that, e.g., lacks hardware multiply and divide. You have to be careful about integer overflow when multiplying before dividing, whereas the integer addition in the 2D recurrence is always fine. Other than that, no, the 1D recurrence is better.
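For completeness, here is a variant of the 1D approach that stays in exact integer arithmetic, avoiding the float rounding in the question's version; the division is exact because C(n, i-1) * (n - i + 1) is always divisible by i. The function name is my own, and this is just an illustrative sketch:

def binom(n, r):
    if r < 0 or r > n:
        return 0
    r = min(r, n - r)                      # symmetry: C(n, r) = C(n, n - r)
    value = 1
    for i in range(1, r + 1):
        value = value * (n - i + 1) // i   # exact integer division
    return value

print(binom(10, 3))   # 120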

How to find a sum of the first r binomial coefficients for fixed n?

I have already tried the basic way to solve this series, but it takes too long for larger values of n and r. Is there any way to reduce this to a single expression whose time complexity doesn't depend on the value of n or r? Range: r, n <= 10^5.
NOTE: here r < n, i.e. I have to find the sum of the first r+1 terms of this series.
I have already read this question but it doesn't help me:
Algorithm to find Sum of the first r binomial coefficients for fixed n modulo m
AFAIK, there is no such expression to which it can be reduced. But it can be done in O(r) time complexity as follows.
Consider an array A, where A[i] stores C(n, i). Then we can easily verify that A[i] = A[i-1] * (n-i+1) / i.
So
long long A[r + 1];                        // A[i] holds C(n, i)
A[0] = 1;
for (int i = 1; i <= r; i++) {
    A[i] = A[i - 1] * (n - i + 1) / i;     // exact: A[i-1]*(n-i+1) is divisible by i
}
long long ans = 0;                         // the required answer
for (int i = 0; i <= r; i++) {
    ans = ans + A[i];
}
// Note: C(n, i) overflows quickly for large n; in that case work modulo m, as in the linked question.
For large N, the binomial coefficients behave like the Gaussian curve (at least for the centermost values). This can be derived from the Stirling formula and is supported by the Central Limit theorem.
Then the partial sum can be approximated by the Error function.
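A small sketch of that approximation (my own illustrative code, using the normal approximation to Binomial(n, 1/2) with a continuity correction; it is only accurate for large n and for r not too deep in the tails). It returns the fraction of 2^n, so multiply by 2^n, or work in logs, to get the actual sum:

import math

def partial_binomial_sum_fraction(n, r):
    # Approximates (sum of C(n, i) for i = 0..r) / 2^n, i.e. P(X <= r) for X ~ Binomial(n, 1/2)
    mu = n / 2.0
    sigma = math.sqrt(n) / 2.0
    z = (r + 0.5 - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

print(partial_binomial_sum_fraction(100, 50))   # exact fraction is about 0.5398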

Largest triangle in convex hull

The question has already been answered, but the main problem I am facing is in understanding one of the answers.
From
https://stackoverflow.com/a/1621913/2673063
How is the following algorithm O(n)?
It states:
By first sorting the points / computing the convex hull (in O(n log n) time) if necessary, we can assume we have the convex polygon/hull with the points cyclically sorted in the order they appear in the polygon. Call the points 1, 2, 3, … , n. Let (variable) points A, B, and C, start as 1, 2, and 3 respectively (in the cyclic order). We will move A, B, C until ABC is the maximum-area triangle. (The idea is similar to the rotating calipers method, as used when computing the diameter (farthest pair).)
With A and B fixed, advance C (e.g. initially, with A=1, B=2, C is advanced through C=3, C=4, …) as long as the area of the triangle increases, i.e., as long as Area(A,B,C) ≤ Area(A,B,C+1). This point C will be the one that maximizes Area(ABC) for those fixed A and B. (In other words, the function Area(ABC) is unimodal as a function of C.)
Next, advance B (without changing A and C) if that increases the area. If so, again advance C as above. Then advance B again if possible, etc. This will give the maximum area triangle with A as one of the vertices. (The part up to here should be easy to prove, and simply doing this separately for each A would give O(n^2). But read on.) Now advance A again, if it improves the area, etc.
Although this has three "nested" loops, note that B and C always advance "forward", and they advance at most 2n times in total (similarly A advances at most n times), so the whole thing runs in O(n) time.
As the author of the answer that is the subject of the question, I feel obliged to give a more detailed explanation of the O(n) runtime.
Firstly, just as an example, here is a figure from the paper, showing the first few steps of the algorithm, for a particular sample input (a 12-gon). First we start with A, B, C as three consecutive vertices (step 1 in the figure), advance C as long as area increases (steps 2 to 6), then advance B, and so on.
The triangles with asterisks above them are the "anchored local maxima", i.e., the ones that are best for a given A (i.e., advancing either C or B would decrease the area).
As far as the runtime being O(n): Let the "actual" value of B, in terms of the number of times it's been incremented and ignoring the wrap around, be nB, and similarly for C be nC. (In other words, B = nB % n and C = nC % n.) Now, note that,
("B is ahead of A") whatever the value of A, we have A ≤ nB < A + n
nB is always increasing
So, as A varies from 0 to n, we know that nB only varies between 0 and 2n: it can be incremented at most 2n times. Similarly nC. This shows that the running time of the algorithm, which is proportional to the total number of times A, B and C are incremented, is bounded by O(n) + O(2n) + O(2n), which is O(n).
Think about it like this: each of A, B, C are pointers that, at any given moment, point towards one of the elements of the convex hull. Due to the way the algorithm increments them, each one of them will point to each element of the convex hull at most once. Therefore, each one will iterate over a collection of O(n) elements. They will never be reset, once one of them has passed an element, it will not pass that element ever again.
Since there are 3 pointers (A, B, C), we have time complexity 3 * O(n) = O(n).
Edit:
As the code is presented in the provided link, it sounds possible that it is not O(n), since B and C wrap around the array. However, according to the description, this wrapping around does not sound necessary: before seeing the code, I imagined the method stopping the advancement of B and C past n. In that case, it would definitely be O(n). As the code is presented however, I'm not sure.
It might still be that, for some mathematical reason, B and C still iterate only O(n) times in the entirety of the algorithm, but I can't prove that. Neither can I prove that it is correct to not wrap around (as long as you take care of index out of bounds errors).
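For what it's worth, here is a rough Python sketch of the O(n^2) variant mentioned in the quoted answer (fix A; move B forward, and for each B advance C while the area keeps growing, without wrapping past A). The helper names are my own, it assumes hull is a convex polygon given as (x, y) points in cyclic order, and its correctness rests on the unimodality/monotonicity claims from the quoted answer:

def area(p, q, r):
    # absolute triangle area via the cross product
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def largest_triangle(hull):
    n = len(hull)
    if n < 3:
        return 0.0
    best = 0.0
    for a in range(n):
        k = 2                                # offset of C from A; never reset within this A
        for j in range(1, n - 1):            # offset of B from A
            k = max(k, j + 1)
            A, B = hull[a], hull[(a + j) % n]
            # advance C while the area keeps growing (area is unimodal in C)
            while k + 1 <= n - 1 and area(A, B, hull[(a + k + 1) % n]) >= area(A, B, hull[(a + k) % n]):
                k += 1
            best = max(best, area(A, B, hull[(a + k) % n]))
    return best

print(largest_triangle([(0, 0), (4, 0), (5, 2), (3, 4), (0, 3)]))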

Mahalanobis distance inverting the covariance matrix

I am writing a function to take the Mahalanobis distance between two vectors. I understand that this is achieved using the equation a'*C^-1*b, where a and b are vectors and C is the covariance matrix. My question is, is there an efficient way to find the inverse of the matrix without using Gauss-Jordan elimination, or is there no way around this? I'm looking for a way to do this myself, not with any predefined functions.
I know that C is a Hermitian, positive definite matrix, so is there any way that I can algorithmically take advantage of this fact? Or is there some clever way compute the Mahalanobis distance without calculating the inverse of the covariance at all? Any help would be appreciated.
***Edit: The Mahalanobis distance equation above is incorrect. It should be
x'*C^-1*x where x = (b-a), and b and a are the two vectors whose distance we are trying to find (thanks LRPurser). The solution posited in the selected answer is therefore as follows:
d = x'*y, where y = C^-1*x
C*y = x, so solve for y using LU factorization or LDL' factorization.
You can (and should!) use LU decomposition instead of explicitly inverting the matrix: solving C x = b using a decomposition has better numeric properties than computing C^-1 and multiplying the vector b.
Since your matrix is symmetric, an LU decomposition is effectively equivalent to an LDL* decomposition, which is what you should actually use in your case. Since your matrix is also positive-definite, you should be able to perform this decomposition without pivoting.
Edit: note that, for this application, you don't need to solve the full C x = b problem.
Instead, given C = L D L* and difference vector v = a-b, solve L y = v for y (which is half as much work as the full LU solve).
Then, y^t D^-1 y = v^t C^-1 v can be computed in linear time.
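Here is a small Python/SciPy sketch of this idea, using a plain Cholesky factor C = L L^t instead of LDL^t; the function name and the use of scipy.linalg.solve_triangular are my own choices for illustration:

import numpy as np
from scipy.linalg import solve_triangular

def mahalanobis(a, b, C):
    v = a - b
    L = np.linalg.cholesky(C)                 # lower triangular, C = L @ L.T
    y = solve_triangular(L, v, lower=True)    # solve L y = v
    return np.sqrt(y @ y)                     # y.y = v' C^-1 v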
First, the Mahalanobis distance (MD) is the normed distance with respect to uncertainty in the measurement of two vectors. When C is the identity matrix, MD reduces to the Euclidean distance and the product reduces to the vector norm. MD is also always non-negative, and strictly positive for all non-zero vectors, whereas with your original formulation and an unfortunate choice of the vectors a and b, a'*C^-1*b can be less than zero. Presumably the difference of vectors you are looking for is x = (a-b), which makes the calculation x^t*C^-1*x, where x^t is the transpose of the vector x. Also note that MD = sqrt(x^t*C^-1*x).
Since your matrix is symmetric and positive definite, you can use the Cholesky decomposition (chol in MATLAB), which takes about half as many operations as LU and is numerically more stable. chol(C, 'lower') = L, where C = L*L^t, L is a lower triangular matrix, and L^t is the transpose of L, which is upper triangular (plain chol(C) returns the upper triangular factor instead). Your algorithm should go something like this
(Matlab)
x = a - b;
L = chol(C, 'lower');   % lower-triangular factor, C = L*L'
z = L \ x;              % solve L*z = x by forward substitution
MD = z' * z;            % equals x' * inv(C) * x
MD = sqrt(MD);
# Mahalanobis distance matrix
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Example
Data = np.array([[1, 2], [3, 2], [4, 3]])
Cov = np.cov(Data.T)                        # covariance of the columns
invCov = np.linalg.inv(Cov)
Y = pdist(Data, 'mahalanobis', VI=invCov)   # condensed pairwise distances
MD = squareform(Y)                          # full symmetric distance matrix

fast algorithm of finding sums in array

I am looking for a fast algorithm:
I have an int array of size n, and the goal is to find all patterns of three different elements x1, x2, x3 in the array such that x1 + x2 = x3.
For example, if the int array of size 3 is [1, 2, 3], then there is only one possibility: 1 + 2 = 3 (1 + 2 and 2 + 1 count as the same pair).
I am thinking about using pairs and hash maps to make the algorithm fast (the fastest one I have now is still O(n^2)).
Please share your ideas for this problem, thank you.
Edit: The answer below applies to a version of this problem in which you only want one triplet that adds up like that. When you want all of them, since there are potentially at least O(n^2) possible outputs (as pointed out by ex0du5), and even O(n^3) in pathological cases of repeated elements, you're not going to beat the simple O(n^2) algorithm based on hashing (mapping from a value to the list of indices with that value).
This is basically the 3SUM problem. Without potentially unboundedly large elements, the best known algorithms are approximately O(n^2), but we've only proved that it can't be faster than O(n lg n) for most models of computation.
If the integer elements lie in the range [u, v], you can do a slightly different version of this in O(n + (v-u) lg (v-u)) with an FFT. I'm going to describe a process to transform this problem into that one, solve it there, and then figure out the answer to your problem based on this transformation.
The problem that I know how to solve with FFT is to find a length-3 arithmetic sequence in an array: that is, a sequence a, b, c with c - b = b - a, or equivalently, a + c = 2b.
Unfortunately, the last step of the transformation back isn't as fast as I'd like, but I'll talk about that when we get there.
Let's call your original array X, which contains integers x_1, ..., x_n. We want to find indices i, j, k such that x_i + x_j = x_k.
Find the minimum u and maximum v of X in O(n) time. Let u' be min(u, u*2) and v' be max(v, v*2).
Construct a binary array (bitstring) Z of length v' - u' + 1; Z[i] will be true if either X or its double [x_1*2, ..., x_n*2] contains u' + i. This is O(n) to initialize; just walk over each element of X and set the two corresponding elements of Z.
As we're building this array, we can save the indices of any duplicates we find into an auxiliary list Y. Once Z is complete, we just check for 2 * x_i for each x_i in Y. If any are present, we're done; otherwise the duplicates are irrelevant, and we can forget about Y. (The only situation slightly more complicated is if 0 is repeated; then we need three distinct copies of it to get a solution.)
Now, a solution to your problem, i.e. x_i + x_j = x_k, will appear in Z as three evenly-spaced ones, since some simple algebraic manipulations give us 2*x_j - x_k = x_k - 2*x_i. Note that the elements on the ends are our special doubled entries (from 2X) and the one in the middle is a regular entry (from X).
Consider Z as a representation of a polynomial p, where the coefficient for the term of degree i is Z[i]. If X is [1, 2, 3, 5], then Z is 1111110001 (because we have 1, 2, 3, 4, 5, 6, and 10); p is then 1 + x + x^2 + x^3 + x^4 + x^5 + x^9.
Now, remember from high school algebra that the coefficient of x^c in the product of two polynomials is the sum, over all a, b with a + b = c, of the first polynomial's coefficient for x^a times the second's coefficient for x^b. So, if we consider q = p^2, the coefficient of x^(2j) (for a j with Z[j] = 1) will be the sum over all i of Z[i] * Z[2*j - i]. But since Z is binary, that's exactly the number of triplets i, j, k which are evenly-spaced ones in Z. Note that (j, j, j) is always such a triplet, so we only care about ones with values > 1.
We can then use a Fast Fourier Transform to find p^2 in O(|Z| log |Z|) time, where |Z| is v' - u' + 1. We get out another array of coefficients; call it W.
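(As a tiny illustration of this squaring step, assuming Z is a 0/1 NumPy array: np.convolve(Z, Z) is the polynomial product p^2. For the stated O(|Z| log |Z|) bound you would use an FFT-based product such as scipy.signal.fftconvolve instead of the plain quadratic convolution shown here.)

import numpy as np

Z = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 1])   # the example Z for X = [1, 2, 3, 5]
W = np.convolve(Z, Z)   # W[d] = number of ordered pairs (i, i') with i + i' = d and Z[i] = Z[i'] = 1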
Loop over each x_k in X. (Recall that our desired evenly-spaced ones are all centered on an element of X, not 2*X.) If the corresponding W for twice this element, i.e. W[2*(x_k - u')], is 1, we know it's not the center of any nontrivial progressions and we can skip it. (As argued before, it should only be a positive integer.)
Otherwise, it might be the center of a progression that we want (so we need to find i and j). But, unfortunately, it might also be the center of a progression that doesn't have our desired form. So we need to check. Loop over the other elements x_i of X, and check whether there's a triple 2*x_i, x_k, 2*x_j for some j (by checking Z[2*(x_k - x_i) - u']). If so, we have an answer; if we make it through all of X without a hit, then the FFT found only spurious answers, and we have to check another element of W.
This last step is therefore O(n * (1 + the number of x_k with W[2*(x_k - u')] > 1 that aren't actually solutions)), which might be as bad as O(n^2), which is obviously not okay. There should be a way to avoid generating these spurious answers in the output W; if we knew that any appropriate W coefficient definitely had an answer, this last step would be O(n) and all would be well.
I think it's possible to use a somewhat different polynomial to do this, but I haven't gotten it to actually work. I'll think about it some more....
Partially based on this answer.
It has to be at least O(n^2), as there are n(n-1)/2 different sums that have to be checked against the other members. You have to compute all of those, because any pair's sum may equal any other member (start with one example and permute all the elements to convince yourself that all must be checked). Or look at the Fibonacci numbers for something concrete.
So calculating all those sums and looking up members in a hash table gives amortised O(n^2). Or use an ordered tree if you need the best worst-case behaviour.
You essentially need to find all the different sums of value pairs, so I don't think you're going to do any better than O(n^2). But you can optimize by sorting the list and removing duplicate values, then only pairing a value with anything equal or greater, and stopping when the sum exceeds the maximum value in the list.
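Here is a minimal sketch of that hash-based O(n^2) approach (illustrative only; the function name is my own, and indices are reported so that duplicate values in the array still count as different elements):

def find_sum_triples(arr):
    positions = {}                        # value -> list of indices where it occurs
    for idx, v in enumerate(arr):
        positions.setdefault(v, []).append(idx)
    triples = []
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            for k in positions.get(arr[i] + arr[j], []):
                if k != i and k != j:     # x1, x2, x3 must be different elements
                    triples.append((i, j, k))
    return triples

print(find_sum_triples([1, 2, 3]))   # [(0, 1, 2)]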
