Finding eigenvalues of a large, sparse matrix

I am working on the Fermion and Boson Hubbard models, for which the dimension of the Hilbert space is quite large (~50k). I am currently using the LAPACK routine DSYEV to determine the eigenvalues and eigenfunctions of the large (50k x 50k) Hamiltonian matrix, but this takes a long time, about 8 hours on a Xeon workstation.
I would like to reduce the run time on this particular machine. I am looking at the Lanczos method and wondering whether it is the best option, or whether there is a better choice.

The Lanczos method (or another iterative method) is used to compute extreme (smallest/largest) eigenvalues. It beats direct DSYEV when the number of eigenvalues and eigenfunctions you need is much smaller than the system size (50k). If, in addition, your matrix is sparse, the speedup is far greater, because each iteration only touches the nonzero entries.
If you need all the eigenvalues and your matrix is dense, then direct DSYEV remains the better method.
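To see where the sparse speedup comes from: an iterative method only ever touches the matrix through products y = A*x. Here is a minimal sketch of that kernel in C, assuming the Hamiltonian is stored in CSR format (the function and array names are illustrative, not from any particular library):

#include <stddef.h>

/* Sparse matrix-vector product y = A*x for a matrix in CSR format.
 * row_ptr has n+1 entries; col_idx/val hold the nonzeros row by row.
 * Cost is O(nnz) per call, versus O(n^2) for a dense multiply --
 * and this product is the only way the matrix enters a Lanczos iteration. */
void csr_matvec(size_t n, const size_t *row_ptr, const size_t *col_idx,
                const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < n; ++i) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

For a short-ranged Hubbard Hamiltonian with only a handful of nonzeros per row, a few hundred such products are vastly cheaper than the roughly 10^14 floating-point operations of a dense O(n^3) DSYEV at n = 50k.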

Related

LAPACK from Pytrilinos for least-squares

I'm trying to solve a large sparse 30,000x1,000 system using a LAPACK solver in the Trilinos package. My goal is to minimise computation time; however, this LAPACK solver only takes square matrices. So I'm manually converting my non-square matrix (A) into a square one by multiplying by its transpose, essentially solving the system like so:
(A^T A) x = A^T b
What makes my solve slow are the matrix multiplications by A^T.
Any ideas on how to fix this?
You can directly compute a QR decomposition and use that to solve your least-squares problem.
This way you don't have to compute the matrix product A^T A.
There are multiple LAPACK routines for the computation of QR decompositions; e.g. gels is a driver routine for least-squares problems on general matrices.
I must say that I don't know whether Trilinos exposes that routine, but it is a standard LAPACK routine.
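For illustration, here is a minimal example of that route using LAPACKE (the C interface to LAPACK); the 4x2 matrix is a toy stand-in for A. Note that dgels treats A as dense, so at 30,000x1,000 a sparse QR would be preferable if memory is tight, but even the dense call avoids forming A^T A:

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* Overdetermined system: minimise ||A x - b||_2 for a 4x2 toy A. */
    double a[4 * 2] = { 1.0, 1.0,
                        1.0, 2.0,
                        1.0, 3.0,
                        1.0, 4.0 };          /* row-major */
    double b[4] = { 6.0, 5.0, 7.0, 10.0 };   /* right-hand side */
    lapack_int m = 4, n = 2, nrhs = 1;

    /* dgels factorises A = QR internally and solves the least-squares
     * problem directly -- A^T A is never formed. */
    lapack_int info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N', m, n, nrhs,
                                    a, n, b, nrhs);
    if (info != 0) {
        fprintf(stderr, "dgels failed: %d\n", (int)info);
        return 1;
    }

    /* The first n entries of b now hold the solution x. */
    printf("x = (%g, %g)\n", b[0], b[1]);
    return 0;
}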

Julia: all eigenvalues of large sparse matrix

I have a large sparse matrix, for example, a 128000×128000 SparseMatrixCSC{Complex{Float64},Int64} with 1376000 stored entries.
How can I quickly get all eigenvalues of the sparse matrix? Is it possible?
I tried eigs on the 128000×128000 matrix with 1376000 stored entries, but the kernel died.
I use a MacBook Pro with 16 GB of memory and Julia 1.3.1 on Jupyter notebook.
As far as I'm aware (and I would love to be proven wrong) there is no efficient way to get all the eigenvalues of a general sparse matrix.
The main algorithm for computing all the eigenvalues of a matrix is the QR algorithm. Its first step is to reduce the matrix to Hessenberg form, so that each subsequent QR factorisation costs O(n^2) rather than O(n^3). The problem is that reducing a matrix to Hessenberg form destroys the sparsity, and you just end up with a dense matrix.
There are also other methods to compute eigenvalues, such as (inverse) power iteration, that only require matrix-vector products and linear solves, but these only give you the largest or smallest eigenvalues, and they become expensive when you want all the eigenvalues (they require storing the eigenvectors for the deflation).
So much for the general case. If your matrix has some special structure, there may be better alternatives. For example, if your matrix is symmetric, then its Hessenberg form is tridiagonal, and you can compute all the eigenvalues pretty fast.
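To make the symmetric remark concrete (shown in C/LAPACKE rather than Julia, purely for illustration): once a symmetric matrix has been reduced to tridiagonal form, dsterf returns all eigenvalues cheaply and without eigenvectors. The 1D discrete Laplacian below is a stand-in tridiagonal matrix:

#include <stdio.h>
#include <lapacke.h>

#define N 1000

int main(void)
{
    /* Symmetric tridiagonal matrix: here the 1D discrete Laplacian,
     * with 2 on the diagonal and -1 on the off-diagonals. */
    double d[N], e[N - 1];
    for (int i = 0; i < N; ++i)     d[i] = 2.0;
    for (int i = 0; i < N - 1; ++i) e[i] = -1.0;

    /* dsterf computes ALL eigenvalues of a symmetric tridiagonal
     * matrix (no eigenvectors) in O(n^2) work -- fast even for large
     * n, because the tridiagonal form is never densified. */
    lapack_int info = LAPACKE_dsterf(N, d, e);
    if (info != 0) {
        fprintf(stderr, "dsterf failed: %d\n", (int)info);
        return 1;
    }

    /* d now holds the eigenvalues in ascending order. */
    printf("smallest = %g, largest = %g\n", d[0], d[N - 1]);
    return 0;
}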
TL;DR: Is it possible? In general, no.
P.S.: I tried to keep this short, but if you're interested I can give more details on any part of the answer.

Finding the k smallest eigenvalues and their corresponding eigenvectors of a large matrix

For a symmetric sparse square matrix of size 300,000×300,000, what is the best way to find the 10 smallest eigenvalues and their corresponding eigenvectors within an hour or so, in any language or package?
The Lanczos algorithm, which operates on a Hermitian matrix, is one good way to find the smallest and largest eigenvalues and the corresponding eigenvectors. Note that a real symmetric matrix is by definition Hermitian. Lanczos requires O(N) storage and, for a sparse matrix, roughly O(N) time per iteration to reach the extreme eigenvalues/eigenvectors. This contrasts with brute-force diagonalization, which requires O(N^2) storage and O(N^3) running time. For this reason, the Lanczos algorithm made approximate solutions possible for many problems that previously were not computationally feasible.
Here is a useful link to a UC Davis site, which lists implementations of Lanczos in a number of languages/packages, including FORTRAN, C/C++, and MATLAB.
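For orientation, here is a bare-bones sketch of the Lanczos structure in C: run the three-term recurrence to build a small tridiagonal matrix, then diagonalise it with LAPACK and read off the smallest Ritz values, which approximate the smallest eigenvalues. The operator in matvec is a made-up stand-in for your 300k matrix; a production run would use a mature package (e.g. ARPACK or SLEPc), since this sketch omits the reorthogonalisation and restart logic a robust solver needs:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <lapacke.h>

#define N 5000   /* problem size (stand-in for 300,000)   */
#define M 200    /* number of Lanczos steps               */
#define K 10     /* number of smallest eigenvalues wanted */

/* Stand-in sparse operator: y = A*x for a symmetric tridiagonal test
 * matrix (diagonal i+1, off-diagonals 0.1). Replace with the
 * matrix-vector product of your actual 300k x 300k matrix. */
static void matvec(const double *x, double *y)
{
    for (int i = 0; i < N; ++i) {
        y[i] = (i + 1.0) * x[i];
        if (i > 0)     y[i] += 0.1 * x[i - 1];
        if (i < N - 1) y[i] += 0.1 * x[i + 1];
    }
}

int main(void)
{
    static double v_prev[N], v[N], w[N];
    double alpha[M], beta[M];   /* tridiagonal coefficients */

    /* Random normalised start vector. */
    double nrm = 0.0;
    for (int i = 0; i < N; ++i) { v[i] = rand() / (double)RAND_MAX; nrm += v[i] * v[i]; }
    nrm = sqrt(nrm);
    for (int i = 0; i < N; ++i) v[i] /= nrm;

    /* Bare Lanczos three-term recurrence. Without reorthogonalisation,
     * expect spurious duplicate ("ghost") Ritz values for large M. */
    int m = M;   /* number of steps actually taken */
    for (int j = 0; j < M; ++j) {
        matvec(v, w);
        if (j > 0)
            for (int i = 0; i < N; ++i) w[i] -= beta[j - 1] * v_prev[i];
        double a = 0.0;
        for (int i = 0; i < N; ++i) a += w[i] * v[i];
        alpha[j] = a;
        for (int i = 0; i < N; ++i) w[i] -= a * v[i];
        double b = 0.0;
        for (int i = 0; i < N; ++i) b += w[i] * w[i];
        beta[j] = sqrt(b);
        if (beta[j] < 1e-14) { m = j + 1; break; }   /* invariant subspace found */
        for (int i = 0; i < N; ++i) { v_prev[i] = v[i]; v[i] = w[i] / beta[j]; }
    }

    /* Ritz values = eigenvalues of the m x m tridiagonal matrix with
     * alpha on the diagonal and beta off it. dstev ('N' = values only)
     * overwrites alpha with the eigenvalues in ascending order. */
    lapack_int info = LAPACKE_dstev(LAPACK_COL_MAJOR, 'N', m, alpha, beta, NULL, 1);
    if (info != 0) { fprintf(stderr, "dstev failed: %d\n", (int)info); return 1; }

    for (int i = 0; i < K && i < m; ++i)
        printf("ritz[%d] ~ %f\n", i, alpha[i]);
    return 0;
}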

Fastest way to obtain the singular value decomposition of a bidiagonal matrix (using LAPACK)?

I am looking to find the fastest code/algorithm/package for obtaining the singular value decomposition (SVD) of a real, square bidiagonal matrix. The matrices I am working with are fairly small - typically somewhere around 15x15 in size. However, I am performing this SVD thousands (maybe millions) of times, so the computational cost becomes quite large.
I am writing all of my code in C. I assume the fastest codes for performing the SVD will probably come from LAPACK. I have been looking into LAPACK and it seems like I have quite a few different options for performing the SVD of real, bidiagonal matrices:
dbdsqr - uses a zero-shift QR algorithm to get the SVD of a bidiagonal matrix
dbdsdc - uses a divide and conquer algorithm to get the SVD of a bidiagonal matrix
dgesvd - not sure exactly how this one works, but it can handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dgesdd - not quite sure how this one works either, but it can also handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dstemr - estimates eigenvectors/eigenvalues for a tridiagonal matrix; I can use it to estimate the left singular vectors/values by finding the eigenvectors/values of A*A'; I can then estimate the right singular vectors/values by finding the eigenvectors/values of A'*A
dstemr - perhaps there is an even faster way to use dstemr to get the SVD... please enlighten me if you know of a way
I have no idea which of these methods is fastest. Is there an even faster way to get the SVD of a real, bidiagonal matrix that I haven't listed above?
Ideally I am looking for a small example C code to demonstrate the fastest possible SVD for a relatively small real, bidiagonal matrix.
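Not an authoritative benchmark, but here is a minimal LAPACKE example of the dbdsqr route on a small upper-bidiagonal matrix, as a concrete starting point; whether dbdsqr or dbdsdc is faster at 15x15 is best settled by timing both on your data:

#include <stdio.h>
#include <lapacke.h>

#define N 4   /* stand-in for your ~15x15 bidiagonal matrices */

int main(void)
{
    /* Upper bidiagonal matrix: d on the diagonal, e on the superdiagonal. */
    double d[N]     = { 4.0, 3.0, 2.0, 1.0 };
    double e[N - 1] = { 1.0, 1.0, 1.0 };

    /* Accumulate the singular vectors: start U and V^T as identities. */
    double u[N * N] = { 0 }, vt[N * N] = { 0 };
    for (int i = 0; i < N; ++i) { u[i * N + i] = 1.0; vt[i * N + i] = 1.0; }

    /* dbdsqr overwrites d with the singular values (descending) and
     * applies the accumulated rotations to u and vt. ncc = 0 means no
     * extra matrix C is updated, so the dummy array is never referenced. */
    double cdummy[1] = { 0.0 };
    lapack_int info = LAPACKE_dbdsqr(LAPACK_ROW_MAJOR, 'U', N,
                                     N /*ncvt*/, N /*nru*/, 0 /*ncc*/,
                                     d, e, vt, N, u, N, cdummy, 1);
    if (info != 0) {
        fprintf(stderr, "dbdsqr failed: %d\n", (int)info);
        return 1;
    }

    for (int i = 0; i < N; ++i)
        printf("sigma[%d] = %f\n", i, d[i]);
    return 0;
}

One practical note: at thousands to millions of 15x15 solves, per-call overhead and memory locality can matter as much as the algorithm choice, so the real win over dgesvd/dgesdd is skipping the bidiagonalisation step your matrices don't need.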

How to find a better algorithm to compute eigenvalue and eigenvector of a very large matrix

I have used the Jacobi method to find all eigenvalues and eigenvectors in C. Although the complexity of the Jacobi method is O(n^3), the dimension of my matrix is huge (17814 x 17814), so it takes a lot of time. I want to know a better algorithm with which I can solve this problem. If you want, I can attach my C code.
The algorithm suggested in the comments is not necessarily the best one.
The Jacobi method can be vastly faster when special techniques are used.
On top of that, Jacobi is quite easy to run in parallel, and it's much faster for sparse matrices than for dense ones, so you can take advantage of that as well, depending on your architecture and the type of matrix you have.
I'd say that the best thing is to test a few different methods and see in practice where you can get the best results.
O(n^2.376) is not necessarily better than O(n^3) in practice; it depends on the constants.
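For reference, the kernel being discussed is the Jacobi rotation. Here is a minimal serial sketch of one cyclic sweep over a dense symmetric matrix in C; parallel variants apply rotations on disjoint index pairs concurrently:

#include <math.h>

/* One cyclic Jacobi sweep on a dense symmetric n x n matrix a
 * (row-major). Each rotation zeroes one off-diagonal pair (p,q);
 * repeated sweeps drive the matrix toward diagonal form, at which
 * point the diagonal holds the eigenvalues. Rotations on disjoint
 * (p,q) pairs are independent -- that is what parallel versions exploit. */
void jacobi_sweep(double *a, int n)
{
    for (int p = 0; p < n - 1; ++p) {
        for (int q = p + 1; q < n; ++q) {
            double apq = a[p * n + q];
            if (fabs(apq) < 1e-15) continue;   /* already ~zero */

            /* Rotation angle that annihilates a[p][q]:
             * tan(2*theta) = 2*a_pq / (a_qq - a_pp). */
            double theta = 0.5 * atan2(2.0 * apq, a[q * n + q] - a[p * n + p]);
            double c = cos(theta), s = sin(theta);

            /* Apply the similarity transform J^T * A * J:
             * first the columns (A*J), then the rows (J^T * ...). */
            for (int k = 0; k < n; ++k) {
                double akp = a[k * n + p], akq = a[k * n + q];
                a[k * n + p] = c * akp - s * akq;
                a[k * n + q] = s * akp + c * akq;
            }
            for (int k = 0; k < n; ++k) {
                double apk = a[p * n + k], aqk = a[q * n + k];
                a[p * n + k] = c * apk - s * aqk;
                a[q * n + k] = s * apk + c * aqk;
            }
        }
    }
}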
