GSL sparse matrix eigensystems - C

Does the Gnu Scientific Library (GSL) support finding eigenvectors/values of a sparse matrix (complex or otherwise) directly?
The C library GSL supports sparse matrices (compressed storage, used whenever you have a matrix which is mostly zeros); for example, a complex matrix:
gsl_spmatrix_complex* H_sparse = gsl_spmatrix_complex_alloc (N_states, N_states);
And GSL has operations for finding eigenvalues/eigenvectors of a dense (not sparse) matrix, so the following works if I inflate the matrix H beforehand (skipping the allocation/freeing of the workspace, vectors, matrices, etc.):
gsl_spmatrix_complex_sp2d(H_dense, H_sparse);
/*...Allocate workspace w, dense matrix evec for eigenvectors, and vector eval for eigenvalues*/
gsl_eigen_hermv(H_dense, eval, evec, w); // hermv = Hermitian complex matrix, and get eigenvectors
gsl_eigen_hermv_sort(eval, evec, GSL_EIGEN_SORT_VAL_ASC);
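For completeness, the allocations I'm skipping are standard GSL boilerplate along these lines:

gsl_matrix_complex* H_dense = gsl_matrix_complex_alloc(N_states, N_states);
gsl_matrix_complex* evec = gsl_matrix_complex_alloc(N_states, N_states); // eigenvectors
gsl_vector* eval = gsl_vector_alloc(N_states); // eigenvalues (real, since H is Hermitian)
gsl_eigen_hermv_workspace* w = gsl_eigen_hermv_alloc(N_states);
/* ...the sp2d/hermv/sort calls above... */
gsl_eigen_hermv_free(w);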
Inflating the sparse matrix before doing the one operation I need to do largely defeats the purpose of having a sparse matrix in the first place.
But I can't find functions for getting the eigenvalues and eigenvectors of a sparse matrix directly. I have tried looking through the header files gsl_eigen.h, gsl_splinalg.h, and gsl_spmatrix.h and the manuals, but I can't seem to find the functions I am looking for.
So does GSL support finding eigenvectors/values of a sparse matrix directly?

Related

LAPACK from PyTrilinos for least-squares

I'm trying to solve a large sparse 30,000x1,000 matrix using a LAPACK solver in the Trilinos package. My goal is to minimise computation time; however, this LAPACK solver only takes square matrices, so I'm manually converting my non-square matrix A into a square one by multiplying by its transpose, essentially solving the system like so:
(A^T A) x = A^T b
What is making my solve slow are the matrix multiplications by A^T.
Any ideas on how to fix this?
You can directly compute the QR decomposition and use that to solve your least-squares problem.
This way you don't have to form the matrix product A^T A.
There are multiple LAPACK routines for computing QR decompositions; e.g. gels is a driver routine for least-squares solves on general matrices.
I must say that I don't know whether Trilinos supplies that routine, but it is a standard LAPACK routine.
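For illustration, a least-squares solve through the LAPACKE C interface might look like the following (a minimal sketch; I'm assuming plain LAPACKE is available rather than the Trilinos wrappers, and the 4x2 system is just a toy example):

#include <stdio.h>
#include <lapacke.h>   /* header name may differ per distribution */

int main(void)
{
    /* Overdetermined 4x2 system: minimise ||A x - b||_2. */
    double A[4*2] = { 1, 1,
                      1, 2,
                      1, 3,
                      1, 4 };   /* row-major */
    double b[4] = { 6, 5, 7, 10 };

    /* 'N' = use A as given (not its transpose); on exit the first
       2 entries of b hold the least-squares solution x. */
    lapack_int info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N', 4, 2, 1,
                                    A, 2, b, 1);
    if (info != 0) { fprintf(stderr, "dgels failed: %d\n", (int)info); return 1; }

    printf("x = (%g, %g)\n", b[0], b[1]);
    return 0;
}

Internally dgels uses exactly the QR factorisation mentioned above, so A^T A is never formed.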

Julia: all eigenvalues of large sparse matrix

I have a large sparse matrix, for example, 128000×128000 SparseMatrixCSC{Complex{Float64},Int64} with 1376000 stored entries.
How can I quickly get all eigenvalues of this sparse matrix? Is that possible?
I tried eigs on the 128000×128000 matrix with 1376000 stored entries, but the kernel died.
I use a MacBook Pro with 16 GB of memory and Julia 1.3.1 in a Jupyter notebook.
As far as I'm aware (and I would love to be proven wrong) there is no efficient way to get all the eigenvalues of a general sparse matrix.
The main algorithm to compute the eigenvalues of a matrix is the QR algorithm. The first step of the QR algorithm is to reduce the matrix to Hessenberg form (so that each QR factorisation costs O(n^2) instead of O(n^3)). The problem is that reducing a matrix to Hessenberg form destroys the sparsity, and you just end up with a dense matrix.
There are also other methods to compute the eigenvalues of a matrix, like (inverse) power iteration, which only require matrix-vector products and solving linear systems; but these only give you the largest or smallest eigenvalues, and they become expensive when you want to compute all the eigenvalues (they require storing the eigenvectors for the "deflation").
That was the general case; if your matrix has some special structure, there may be better alternatives. For example, if your matrix is symmetric, then its Hessenberg form is tridiagonal and you can compute all the eigenvalues pretty fast.
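For instance, once a symmetric matrix is in tridiagonal form (diagonal d, off-diagonal e), LAPACK's dstev returns all eigenvalues cheaply; in Julia the equivalent is eigvals(SymTridiagonal(d, e)). A minimal C sketch of that fast path (assuming the LAPACKE interface; the 4x4 matrix is just an illustration):

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* Symmetric tridiagonal matrix with 2 on the diagonal and -1 off it. */
    double d[4] = { 2, 2, 2, 2 };
    double e[3] = { -1, -1, -1 };

    /* 'N' = eigenvalues only, so no eigenvector matrix z is needed. */
    lapack_int info = LAPACKE_dstev(LAPACK_COL_MAJOR, 'N', 4, d, e, NULL, 1);
    if (info != 0) return 1;

    for (int i = 0; i < 4; i++)
        printf("lambda_%d = %f\n", i, d[i]);   /* d now holds the eigenvalues */
    return 0;
}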
TL;DR: Is it possible? In general, no.
P.S: I tried to keep this short but if you're interested I can give you more details on any part of the answer.

Will intel mkl sparse BLAS coogemv internally change coo to csr storage?

There is a sparse version of BLAS in Intel MKL; for example, to multiply a matrix and a vector we could use
mkl_?coogemv
Computes matrix-vector product of a sparse general matrix stored in
the coordinate format with one-based indexing
But I have heard that the multiplication is much more efficient when done in CSR storage.
However, generating a sparse matrix in CSR storage is a headache; it is much easier and more understandable to generate one in COO storage.
What I want to know is: will MKL convert the COO storage to CSR internally when using coogemv? The documentation doesn't say.
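One way to sidestep the uncertainty is to build the matrix in COO (the easy format to generate), convert it to CSR yourself once, and run all the products on the CSR handle. A sketch using MKL's newer inspector-executor Sparse BLAS API (assuming a recent MKL that provides it; the 3x3 matrix is just an illustration):

#include <stdio.h>
#include <mkl_spblas.h>

int main(void)
{
    /* 3x3 matrix in COO, 0-based: [[1,0,2],[0,3,0],[4,0,5]] */
    MKL_INT rows[5] = { 0, 0, 1, 2, 2 };
    MKL_INT cols[5] = { 0, 2, 1, 0, 2 };
    double  vals[5] = { 1, 2, 3, 4, 5 };
    double  x[3] = { 1, 1, 1 }, y[3];

    sparse_matrix_t A_coo, A_csr;
    struct matrix_descr descr = { .type = SPARSE_MATRIX_TYPE_GENERAL };

    /* Build in COO (easy to generate), then convert to CSR once. */
    mkl_sparse_d_create_coo(&A_coo, SPARSE_INDEX_BASE_ZERO, 3, 3, 5,
                            rows, cols, vals);
    mkl_sparse_convert_csr(A_coo, SPARSE_OPERATION_NON_TRANSPOSE, &A_csr);

    /* y = 1.0 * A * x + 0.0 * y, computed on the CSR handle. */
    mkl_sparse_d_mv(SPARSE_OPERATION_NON_TRANSPOSE, 1.0, A_csr, descr,
                    x, 0.0, y);

    printf("y = (%g, %g, %g)\n", y[0], y[1], y[2]);   /* expected: (3, 3, 9) */

    mkl_sparse_destroy(A_coo);
    mkl_sparse_destroy(A_csr);
    return 0;
}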

How to find the inverse of a Rectangular Matrix in C using GSL

I searched on Google and I couldn't find a function to calculate the inverse of a rectangular matrix using GSL. Since it was hard to find, an answer here would help others when they need the inverse of a rectangular matrix.
If it is not possible using GSL, then please suggest some alternative library which is easy to use and provides the inverse of rectangular matrix.
Yes, it is possible! You probably did not find it because it is in the chapter Linear Algebra, not Matrices. (Strictly speaking, the LU route below applies to square matrices; a genuinely rectangular matrix has no inverse, only a pseudoinverse, which you would get via e.g. an SVD.)
In GSL you first compute the LU decomposition and then use it to determine the inverse via
int gsl_linalg_LU_invert (const gsl_matrix * LU, const gsl_permutation * p, gsl_matrix * inverse)
See here for a detailed example
https://lists.gnu.org/archive/html/help-gsl/2008-11/msg00001.html
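Putting it together, a minimal self-contained sketch of that route (for a small square matrix, eliding error checks):

#include <stdio.h>
#include <gsl/gsl_linalg.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_permutation.h>

int main(void)
{
    double data[4] = { 4, 3,
                       6, 3 };
    gsl_matrix_view m = gsl_matrix_view_array(data, 2, 2);
    gsl_matrix* inverse = gsl_matrix_alloc(2, 2);
    gsl_permutation* p = gsl_permutation_alloc(2);
    int signum;

    /* Factorise in place, then build the inverse from the LU factors. */
    gsl_linalg_LU_decomp(&m.matrix, p, &signum);
    gsl_linalg_LU_invert(&m.matrix, p, inverse);

    printf("inv = [%g %g; %g %g]\n",
           gsl_matrix_get(inverse, 0, 0), gsl_matrix_get(inverse, 0, 1),
           gsl_matrix_get(inverse, 1, 0), gsl_matrix_get(inverse, 1, 1));

    gsl_permutation_free(p);
    gsl_matrix_free(inverse);
    return 0;
}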

BLAS: Matrix product in C?

I would like to perform some fast operations in C using BLAS (no chance of choosing another library; it is the only one available in my project).
I do the following operations:
Invert a square matrix,
Make a matrix product A*B where A is the computed inverse matrix and B a vector,
Sum two (very long) vectors.
I heard these kinds of operations were possible with BLAS and very fast, but I searched and found nothing (actual C code lines, I mean) that could help me understand and apply them.
The BLAS library was written originally in Fortran. The interface to C is called CBLAS and has all functions prefixed by cblas_.
Unfortunately with BLAS you can only address directly the last two points:
sgemv (single precision) or dgemv (double precision) performs matrix-vector multiplication
saxpy (single precision) or daxpy (double precision) performs general vector-vector addition
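For instance (a minimal sketch; the numbers are made up for illustration):

#include <stdio.h>
#include <cblas.h>   /* header name may vary, e.g. mkl_cblas.h for MKL */

int main(void)
{
    double A[2*2] = { 1, 2,
                      3, 4 };   /* row-major 2x2 */
    double x[2] = { 1, 1 };
    double y[2] = { 0, 0 };
    double z[2] = { 10, 10 };

    /* y = 1.0 * A * x + 0.0 * y */
    cblas_dgemv(CblasRowMajor, CblasNoTrans, 2, 2,
                1.0, A, 2, x, 1, 0.0, y, 1);   /* y = (3, 7) */
    /* z = 2.0 * y + z */
    cblas_daxpy(2, 2.0, y, 1, z, 1);           /* z = (16, 24) */

    printf("y = (%g, %g), z = (%g, %g)\n", y[0], y[1], z[0], z[1]);
    return 0;
}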
BLAS does not deal with the more complex operation of inverting a matrix. For that there is the LAPACK library, which builds on BLAS and provides linear algebra operations. General matrix inversion in LAPACK is done with sgetri (single precision) or dgetri (double precision), but there are other inversion routines that handle specific cases like symmetric matrices. If you are inverting the matrix only to multiply it later by a vector, that is essentially solving a system of linear equations, and for that there are sgesv (single precision) and dgesv (double precision).
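If you take the solve route instead of inverting, a sketch with the LAPACKE C interface (assuming your BLAS/LAPACK distribution ships it):

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* Solve A x = b directly instead of forming A^-1 and multiplying. */
    double A[2*2] = { 4, 3,
                      6, 3 };   /* row-major */
    double b[2] = { 10, 12 };
    lapack_int ipiv[2];

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1,
                                    A, 2, ipiv, b, 1);
    if (info != 0) { fprintf(stderr, "dgesv failed: %d\n", (int)info); return 1; }

    printf("x = (%g, %g)\n", b[0], b[1]);   /* expected: (1, 2) */
    return 0;
}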
You can invert a matrix using BLAS operations only by essentially (re-)implementing one of the LAPACK routines.
Refer to one of the many BLAS/LAPACK implementations for more details and examples, e.g. Intel MKL or ATLAS.
Do you really need to compute the full inverse? This is very rarely needed, very expensive, and error prone.
It is much more common to compute the inverse multiplied by a vector or matrix. This is very common, fairly cheap, and not error prone. You do not need to compute the inverse to multiply it by a vector.
If you want to compute Z = X^-1 Y, then you should look at the LAPACK driver routines. Usually Y in this case has only a few columns. If you really need to see all of X^-1, you can set Y to be the full identity.
Technically, you can do what you are asking, but normally it is more stable to do:
a triangular factorization, e.g. LU factorization or Cholesky factorization
use a triangular solver on the factorized matrix
BLAS is quite capable of doing this. Technically, it's in LAPACK, but most/many BLAS implementations include LAPACK; e.g. OpenBLAS and Intel's MKL both do.
the function to do the LU factorization is dgetrf ("(d)ouble, (ge)neral matrix, (tr)iangular (f)actorization") http://www.netlib.org/lapack/double/dgetrf.f
then the solver is dtrsm ("(d)ouble (tr)iangular (s)olve (m)atrix") http://www.netlib.org/blas/dtrsm.f
To call these from C, note that:
the function names should be in lowercase, with _ appended, i.e. dgetrf_ and dtrsm_
all the parameters should be passed as pointers, e.g. int *m and double *a
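Putting that together, a minimal sketch (assuming a Fortran-convention BLAS/LAPACK where the hidden string-length arguments can safely be omitted, which holds on most common platforms):

#include <stdio.h>

/* Fortran routines: every argument is passed by pointer, storage is column-major. */
extern void dgetrf_(int *m, int *n, double *a, int *lda, int *ipiv, int *info);
extern void dtrsm_(char *side, char *uplo, char *transa, char *diag,
                   int *m, int *n, double *alpha,
                   double *a, int *lda, double *b, int *ldb);

int main(void)
{
    int n = 2, nrhs = 1, info, i;
    int ipiv[2];
    double one = 1.0;
    /* Column-major 2x2 matrix A = [3 1; 1 2] and right-hand side b. */
    double A[4] = { 3.0, 1.0, 1.0, 2.0 };
    double b[2] = { 5.0, 5.0 };

    dgetrf_(&n, &n, A, &n, ipiv, &info);   /* A is overwritten by L and U */
    if (info != 0) { fprintf(stderr, "dgetrf_ failed: info=%d\n", info); return 1; }

    /* Apply the row interchanges recorded in ipiv (1-based) to b. */
    for (i = 0; i < n; i++) {
        int j = ipiv[i] - 1;
        double tmp = b[i]; b[i] = b[j]; b[j] = tmp;
    }

    /* Solve L y = P b (L is unit lower triangular), then U x = y. */
    dtrsm_("L", "L", "N", "U", &n, &nrhs, &one, A, &n, b, &n);
    dtrsm_("L", "U", "N", "N", &n, &nrhs, &one, A, &n, b, &n);

    printf("x = (%g, %g)\n", b[0], b[1]);   /* expected: (1, 2) */
    return 0;
}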
