Inverting a matrix of any size - c

I'm using the GNU Scientific Library to implement a calculator which needs to be able to raise matrices to powers. Unfortunately, there doesn't seem to be a function available in the GSL even for multiplying matrices (the gsl_matrix_mul_elements() function only multiplies element-by-element, not as a true matrix product), and by extension, no raising to powers.
I want to be able to raise to negative powers, which requires the ability to take the inverse. From my searching around, I have not been able to find any sound code for calculating the inverses of arbitrarily sized matrices (only ones with fixed, predefined dimensions), and the guides I found for doing it by hand use clever "on-paper tricks" that don't really work in code.
Is there a common algorithm that can be used to compute the inverse of a matrix of any size (failing of course when the inverse cannot be calculated)?

As mentioned in the comments, powers of matrices can be computed for square matrices with integer exponents. The n-th power of A is A^n = A*A*...*A, where A appears n times. If B is the inverse of A, then the -n-th power of A is A^(-n) = (A^-1)^n = B^n = B*B*...*B.
So, in order to compute the n-th power of A, I can suggest the following approach using GSL:
gsl_matrix_set_identity(An);                  // initialize An as I
for (i = 0; i < n; i++) gsl_blas_dgemm(...);  // multiply by A repeatedly (see the sketch below)
For computing the matrix B you can use the following routines:
gsl_linalg_LU_decomp(...);                    // compute the LU decomposition of A
gsl_linalg_LU_invert(...);                    // compute the inverse from the decomposition
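Putting these pieces together, a minimal sketch might look like the following (the helper name matrix_int_power and the temporaries are mine, not part of GSL; error handling is kept minimal, and for negative powers A is assumed to be invertible):

/* Sketch: raise a real square n x n matrix A to the integer power p using GSL. */
#include <gsl/gsl_blas.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_linalg.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_permutation.h>

int matrix_int_power(const gsl_matrix *A, int p, gsl_matrix *result)
{
    size_t n = A->size1;
    gsl_matrix *base = gsl_matrix_alloc(n, n);
    gsl_matrix *tmp  = gsl_matrix_alloc(n, n);
    int status = GSL_SUCCESS;

    if (p >= 0) {
        gsl_matrix_memcpy(base, A);
    } else {
        /* base = A^-1 via LU decomposition (A must be invertible) */
        gsl_matrix *lu = gsl_matrix_alloc(n, n);
        gsl_permutation *perm = gsl_permutation_alloc(n);
        int signum;
        gsl_matrix_memcpy(lu, A);
        gsl_linalg_LU_decomp(lu, perm, &signum);
        status = gsl_linalg_LU_invert(lu, perm, base);
        gsl_permutation_free(perm);
        gsl_matrix_free(lu);
        p = -p;
    }

    gsl_matrix_set_identity(result);                         /* result = I */
    for (int i = 0; i < p && status == GSL_SUCCESS; i++) {
        /* tmp = result * base, then copy back into result */
        gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, result, base, 0.0, tmp);
        gsl_matrix_memcpy(result, tmp);
    }

    gsl_matrix_free(tmp);
    gsl_matrix_free(base);
    return status;
}

Note that gsl_blas_dgemm cannot write its result into one of its own inputs, which is why the extra tmp matrix is needed.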

I recommend reading up about the SVD, which GSL implements. If your matrix is invertible, then computing the inverse via the SVD is a reasonable, though somewhat slow, way to go. If your matrix is not invertible, the SVD allows you to compute the next best thing, the generalised inverse.
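For instance, a rough sketch of a generalised (pseudo-) inverse built on GSL's SVD routines might look like this (the function name, the square-matrix assumption, and the tolerance choice are all mine):

/* Sketch: pseudo-inverse of a square n x n matrix via GSL's SVD.
 * A = U S V^T, so pinv(A) = V S^+ U^T, dropping tiny singular values. */
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

void pseudo_inverse(const gsl_matrix *A, gsl_matrix *Ainv)
{
    size_t n = A->size1;
    gsl_matrix *U = gsl_matrix_alloc(n, n);
    gsl_matrix *V = gsl_matrix_alloc(n, n);
    gsl_vector *S = gsl_vector_alloc(n);
    gsl_vector *work = gsl_vector_alloc(n);
    gsl_matrix *tmp = gsl_matrix_alloc(n, n);

    gsl_matrix_memcpy(U, A);
    gsl_linalg_SV_decomp(U, V, S, work);           /* U is overwritten with the left singular vectors */

    /* tmp = V * S^+ : scale column j of V by 1/s_j, zeroing tiny singular values */
    double cutoff = 1e-12 * gsl_vector_get(S, 0);  /* S[0] is the largest singular value */
    for (size_t j = 0; j < n; j++) {
        double s = gsl_vector_get(S, j);
        double inv = (s > cutoff) ? 1.0 / s : 0.0;
        for (size_t i = 0; i < n; i++)
            gsl_matrix_set(tmp, i, j, gsl_matrix_get(V, i, j) * inv);
    }

    gsl_blas_dgemm(CblasNoTrans, CblasTrans, 1.0, tmp, U, 0.0, Ainv);  /* Ainv = tmp * U^T */

    gsl_matrix_free(tmp);
    gsl_vector_free(work);
    gsl_vector_free(S);
    gsl_matrix_free(V);
    gsl_matrix_free(U);
}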
In matrix calculations the errors inherent in floating point arithmetic can accumulate alarmingly. One example is the Hilbert matrix, an innocent-looking thing with a remarkably large condition number, even for quite moderate dimension. A good test of an inversion routine is to see how big a Hilbert matrix it can invert, and how close the computed inverse times the matrix is to the identity.
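As a concrete version of that test, here is a small sketch using the LU inversion discussed above (the matrix size and the max-entry error measure are arbitrary choices of mine):

/* Sketch: invert an n x n Hilbert matrix with GSL and measure how far
 * H * Hinv is from the identity. Illustrative test, not production code. */
#include <math.h>
#include <stdio.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

int main(void)
{
    size_t n = 10;                                /* try increasing n and watch the error grow */
    gsl_matrix *H    = gsl_matrix_alloc(n, n);
    gsl_matrix *Hinv = gsl_matrix_alloc(n, n);
    gsl_matrix *P    = gsl_matrix_alloc(n, n);
    gsl_matrix *LU   = gsl_matrix_alloc(n, n);
    gsl_permutation *perm = gsl_permutation_alloc(n);
    int signum;

    for (size_t i = 0; i < n; i++)                /* H[i][j] = 1 / (i + j + 1) */
        for (size_t j = 0; j < n; j++)
            gsl_matrix_set(H, i, j, 1.0 / (double)(i + j + 1));

    gsl_matrix_memcpy(LU, H);
    gsl_linalg_LU_decomp(LU, perm, &signum);
    gsl_linalg_LU_invert(LU, perm, Hinv);

    /* P = H * Hinv; report the largest deviation from the identity */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, H, Hinv, 0.0, P);
    double worst = 0.0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double expect = (i == j) ? 1.0 : 0.0;
            double err = fabs(gsl_matrix_get(P, i, j) - expect);
            if (err > worst) worst = err;
        }
    printf("max |H*Hinv - I| entry for n = %zu: %g\n", n, worst);
    /* (cleanup of the allocations omitted for brevity) */
    return 0;
}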

Julia: all eigenvalues of large sparse matrix

I have a large sparse matrix, for example, 128000×128000 SparseMatrixCSC{Complex{Float64},Int64} with 1376000 stored entries.
How can I quickly get all eigenvalues of the sparse matrix? Is it possible?
I tried eigs for the 128000×128000 matrix with 1376000 stored entries, but the kernel died.
I am using a MacBook Pro with 16 GB of memory and Julia 1.3.1 in a Jupyter notebook.
As far as I'm aware (and I would love to be proven wrong) there is no efficient way to get all the eigenvalues of a general sparse matrix.
The main algorithm for computing the eigenvalues of a matrix is the QR algorithm. The first step of the QR algorithm is to reduce the matrix to Hessenberg form (so that each QR factorisation costs O(n^2) instead of O(n^3)). The problem is that reducing a matrix to Hessenberg form destroys the sparsity, and you just end up with a dense matrix.
There are also other methods to compute the eigenvalues of a matrix, like (inverse) power iteration, that only require matrix-vector products and solving linear systems, but these only give you the largest or smallest eigenvalues, and they become expensive when you want to compute all the eigenvalues (they require storing the eigenvectors for the "deflation").
So that was the general case; now, if your matrix has some special structure, there may be better alternatives. For example, if your matrix is symmetric, then its Hessenberg form is tridiagonal and you can compute all the eigenvalues pretty fast.
TL;DR: Is it possible? In general, no.
P.S.: I tried to keep this short, but if you're interested I can give you more details on any part of the answer.

counterintuitive speed difference between LM and shift-invert modes in scipy.sparse.linalg.eigsh?

I'm trying to find the smallest (as in most negative, not lowest magnitude) several eigenvalues of a list of sparse Hermitian matrices in Python using scipy.sparse.linalg.eigsh. The matrices are ~1000x1000, and the list length is ~500-2000. In addition, I know upper and lower bounds on the eigenvalues of all the matrices -- call them eig_UB and eig_LB, respectively.
I've tried two methods:
Using shift-invert mode with sigma=eig_LB.
Subtracting eig_UB from the diagonal of each matrix (thus shifting the smallest eigenvalues to be the largest magnitude eigenvalues), diagonalizing the resulting matrices with default eigsh settings (no shift-invert mode and using which='LM'), and then adding eig_UB to the resulting eigenvalues.
Both methods work and their results agree, but method 1 is around 2-2.5x faster. This seems counterintuitive, since (at least as I understand the eigsh documentation) shift-invert mode subtracts sigma from the diagonal, inverts the matrix, and then finds eigenvalues, whereas default mode directly finds the largest magnitude eigenvalues. Does anyone know what could explain the difference in performance?
One other piece of information: I've checked, and the matrices that result from shift-inverting (that is, (M-sigma*identity)^(-1) if M is the original matrix) are no longer sparse, which seems like it should make finding their eigenvalues take even longer.
This is probably resolved. As pointed out in https://arxiv.org/pdf/1504.06768.pdf, you don't actually need to invert the shifted sparse matrix and then repeatedly apply it in some Lanczos-type method -- you just need to repeatedly solve the linear system (M-sigma*identity)*v(n+1) = v(n) to generate a sequence of vectors {v(n)}. These solves can be done quickly for a sparse matrix once an LU decomposition has been computed.

Fastest way to obtain the singular value decomposition of a bidiagonal matrix (using LAPACK)?

I am looking to find the fastest code/algorithm/package for obtaining the singular value decomposition (SVD) of a real, square bidiagonal matrix. The matrices I am working with are fairly small - typically somewhere around 15x15 in size. However, I am performing this SVD thousands (maybe millions) of times, so the computational cost becomes quite large.
I am writing all of my code in C. I assume the fastest codes for performing the SVD will probably come from LAPACK. I have been looking into LAPACK and it seems like I have quite a few different options for performing the SVD of real, bidiagonal matrices:
dbdsqr - uses a zero-shift QR algorithm to get the SVD of a bidiagonal matrix
dbdsdc - uses a divide and conquer algorithm to get the SVD of a bidiagonal matrix
dgesvd - not sure exactly how this one works, but it can handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dgesdd - not quite sure how this one works either, but it can also handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dstemr - estimates eigenvectors/eigenvalues for a tridiagonal matrix; I can use it to estimate the left singular vectors/values by finding the eigenvectors/values of A*A'; I can then estimate the right singular vectors/values by finding the eigenvectors/values of A'*A
dstemr - perhaps there is an even faster way to use dstemr to get the SVD... please enlighten me if you know of a way
I have no idea which of these methods is fastest. Is there an even faster way to get the SVD of a real, bidiagonal matrix that I haven't listed above?
Ideally I am looking for a small example C code to demonstrate the fastest possible SVD for a relatively small real, bidiagonal matrix.
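For what it's worth, a call to dbdsdc through the LAPACKE C interface might look roughly like the following sketch (not benchmarked; LAPACKE availability, the matrix size, and the d/e values are all assumptions of mine):

/* Sketch: singular values and vectors of a real upper bidiagonal matrix
 * via LAPACKE_dbdsdc (divide and conquer). */
#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    const lapack_int n = 4;
    double d[4] = { 4.0, 3.0, 2.0, 1.0 };   /* main diagonal (overwritten by singular values) */
    double e[3] = { 0.5, 0.5, 0.5 };        /* superdiagonal, n-1 entries                     */
    double u[16], vt[16];                   /* n x n left and right singular vectors          */

    /* 'U' = upper bidiagonal, 'I' = compute U and VT explicitly.
       The q and iq arguments are only used when compq = 'P', so NULL is fine here. */
    lapack_int info = LAPACKE_dbdsdc(LAPACK_ROW_MAJOR, 'U', 'I', n,
                                     d, e, u, n, vt, n, NULL, NULL);
    if (info != 0) {
        fprintf(stderr, "dbdsdc failed: info = %d\n", (int)info);
        return 1;
    }
    for (lapack_int i = 0; i < n; i++)
        printf("sigma[%d] = %g\n", (int)i, d[i]);   /* singular values, descending */
    return 0;
}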

Code for finding eigen values

Hi, I have been trying to write code for finding the eigenvalues of an n*n matrix, but I'm not able to work out what the algorithm should be.
Step 1: Solve det(A - lambda*I) = 0
What should be the algorithm for a general matrix, for finding lambda?
I have written code for finding the determinant of a matrix. Can this be used in our algorithm?
Please help. It will be really appreciated.
[Assuming your matrix is Hermitian (simply put, symmetric for real matrices), so the eigenvalues are real numbers.]
In computation, you don't solve for the eigenvectors and eigenvalues using the determinant. It's too slow and numerically unstable.
What you do is apply a transformation (Householder reduction) to reduce your matrix to a tridiagonal form.
After that, you apply what is known as the QL algorithm.
As a starting point, look at tred2 and tqli from Numerical Recipes (www.nr.com). These are the algorithms I've just described.
Note that these routines also recover candidate eigenvectors.
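If you would rather call a library than implement tred2/tqli yourself, GSL (mentioned in the first question above) wraps essentially this Householder-plus-QL/QR approach for real symmetric matrices; a minimal sketch (the 3x3 example values are mine):

/* Sketch: eigenvalues of a real symmetric matrix with GSL's symmetric solver
 * (Householder reduction to tridiagonal form followed by QL/QR iteration). */
#include <stdio.h>
#include <gsl/gsl_eigen.h>

int main(void)
{
    double data[] = { 2.0, 1.0, 0.0,
                      1.0, 2.0, 1.0,
                      0.0, 1.0, 2.0 };              /* small symmetric example */
    gsl_matrix_view m = gsl_matrix_view_array(data, 3, 3);
    gsl_vector *eval = gsl_vector_alloc(3);
    gsl_eigen_symm_workspace *w = gsl_eigen_symm_alloc(3);

    gsl_eigen_symm(&m.matrix, eval, w);             /* the diagonal and lower triangle of m are destroyed */

    for (size_t i = 0; i < 3; i++)
        printf("lambda[%zu] = %g\n", i, gsl_vector_get(eval, i));

    gsl_eigen_symm_free(w);
    gsl_vector_free(eval);
    return 0;
}

The gsl_eigen_symmv variant additionally returns the eigenvectors, as tred2/tqli do.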

BLAS : Matrix product in C?

I would like to perform some fast operations in C using BLAS (there is no chance of choosing another library; it is the only one available in my project).
I do the following operations:
Invert a square matrix,
Compute a matrix product A*B, where A is the computed inverse matrix and B is a vector,
Sum two (very long) vectors.
I heard these kinds of operations were possible with BLAS and were very fast, but I searched and found nothing (in actual C code, I mean) that could help me understand and apply them.
The BLAS library was written originally in Fortran. The interface to C is called CBLAS and has all functions prefixed by cblas_.
Unfortunately, with BLAS you can directly address only the last two points:
sgemv (single precision) or dgemv (double precision) performs matrix-vector multiplication
saxpy (single precision) or daxpy (double precision) performs general vector-vector addition
BLAS does not deal with the more complex operation of inverting the matrix. For that there is the LAPACK library, which builds on BLAS and provides linear algebra operations. General matrix inversion in LAPACK is done with sgetri (single precision) or dgetri (double precision), but there are other inversion routines that handle specific cases like symmetric matrices. If you are inverting the matrix only to multiply it later by a vector, that is essentially solving a system of linear equations, and for that there are sgesv (single precision) and dgesv (double precision).
You can invert a matrix using BLAS operations only by essentially (re-)implementing one of the LAPACK routines.
Refer to one of the many BLAS/LAPACK implementations for more details and examples, e.g. Intel MKL or ATLAS.
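To make this concrete, here is a rough sketch of how the three operations above could map onto these routines (assuming a CBLAS + LAPACKE installation; the sizes and values are placeholders of mine). Because dgesv solves A*x = b directly, it replaces the invert-then-dgemv pair:

/* Sketch: solve A*x = b instead of inverting A, then add two vectors. */
#include <stdio.h>
#include <cblas.h>
#include <lapacke.h>

int main(void)
{
    lapack_int n = 3;
    /* Row-major 3x3 matrix A and right-hand side b */
    double A[9] = { 4.0, 1.0, 0.0,
                    1.0, 3.0, 1.0,
                    0.0, 1.0, 2.0 };
    double b[3] = { 1.0, 2.0, 3.0 };        /* overwritten with the solution x = A^-1 * b */
    lapack_int ipiv[3];

    /* Points 1 and 2 together: x = A^-1 * b via an LU-based solve (dgesv). */
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, 1, A, n, ipiv, b, 1);
    if (info != 0) { fprintf(stderr, "dgesv failed: info = %d\n", (int)info); return 1; }

    /* Point 3: y = y + 1.0 * x with daxpy. */
    double y[3] = { 10.0, 10.0, 10.0 };
    cblas_daxpy(3, 1.0, b, 1, y, 1);

    for (int i = 0; i < 3; i++)
        printf("x[%d] = %g   y[%d] = %g\n", i, b[i], i, y[i]);
    return 0;
}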
Do you really need to compute the full inverse? This is very rarely needed, very expensive, and error prone.
It is much more common to compute the inverse multiplied by a vector or matrix. This is very common, fairly cheap, and not error prone. You do not need to compute the inverse to multiply it by a vector.
If you want to compute Z = X^-1 * Y then you should look at the LAPACK driver routines. Usually Y in this case has only a few columns. If you really need to see all of X^-1 then you can set Y to be the full identity.
Technically, you can do what you are asking, but normally it is more stable to do:
triangular factorization, eg LU factorization, or Cholesky factorization
use a triangular solver on the factorized matrix
BLAS is quite capable of doing this. Technically, it's in 'LAPACK', but most/many BLAS implementations include LAPACK, eg OpenBLAS and Intel's MKL both do.
the function to do the LU factorization is dgetrf ("(d)ouble, (ge)neral matrix, (tr)iangular (f)actorization") http://www.netlib.org/lapack/double/dgetrf.f
then the solver is dtrsm ("(d)ouble (tr)iangular (s)olve (m)atrix") http://www.netlib.org/blas/dtrsm.f
To call these from C, note that:
the function names should be in lowercase, with _ postfixed, ie dgetrf_ and dtrsm_
all the parameters should be pointers, eg int *m and double *a
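A rough sketch of what this might look like (assuming a Fortran-compiled BLAS/LAPACK with the usual lowercase-plus-underscore convention; the matrix values are placeholders of mine, and the data must be stored column-major):

/* Sketch: solve A*x = b by LU-factorizing with dgetrf_ and then doing two
 * triangular solves with dtrsm_, calling the Fortran routines directly. */
#include <stdio.h>

extern void dgetrf_(int *m, int *n, double *a, int *lda, int *ipiv, int *info);
extern void dtrsm_(char *side, char *uplo, char *transa, char *diag,
                   int *m, int *n, double *alpha,
                   double *a, int *lda, double *b, int *ldb);

int main(void)
{
    int n = 3, nrhs = 1, info = 0, ipiv[3];
    double alpha = 1.0;
    /* Column-major (Fortran-order) 3x3 matrix and right-hand side. */
    double a[9] = { 4.0, 1.0, 0.0,      /* first column  */
                    1.0, 3.0, 1.0,      /* second column */
                    0.0, 1.0, 2.0 };    /* third column  */
    double b[3] = { 1.0, 2.0, 3.0 };

    dgetrf_(&n, &n, a, &n, ipiv, &info);            /* A = P*L*U, stored in place */
    if (info != 0) { fprintf(stderr, "dgetrf info = %d\n", info); return 1; }

    /* Apply the row interchanges recorded in ipiv to b (what dlaswp does). */
    for (int i = 0; i < n; i++) {
        int p = ipiv[i] - 1;                        /* ipiv is 1-based */
        double t = b[i]; b[i] = b[p]; b[p] = t;
    }

    /* Solve L*y = P*b (L is unit lower triangular), then U*x = y. */
    dtrsm_("L", "L", "N", "U", &n, &nrhs, &alpha, a, &n, b, &n);
    dtrsm_("L", "U", "N", "N", &n, &nrhs, &alpha, a, &n, b, &n);

    for (int i = 0; i < n; i++)
        printf("x[%d] = %g\n", i, b[i]);            /* b now holds the solution */
    return 0;
}

In practice, LAPACK's dgetrs_ bundles the pivot application and both triangular solves into a single call, so dgetrf_ followed by dgetrs_ is the more common pairing.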
