Eigenvector (Spectral) Decomposition - C

I am trying to find a program in C that will allow me to compute an eigenvalue (spectral) decomposition for a square matrix. I am specifically trying to find code where the highest eigenvalue (and therefore its associated eigenvector) is located in the first column.
The reason I need the output in this order is that I am trying to compute eigenvector centrality, so I only really need to calculate the eigenvector associated with the highest eigenvalue. Thanks in advance!

In any case I would recommend using a dedicated linear algebra package like LAPACK (Fortran, but callable from C) or CLAPACK. Both are free and offer routines for almost any eigenvalue problem. If the matrix is large it might be preferable to exploit its sparsity, e.g. by using ARPACK. All of these libraries tend to sort the eigenvectors according to the eigenvalues when they can (i.e. when the eigenvalues are real or purely imaginary).
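That said, since eigenvector centrality only needs the eigenvector of the largest eigenvalue, a plain power iteration may already be enough and avoids any library dependency. A minimal C sketch (my own illustration, assuming the dominant eigenvalue is unique in magnitude, which holds for the adjacency matrix of a connected undirected graph):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Power iteration: returns the dominant eigenvalue of the n x n matrix a
     * (row-major) and leaves the associated eigenvector in v.
     * Assumes the dominant eigenvalue is unique in magnitude. */
    static double power_iteration(const double *a, double *v, int n,
                                  int max_iter, double tol)
    {
        double *w = malloc(n * sizeof *w);
        double lambda = 0.0;
        for (int i = 0; i < n; i++) v[i] = 1.0 / sqrt((double)n); /* start vector */

        for (int iter = 0; iter < max_iter; iter++) {
            /* w = A * v */
            for (int i = 0; i < n; i++) {
                w[i] = 0.0;
                for (int j = 0; j < n; j++) w[i] += a[i * n + j] * v[j];
            }
            /* Rayleigh quotient estimate, then normalise the new iterate */
            double norm = 0.0, est = 0.0;
            for (int i = 0; i < n; i++) { est += v[i] * w[i]; norm += w[i] * w[i]; }
            norm = sqrt(norm);
            for (int i = 0; i < n; i++) v[i] = w[i] / norm;
            if (fabs(est - lambda) < tol * fabs(est)) { lambda = est; break; }
            lambda = est;
        }
        free(w);
        return lambda;
    }

    int main(void)
    {
        double a[9] = { 0, 1, 1,   /* made-up adjacency matrix of a small graph */
                        1, 0, 1,
                        1, 1, 0 };
        double v[3];
        double lambda = power_iteration(a, v, 3, 1000, 1e-12);
        printf("lambda = %g, v = (%g, %g, %g)\n", lambda, v[0], v[1], v[2]);
        return 0;
    }

ARPACK's Lanczos/Arnoldi iterations are essentially the industrial-strength version of this idea for large sparse matrices.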

See the book "Numerical Recipes in C".

And the #1 Google hit (search: "eigenvalue decomposition code C#") doesn't help?
http://crsouza.blogspot.com/2010/06/generalized-eigenvalue-decomposition-in.html

Related

Julia: all eigenvalues of large sparse matrix

I have a large sparse matrix, for example, a 128000×128000 SparseMatrixCSC{Complex{Float64},Int64} with 1376000 stored entries.
How can I quickly get all the eigenvalues of the sparse matrix? Is it possible?
I tried eigs on the 128000×128000 matrix with 1376000 stored entries, but the kernel died.
I am using a MacBook Pro with 16 GB of memory and Julia 1.3.1 in a Jupyter notebook.
As far as I'm aware (and I would love to be proven wrong) there is no efficient way to get all the eigenvalues of a general sparse matrix.
The main algorithm for computing the eigenvalues of a matrix is the QR algorithm. The first step of the QR algorithm is to reduce the matrix to Hessenberg form (in order to do each QR factorisation in O(n^2) rather than O(n^3) time). The problem is that reducing a matrix to Hessenberg form destroys the sparsity, and you just end up with a dense matrix.
There are also other methods for computing the eigenvalues of a matrix, like the (inverse) power iteration, that only require matrix-vector products and solving linear systems, but these only give you the largest or smallest eigenvalues, and they become expensive when you want to compute all the eigenvalues (they require storing the eigenvectors for the "deflation").
So much for the general case; if your matrix has some special structure, there may be better alternatives. For example, if your matrix is symmetric, then its Hessenberg form is tridiagonal and you can compute all the eigenvalues pretty fast, as in the sketch below.
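To illustrate that last point: once you have the tridiagonal form (diagonal d, off-diagonal e), LAPACK's dstev returns all the eigenvalues quickly. A minimal C sketch via the LAPACKE interface (the matrix entries are made up; as far as I know, Julia's eigen on a SymTridiagonal ends up in the same LAPACK code path):

    #include <stdio.h>
    #include <lapacke.h>

    int main(void)
    {
        /* Symmetric tridiagonal matrix: diagonal d, off-diagonal e (made-up values). */
        double d[4] = { 2.0, 2.0, 2.0, 2.0 };
        double e[3] = { -1.0, -1.0, -1.0 };
        double z[16]; /* not referenced with jobz = 'N', but kept valid */

        /* jobz = 'N': eigenvalues only; on exit d holds them in ascending order. */
        lapack_int info = LAPACKE_dstev(LAPACK_ROW_MAJOR, 'N', 4, d, e, z, 4);
        if (info != 0) { fprintf(stderr, "dstev failed: %d\n", (int) info); return 1; }

        for (int i = 0; i < 4; i++) printf("lambda[%d] = %f\n", i, d[i]);
        return 0;
    }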
TL;DR: Is it possible? In general, no.
P.S.: I tried to keep this short, but if you're interested I can give you more details on any part of the answer.

Armadillo sparse LU (or Cholesky) decomposition

I need to solve a linear equation (Ax = b) with a large sparse A and with different b's multiple times. Thus, an LU factorisation (or Cholesky if A is symmetric) is highly preferable.
I'm using the Armadillo library and I've heard that one can use the function:
spsolve(x, A, b, "superlu");
in order to solve such a system. I'm not very concerned about retrieving the L and U matrices. However, it is of paramount importance that L and U are not recomputed every time I call spsolve.
Does spsolve(x, A, b, "superlu") store the LU decomposition and if not, is there a way to retrieve said matrices?
The fact is that Armadillo is, essentially, a wrapper for software developed by other teams; this is particularly true of the SuperLU sparse solver. The feature that you are asking for (solving a series of systems of equations with one and the same matrix but different right-hand sides) may not exist in Armadillo at all. You should probably use a sparse linear solver directly (not necessarily SuperLU) that has that feature built in. In case your system is so large that a factorisation-based solver cannot cope with it, an iterative solver may do, and there is an additional option in that case: since modern CPUs are multi-core, several independent solution processes can be run in parallel. One such iterative solver is described in the following blog (you may ask questions and/or participate in the discussion there): http://comecau.blogspot.com/2018_09_05_archive.html
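To make the "factorise once, solve many times" pattern concrete, here is what it looks like with LAPACK's dense LU pair dgetrf/dgetrs via LAPACKE (a sketch with a made-up dense matrix; for a sparse A the same pattern exists in SuperLU's own C API, dgstrf/dgstrs, and in other direct sparse solvers, rather than in repeated calls to spsolve):

    #include <stdio.h>
    #include <lapacke.h>

    int main(void)
    {
        /* 3x3 system, row-major, made-up values. Factorise A = P*L*U once... */
        double a[9] = { 4, 1, 0,
                        1, 4, 1,
                        0, 1, 4 };
        lapack_int ipiv[3];
        lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, 3, 3, a, 3, ipiv);
        if (info != 0) return 1;

        /* ...then reuse the stored factors for as many right-hand sides as needed. */
        double b1[3] = { 1, 0, 0 };
        double b2[3] = { 0, 0, 1 };
        LAPACKE_dgetrs(LAPACK_ROW_MAJOR, 'N', 3, 1, a, 3, ipiv, b1, 1);
        LAPACKE_dgetrs(LAPACK_ROW_MAJOR, 'N', 3, 1, a, 3, ipiv, b2, 1);

        printf("x1 = (%f, %f, %f)\n", b1[0], b1[1], b1[2]);
        printf("x2 = (%f, %f, %f)\n", b2[0], b2[1], b2[2]);
        return 0;
    }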

Fastest way to obtain the singular value decomposition of a bidiagonal matrix (using LAPACK)?

I am looking to find the fastest code/algorithm/package for obtaining the singular value decomposition (SVD) of a real, square bidiagonal matrix. The matrices I am working with are fairly small - typically somewhere around 15x15 in size. However, I am performing this SVD thousands (maybe millions) of times, so the computational cost becomes quite large.
I am writing all of my code in C. I assume the fastest codes for performing the SVD will probably come from LAPACK. I have been looking into LAPACK and it seems like I have quite a few different options for performing the SVD of real, bidiagonal matrices:
dbdsqr - uses a zero-shift QR algorithm to get the SVD of a bidiagonal matrix
dbdsdc - uses a divide and conquer algorithm to get the SVD of a bidiagonal matrix
dgesvd - not sure exactly how this one works, but it can handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dgesdd - not quite sure how this one works either, but it can also handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dstemr - estimates eigenvectors/eigenvalues for a tridiagonal matrix; I can use it to estimate the left singular vectors/values by finding the eigenvectors/eigenvalues of A*A' (the singular values being the square roots of those eigenvalues); I can then estimate the right singular vectors/values by finding the eigenvectors/eigenvalues of A'*A
dstemr - perhaps there is an even faster way to use dstemr to get the SVD... please enlighten me if you know of a way
I have no idea which of these methods is fastest. Is there an even faster way to get the SVD of a real, bidiagonal matrix that I haven't listed above?
Ideally I am looking for a small example C code to demonstrate the fastest possible SVD for a relatively small real, bidiagonal matrix.
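As a starting point, here is a small, self-contained C example of the dbdsdc route through the LAPACKE interface (the bidiagonal entries are made up, and which of dbdsqr and dbdsdc is actually faster at around 15x15 is something you would have to benchmark):

    #include <stdio.h>
    #include <lapacke.h>

    #define N 4

    int main(void)
    {
        /* Upper bidiagonal matrix: main diagonal d, superdiagonal e (made-up values). */
        double d[N]     = { 4.0, 3.0, 2.0, 1.0 };
        double e[N - 1] = { 0.5, 0.5, 0.5 };
        double u[N * N], vt[N * N];

        /* compq = 'I': compute the singular values (returned in d, descending)
           and the left/right singular vectors in u and vt; the q and iq
           arguments are not referenced in this mode. */
        lapack_int info = LAPACKE_dbdsdc(LAPACK_ROW_MAJOR, 'U', 'I', N,
                                         d, e, u, N, vt, N, NULL, NULL);
        if (info != 0) { fprintf(stderr, "dbdsdc failed: %d\n", (int) info); return 1; }

        for (int i = 0; i < N; i++) printf("sigma[%d] = %f\n", i, d[i]);
        return 0;
    }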

Fast factorization of a polynomial with integer coefficients

I want to quickly factor a polynomial over the ring of integers (the original polynomial has integer coefficients and all of its factors must have integer coefficients).
For example I want to decompose 4*x^6 + 20*x^5 + 29*x^4 - 14*x^3 - 71*x^2 - 48*x as (2*x^4 + 7*x^3 + 4*x^2 - 13*x - 16)*(2*x + 3)*x.
Which algorithm should I pick to avoid both code complexity and an inefficient approach (in terms of the total number of arithmetic operations and memory consumption)?
I'm going to use the C programming language.
For example, maybe there are some good algorithms for polynomial factorization over the ring of integers modulo a prime number?
Since Sage is free and open source, you should be able to find the algorithm that Sage uses and then call it or, at worst, re-implement it in C. However, if you really must write a procedure from scratch, this is what I would do: First find the gcd of all the coefficients and divide it out, which makes your polynomial "content-free". Then take the derivative and find the polynomial gcd of the original polynomial and its derivative. Take that factor out of the original polynomial by polynomial division, which breaks your problem into two parts: factoring a content-free, square-free polynomial (p/gcd(p,p')), and factoring another polynomial (gcd(p,p')) which may not be square-free. For the latter, start over at the beginning, until you have reduced the problem to factoring one or more content-free, square-free polynomials. A sketch of these first steps is below.
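Here is a small C sketch of those first steps only (my own illustration, not Sage's code): removing the content and forming the derivative. The polynomial gcd needed for the square-free split would sit on top of this and is omitted.

    #include <stdio.h>
    #include <stdlib.h>

    /* gcd of two non-negative integers */
    static long gcd(long a, long b) { while (b) { long t = a % b; a = b; b = t; } return a; }

    /* Divide out the content (gcd of all coefficients).
     * p[i] is the coefficient of x^i, for i = 0..deg. */
    static long remove_content(long *p, int deg)
    {
        long c = 0;
        for (int i = 0; i <= deg; i++) c = gcd(c, labs(p[i]));
        if (c > 1) for (int i = 0; i <= deg; i++) p[i] /= c;
        return c ? c : 1;
    }

    /* dp gets the derivative of p; returns the degree of dp. */
    static int derivative(const long *p, int deg, long *dp)
    {
        for (int i = 1; i <= deg; i++) dp[i - 1] = i * p[i];
        return deg > 0 ? deg - 1 : 0;
    }

    int main(void)
    {
        /* 4x^6 + 20x^5 + 29x^4 - 14x^3 - 71x^2 - 48x, lowest degree first */
        long p[7] = { 0, -48, -71, -14, 29, 20, 4 };
        long dp[6];
        long content = remove_content(p, 6); /* the content is 1 for this example */
        int ddeg = derivative(p, 6, dp);
        printf("content = %ld, deg p' = %d\n", content, ddeg);
        /* next step: factor gcd(p, p') out of p by polynomial division */
        return 0;
    }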
The next step would be to implement a factoring algorithm mod p. Berlekamp's algorithm is probably easiest, although Cantor-Zassenhaus is state of the art.
Finally, apply the Zassenhaus algorithm to factor over the integers. If you find it is too slow, it can be improved using the Lenstra-Lenstra-Lovász lattice basis reduction algorithm: http://en.wikipedia.org/wiki/Factorization_of_polynomials#Factoring_univariate_polynomials_over_the_integers
As you can see, this is all rather complicated and depends on a great deal of theory from abstract algebra. You're much better off using the same library that Sage uses, or re-implementing the Sage implementation, or even just calling a running version of the Sage kernel from within your program.
According to this answer on mathoverflow, Sage uses FLINT to do factorisation.
FLINT (Fast Library for Number Theory) is a C library in support of computations in number theory. It's also a research project into algorithms in number theory.
So you can look at, and even use, the implementation of the factorisation algorithms in that library, which is well-tested and stable. A sketch of what that looks like:
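For instance, factoring the polynomial from the question with FLINT looks roughly like this (a sketch against FLINT's fmpz_poly_factor module; the exact headers and struct fields may vary between FLINT versions, so check the documentation of your release):

    #include <stdio.h>
    #include <flint/fmpz_poly.h>
    #include <flint/fmpz_poly_factor.h>

    int main(void)
    {
        fmpz_poly_t p;
        fmpz_poly_factor_t fac;

        fmpz_poly_init(p);
        fmpz_poly_factor_init(fac);

        /* 4x^6 + 20x^5 + 29x^4 - 14x^3 - 71x^2 - 48x */
        fmpz_poly_set_coeff_si(p, 6, 4);
        fmpz_poly_set_coeff_si(p, 5, 20);
        fmpz_poly_set_coeff_si(p, 4, 29);
        fmpz_poly_set_coeff_si(p, 3, -14);
        fmpz_poly_set_coeff_si(p, 2, -71);
        fmpz_poly_set_coeff_si(p, 1, -48);

        fmpz_poly_factor(fac, p); /* factor over the integers */

        for (slong i = 0; i < fac->num; i++) {
            fmpz_poly_print_pretty(fac->p + i, "x");
            printf("  (exponent %ld)\n", (long) fac->exp[i]);
        }

        fmpz_poly_factor_clear(fac);
        fmpz_poly_clear(p);
        return 0;
    }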

Code for finding eigenvalues

Hi, I have been trying to write code to find the eigenvalues of an n*n matrix, but I'm not able to work out what the algorithm should be.
Step 1: Solve det(A - lambda*I) = 0.
What should the algorithm be for finding lambda for a general matrix?
I have written code for finding the determinant of a matrix; can this be used in the algorithm?
Please help. It will be really appreciated.
[Assuming your matrix is Hermitian (simply put, symmetric), so the eigenvalues are real numbers.]
In computation, you don't solve for the eigenvectors and eigenvalues using the determinant. It's too slow and numerically unstable.
What you do is apply a transformation (the Householder reduction) to reduce your matrix to tridiagonal form.
After that, you apply what is known as the QL algorithm.
As a starting point, look at tred2 and tqli from Numerical Recipes (www.nr.com). These are implementations of the two steps I've just described.
Note that these routines also recover candidate eigenvectors.
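If you'd rather not type the Numerical Recipes routines in yourself, the same reduce-then-iterate approach is packaged in LAPACK (dsytrd plus a tridiagonal eigensolver, wrapped by the driver dsyev). A minimal C call through LAPACKE, with a made-up symmetric matrix:

    #include <stdio.h>
    #include <lapacke.h>

    int main(void)
    {
        /* Symmetric 3x3 matrix, row-major; only the upper triangle is read. */
        double a[9] = { 2, -1,  0,
                       -1,  2, -1,
                        0, -1,  2 };
        double w[3]; /* eigenvalues, ascending on exit */

        /* jobz = 'V': also overwrite a with the orthonormal eigenvectors. */
        lapack_int info = LAPACKE_dsyev(LAPACK_ROW_MAJOR, 'V', 'U', 3, a, 3, w);
        if (info != 0) { fprintf(stderr, "dsyev failed: %d\n", (int) info); return 1; }

        for (int i = 0; i < 3; i++)
            printf("lambda[%d] = %f\n", i, w[i]);
        return 0;
    }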
