I have two n x k complex matrices A and B. I need to calculate C = A B^H. C is guaranteed to be real, symmetric and all elements will be non-negative. These matrices will be pretty large so I am going to be using either CBLAS or MKL in C for this purpose. However, I cannot find a way to tell BLAS that C is symmetric, at least based on the quick reference document. This is important as it would halve the computation time. The xHER2K routine will give me a B A^H term which I don't want. Please suggest a way to proceed in this matter.
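One observation may help here: since C is guaranteed to be Hermitian, taking the conjugate transpose gives B A^H = (A B^H)^H = C^H = C, so the "unwanted" xHER2K term is equal to the one you want. Calling zher2k with alpha = 0.5 and beta = 0 therefore produces exactly C = A B^H while writing only one triangle. A minimal sketch (the function name ab_hermitian is mine, not a library routine); note that each triangle entry still costs two dot products, so the flop count is comparable to a full zgemm, and the gains are the guaranteed-Hermitian result and touching only one triangle:

#include <complex.h>
#include <cblas.h>

/* C := A*B^H, valid when A*B^H is known to be Hermitian.
   A and B are n x k, C is n x n, all column-major. */
void ab_hermitian(int n, int k,
                  const double complex *A, const double complex *B,
                  double complex *C)
{
    const double complex alpha = 0.5;
    /* zher2k: C := 0.5*A*B^H + 0.5*B*A^H = A*B^H, upper triangle only */
    cblas_zher2k(CblasColMajor, CblasUpper, CblasNoTrans,
                 n, k, &alpha, A, n, B, n, 0.0, C, n);
    /* Only the upper triangle of C is written; mirror it if the full
       matrix is needed downstream. */
}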
I need to solve a linear equation (Ax = b) with a large sparse A and with different right-hand sides b, multiple times. Thus an LU factorisation (or Cholesky if A is symmetric) is highly preferable.
I'm using the armadillo library and I've heard that one can use the function:
spsolve(x, A, b, "superlu");
in order to solve such a system. I'm not very concerned about retrieving the L and U matrices. However, it is of primary importance that L and U aren't recomputed every time I call spsolve.
Does spsolve(x, A, b, "superlu") store the LU decomposition and if not, is there a way to retrieve said matrices?
The fact is that armadillo is essentially a wrapper for software developed by other teams; this is particularly true of the superlu sparse solver. The feature you are asking for (solving a series of systems of equations with one and the same matrix but different right-hand sides) may not exist in armadillo at all. You should probably use a sparse linear solver directly (not necessarily superlu) that has that feature built in. If your system is so large that a factorisation-based solver cannot cope with it, an iterative solver may do, and there is an option worth considering in that case: since modern CPUs are multi-core, several independent solution processes can be run in parallel. One such iterative solver is described in the following blog (you may ask questions and/or participate in the discussion there): http://comecau.blogspot.com/2018_09_05_archive.html
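In case a factorisation-based route does work for your sizes, the "factor once, solve many" pattern itself is straightforward. Below is a minimal dense-LAPACK sketch of it (dgetrf to factor, dgetrs to solve); for sparse matrices, SuperLU's dgstrf/dgstrs pair plays the analogous roles, with a more involved call sequence:

#include <stdlib.h>

/* Fortran-convention LAPACK prototypes: trailing underscore, pointer
   arguments, column-major data. */
extern void dgetrf_(int *m, int *n, double *a, int *lda, int *ipiv, int *info);
extern void dgetrs_(const char *trans, int *n, int *nrhs, double *a, int *lda,
                    int *ipiv, double *b, int *ldb, int *info);

/* Factor A once, then reuse the factors for every right-hand side.
   a (n x n) is overwritten by its LU factors; each b_i by its solution. */
void factor_once_solve_many(int n, double *a, double **bs, int count)
{
    int *ipiv = malloc(n * sizeof *ipiv);
    int info, one = 1;
    dgetrf_(&n, &n, a, &n, ipiv, &info);       /* the expensive step, done once */
    for (int i = 0; i < count; i++)            /* cheap per-b triangular solves */
        dgetrs_("N", &n, &one, a, &n, ipiv, bs[i], &n, &info);
    free(ipiv);
}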
I am looking to find the fastest code/algorithm/package for obtaining the singular value decomposition (SVD) of a real, square bidiagonal matrix. The matrices I am working with are fairly small - typically somewhere around 15x15 in size. However, I am performing this SVD thousands (maybe millions) of times, so the computational cost becomes quite large.
I am writing all of my code in C. I assume the fastest codes for performing the SVD will probably come from LAPACK. I have been looking into LAPACK and it seems like I have quite a few different options for performing the SVD of real, bidiagonal matrices:
dbdsqr - uses a zero-shift QR algorithm to get the SVD of a bidiagonal matrix
dbdsdc - uses a divide and conquer algorithm to get the SVD of a bidiagonal matrix
dgesvd - not sure exactly how this one works, but it can handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dgesdd - not quite sure how this one works either, but it can also handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dstemr - estimates eigenvectors/eigenvalues for a tridiagonal matrix; I can use it to estimate the left singular vectors/values by finding the eigenvectors/values of A*A'; I can then estimate the right singular vectors/values by finding the eigenvectors/values of A'*A
dstemr - perhaps there is an even faster way to use dstemr to get the SVD... please enlighten me if you know of a way
I have no idea which of these methods is fastest. Is there an even faster way to get the SVD of a real, bidiagonal matrix that I haven't listed above?
Ideally I am looking for a small example C code to demonstrate the fastest possible SVD for a relatively small real, bidiagonal matrix.
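For the bidiagonal case specifically, a minimal sketch of calling dbdsqr directly from C might look like the following (assuming the common Fortran-symbol convention of a lowercase name with a trailing underscore and all arguments passed as pointers; the 4*N workspace covers all cases per the LAPACK documentation):

#include <stdio.h>

/* LAPACK's bidiagonal QR SVD, Fortran calling convention. */
extern void dbdsqr_(const char *uplo, const int *n, const int *ncvt,
                    const int *nru, const int *ncc,
                    double *d, double *e, double *vt, const int *ldvt,
                    double *u, const int *ldu, double *c, const int *ldc,
                    double *work, int *info);

#define N 15

int main(void)
{
    double d[N], e[N - 1];              /* diagonal and superdiagonal of B */
    static double u[N * N], vt[N * N];  /* singular vectors (column-major) */
    double work[4 * N], cdum = 0.0;
    int n = N, ncvt = N, nru = N, ncc = 0;
    int ldvt = N, ldu = N, ldc = 1, info, i;

    for (i = 0; i < N; i++)     d[i] = 2.0;   /* an example upper-bidiagonal matrix */
    for (i = 0; i < N - 1; i++) e[i] = 1.0;

    /* Start U and VT at the identity so dbdsqr accumulates Q and P^T. */
    for (i = 0; i < N; i++) { u[i * N + i] = 1.0; vt[i * N + i] = 1.0; }

    dbdsqr_("U", &n, &ncvt, &nru, &ncc, d, e, vt, &ldvt, u, &ldu,
            &cdum, &ldc, work, &info);

    if (info == 0)                      /* d now holds the singular values, descending */
        for (i = 0; i < N; i++) printf("sigma[%d] = %g\n", i, d[i]);
    return info;
}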
I'm using the GNU Scientific Library in implementing a calculator which needs to be able to raise matrices to powers. Unfortunately, there doesn't seem to be a function in the GSL even for multiplying matrices (the gsl_matrix_mul_elements() function only multiplies element-by-element, not as a true matrix product), and by extension there is no raising to powers.
I want to be able to raise to negative powers, which requires the ability to take the inverse. From my searching around, I have not been able to find any sound code for computing the inverse of an arbitrary matrix (only for matrices of fixed, known dimensions), and the guides I found for doing it by hand use clever on-paper tricks that don't really translate into code.
Is there a common algorithm that can be used to compute the inverse of a matrix of any size (failing of course when the inverse cannot be calculated)?
As mentioned in the comments, powers of a square matrix can be computed for integer exponents. The n-th power of A is A^n = A*A*...*A, where A appears n times. If B is the inverse of A, then the (-n)-th power of A is A^(-n) = (A^(-1))^n = B^n = B*B*...*B.
So, to compute the n-th power of A, I can suggest the following algorithm using GSL (An and Tmp are preallocated gsl_matrix objects of the same size as A):
gsl_matrix_set_identity(An);                       // initialize An as I
for (i = 0; i < n; i++) {                          // repeatedly multiply: An = An * A
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, An, A, 0.0, Tmp);
    gsl_matrix_memcpy(An, Tmp);                    // dgemm cannot write into its own input
}
For computing the matrix B you can use the following routines (p is a gsl_permutation and signum an int, as required by the LU interface):
gsl_linalg_LU_decomp(A, p, &signum);   // compute the LU decomposition (A is overwritten)
gsl_linalg_LU_invert(A, p, B);         // compute the inverse from the decomposition
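Putting the pieces together, a self-contained sketch could look like the following; matrix_power and matrix_power_signed are hypothetical helper names, and error checking is omitted for brevity:

#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

/* P = M^n for n >= 0 (all matrices square, same size, preallocated). */
static void matrix_power(gsl_matrix *P, const gsl_matrix *M, unsigned n)
{
    gsl_matrix *T = gsl_matrix_alloc(P->size1, P->size2);
    gsl_matrix_set_identity(P);
    for (unsigned i = 0; i < n; i++) {
        gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, P, M, 0.0, T);
        gsl_matrix_memcpy(P, T);            /* dgemm cannot write in place */
    }
    gsl_matrix_free(T);
}

/* P = M^n for any integer n; negative exponents go through the LU inverse. */
void matrix_power_signed(gsl_matrix *P, const gsl_matrix *M, int n)
{
    if (n >= 0) { matrix_power(P, M, (unsigned)n); return; }

    gsl_matrix *LU  = gsl_matrix_alloc(M->size1, M->size2);
    gsl_matrix *inv = gsl_matrix_alloc(M->size1, M->size2);
    gsl_permutation *p = gsl_permutation_alloc(M->size1);
    int signum;

    gsl_matrix_memcpy(LU, M);
    gsl_linalg_LU_decomp(LU, p, &signum);   /* factor a copy of M */
    gsl_linalg_LU_invert(LU, p, inv);       /* inv = M^-1 */
    matrix_power(P, inv, (unsigned)(-n));

    gsl_permutation_free(p);
    gsl_matrix_free(inv);
    gsl_matrix_free(LU);
}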
I recommend reading up on the SVD, which the GSL implements. If your matrix is invertible, then computing the inverse via the SVD is a reasonable, though somewhat slow, way to go. If your matrix is not invertible, the SVD allows you to compute the next best thing, the generalised inverse.
In matrix calculations the errors inherent in floating point arithmetic can accumulate alarmingly. One example is the Hilbert matrix, an innocent looking thing with a remarkably large condition number, even for quite moderate dimension. A good test of an inversion routine is to see how big a Hilbert matrix it can invert, and how close the computed inverse times the matrix is to the identity.
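As a concrete version of that test, here is a minimal GSL sketch: build the n x n Hilbert matrix a_ij = 1/(i+j+1), invert it via LU, and measure how far A times its computed inverse strays from the identity (try raising n and watch the error grow):

#include <math.h>
#include <stdio.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

int main(void)
{
    const size_t n = 10;                 /* increase this to stress the inverter */
    gsl_matrix *A = gsl_matrix_alloc(n, n), *LU = gsl_matrix_alloc(n, n);
    gsl_matrix *inv = gsl_matrix_alloc(n, n), *prod = gsl_matrix_alloc(n, n);
    gsl_permutation *p = gsl_permutation_alloc(n);
    int signum;
    size_t i, j;

    for (i = 0; i < n; i++)              /* Hilbert matrix: a_ij = 1/(i+j+1) */
        for (j = 0; j < n; j++)
            gsl_matrix_set(A, i, j, 1.0 / (double)(i + j + 1));

    gsl_matrix_memcpy(LU, A);
    gsl_linalg_LU_decomp(LU, p, &signum);
    gsl_linalg_LU_invert(LU, p, inv);

    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, A, inv, 0.0, prod);

    double maxerr = 0.0;                 /* max |(A*inv)_ij - delta_ij| */
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            double dlt = gsl_matrix_get(prod, i, j) - (i == j ? 1.0 : 0.0);
            if (fabs(dlt) > maxerr) maxerr = fabs(dlt);
        }
    printf("n = %zu, max |A*inv(A) - I| = %g\n", n, maxerr);

    gsl_matrix_free(A); gsl_matrix_free(LU);
    gsl_matrix_free(inv); gsl_matrix_free(prod);
    gsl_permutation_free(p);
    return 0;
}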
I have written a program that generates a number of N x N matrices whose determinants I would like to compute. Related to this, I have two questions:
What library is the best for doing so? I would like the fastest possible library since I have millions of such matrices.
Are there any specifics I should take care of when casting the result to an integer? All matrices that I will generate have integer determinants and I would like to make sure that no rounding error skews the correct value of the determinant.
Edit: if possible, provide an example of computing the determinant with the recommended library.
As for Matrix Libraries, it looks like that question is answered here:
Recommendations for a small c-based vector and matrix library
As for casting to an integer: if the computed determinant is not exactly an integer (floating-point arithmetic can turn a true determinant of 42 into something like 41.999999999), you shouldn't simply cast it; use round, floor, or ceil to convert it in an acceptable way. These will give you integral floating-point values, which you can then cast without fear of losing any information.
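A tiny sketch of that conversion (the det_to_int name and the 1e-6 tolerance are illustrative choices, not library features):

#include <math.h>
#include <stdbool.h>

/* Convert a floating-point determinant that should be an integer.
   Returns false when the value is suspiciously far from any integer. */
bool det_to_int(double det, long long *out)
{
    *out = llround(det);                     /* round to nearest, not a truncating cast */
    return fabs(det - (double)*out) < 1e-6;  /* illustrative tolerance */
}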
You can work wonders with matrices using BLAS and LAPACK. They are actually written in Fortran, and using them from C is a bit of a tweak, but all in all they can crunch numbers at tremendous speed.
http://www.netlib.org/lapack/lug/node11.html
You have GSL, but the choice really depends on your matrices. Are the matrices dense or sparse? Is N big or small? For small N you may find that coding the determinant yourself using Cramer's rule or Gaussian elimination is faster (see the sketch below), since most high-performance libraries focus on big matrices and their optimisations may introduce overhead on simple problems.
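For reference, a minimal self-contained Gaussian-elimination determinant (with partial pivoting, operating in place on a row-major array) might look like this:

#include <math.h>

/* Determinant of a small dense n x n matrix by Gaussian elimination
   with partial pivoting.  a is row-major and modified in place. */
double det_gauss(double *a, int n)
{
    double det = 1.0;
    for (int k = 0; k < n; k++) {
        int piv = k;                                  /* find the largest pivot */
        for (int i = k + 1; i < n; i++)
            if (fabs(a[i * n + k]) > fabs(a[piv * n + k])) piv = i;
        if (a[piv * n + k] == 0.0) return 0.0;        /* singular matrix */
        if (piv != k) {                               /* swap rows, flip sign */
            for (int j = 0; j < n; j++) {
                double t = a[k * n + j];
                a[k * n + j] = a[piv * n + j];
                a[piv * n + j] = t;
            }
            det = -det;
        }
        det *= a[k * n + k];                          /* product of pivots */
        for (int i = k + 1; i < n; i++) {             /* eliminate below pivot */
            double f = a[i * n + k] / a[k * n + k];
            for (int j = k; j < n; j++)
                a[i * n + j] -= f * a[k * n + j];
        }
    }
    return det;
}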
I would like to perform some fast operations in C using BLAS (there is no chance to choose another library; it is the only one available in my project).
I do the following operations:
Invert a square matrix,
Compute the matrix product A*B, where A is the inverse computed in the first step and B is a vector,
Sum two (very long) vectors.
I heard this kind of operation was possible with BLAS and was very fast, but I searched and found nothing (in actual C code, I mean) that could help me understand and apply it.
The BLAS library was written originally in Fortran. The interface to C is called CBLAS and has all functions prefixed by cblas_.
Unfortunately, with BLAS you can directly address only the last two points (see the sketch after the list):
sgemv (single precision) or dgemv (double precision) performs matrix-vector multiplication
saxpy (single precision) or daxpy (double precision) performs the scaled vector addition y := a*x + y
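A minimal sketch of both calls through the CBLAS interface (row-major storage here; the demo name and shapes are just illustrative):

#include <cblas.h>

void demo(int n, const double *A /* n x n */, const double *x, double *y, double *z)
{
    /* y := 1.0*A*x + 0.0*y  -- dgemv, matrix-vector product */
    cblas_dgemv(CblasRowMajor, CblasNoTrans, n, n, 1.0, A, n, x, 1, 0.0, y, 1);

    /* z := 1.0*y + z        -- daxpy, scaled vector addition */
    cblas_daxpy(n, 1.0, y, 1, z, 1);
}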
BLAS does not cover the more complex operation of inverting a matrix. For that there is the LAPACK library, which builds on BLAS and provides linear algebra operations. General matrix inversion in LAPACK is done with sgetri (single precision) or dgetri (double precision), but there are other inversion routines that handle specific cases such as symmetric matrices. If you are inverting the matrix only to multiply it by a vector later, then you are essentially solving a system of linear equations, and for that there are sgesv (single precision) and dgesv (double precision).
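For the solve route, a sketch calling dgesv through the Fortran interface (trailing underscore, pointer arguments, column-major data); the solve wrapper name is mine:

#include <stdlib.h>

/* LAPACK driver: solves A*x = b via LU with partial pivoting. */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
                   int *ipiv, double *b, int *ldb, int *info);

/* a (n x n) is overwritten by its LU factors; b holds the right-hand
   side on entry and the solution on exit. */
int solve(int n, double *a, double *b)
{
    int nrhs = 1, info;
    int *ipiv = malloc(n * sizeof *ipiv);
    dgesv_(&n, &nrhs, a, &n, ipiv, b, &n, &info);
    free(ipiv);
    return info;   /* 0 on success */
}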
You can invert a matrix using BLAS operations only by essentially (re-)implementing one of the LAPACK routines.
Refer to one of the many BLAS/LAPACK implementations for more details and examples, e.g. Intel MKL or ATLAS.
Do you really need to compute the full inverse? This is very rarely needed, very expensive, and error prone.
It is much more common to compute the inverse multiplied by a vector or matrix. This is very common, fairly cheap, and not error prone. You do not need to compute the inverse to multiply it by a vector.
If you want to compute Z = X^-1 * Y then you should look at the LAPACK driver routines. Usually Y in this case has only a few columns. If you really need to see all of X^-1, you can set Y to be the full identity.
Technically, you can do what you are asking, but normally it is more stable to do:
triangular factorization, eg LU factorization, or Cholesky factorization
use a triangular solver on the factorized matrix
BLAS is quite capable of doing this. Technically, it's in 'LAPACK', but most/many BLAS implementations include LAPACK, eg OpenBLAS and Intel's MKL both do.
the function to do the LU factorization is dgetrf ("(d)ouble, (ge)neral matrix, (tr)iangular (f)actorization") http://www.netlib.org/lapack/double/dgetrf.f
then the solver is dtrsm ("(d)ouble (tr)iangular (s)olve (m)atrix") http://www.netlib.org/blas/dtrsm.f
To call these from C, note that:
the function names should be in lowercase, with _ postfixed, ie dgetrf_ and dtrsm_
all the parameters should be pointers, eg int *m and double *a
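Under those conventions, a sketch of the full factor-then-solve sequence might look like the following. It applies the row interchanges from dgetrf by hand and then issues the two dtrsm triangular solves (unit lower L, then upper U); in practice the LAPACK helper dgetrs bundles exactly these steps:

#include <stdlib.h>

/* Fortran symbols: lowercase names with a trailing underscore, and every
   parameter passed as a pointer, as noted above. */
extern void dgetrf_(int *m, int *n, double *a, int *lda, int *ipiv, int *info);
extern void dtrsm_(const char *side, const char *uplo, const char *transa,
                   const char *diag, int *m, int *n, double *alpha,
                   double *a, int *lda, double *b, int *ldb);

/* Solve A*x = b via LU plus two triangular solves.  a is n x n,
   column-major, overwritten by its L and U factors; b is overwritten by x. */
int lu_solve(int n, double *a, double *b)
{
    int *ipiv = malloc(n * sizeof *ipiv);
    int info, one = 1;
    double alpha = 1.0;

    dgetrf_(&n, &n, a, &n, ipiv, &info);     /* A = P*L*U */
    if (info != 0) { free(ipiv); return info; }

    for (int i = 0; i < n; i++) {            /* apply P to b (ipiv is 1-based) */
        int j = ipiv[i] - 1;
        double t = b[i]; b[i] = b[j]; b[j] = t;
    }
    /* L*y = P*b  (L is unit lower triangular, stored in a) */
    dtrsm_("L", "L", "N", "U", &n, &one, &alpha, a, &n, b, &n);
    /* U*x = y    (U is upper triangular, stored in a) */
    dtrsm_("L", "U", "N", "N", &n, &one, &alpha, a, &n, b, &n);

    free(ipiv);
    return 0;
}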