I need to solve a linear equation (Ax = b) with a large sparse A and with different right-hand sides b, multiple times. Thus an LU factorisation (or Cholesky if A is symmetric) is highly preferable.
I'm using the armadillo library and I've heard that one can use the function:
spsolve(x, A, b, "superlu");
in order to solve such a system. I'm not very concerned about retrieving the L and U matrices. However, it is of prime importance that L and U are not recomputed every time I call spsolve.
Does spsolve(x, A, b, "superlu") store the LU decomposition and if not, is there a way to retrieve said matrices?
The fact is that armadillo is essentially a wrapper around software developed by other teams; this is particularly true of the superlu sparse solver. The feature you are asking for (solving a series of systems of equations with one and the same matrix but different right-hand sides) may not exist in armadillo at all. You should probably use a sparse linear solver directly (not necessarily superlu) that has that feature built in. If your system is so large that a factorisation-based solver cannot cope with it, an iterative solver may do, and there is another option in that case: since modern CPUs are multi-core, several independent solution processes can be run in parallel. One such iterative solver is described in the following blog (you may ask questions and/or participate in the discussion there): http://comecau.blogspot.com/2018_09_05_archive.html
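For illustration, here is a minimal sketch of that factor-once / solve-many pattern using UMFPACK from SuiteSparse (just one possible direct solver with this feature; this is not armadillo's API, and the CSC input, header path and error checking are simplified):

/* Sketch (not armadillo): factor A once with UMFPACK (SuiteSparse) and
   reuse the numeric factorisation for several right-hand sides.
   A is given in compressed sparse column form (Ap, Ai, Ax). */
#include <umfpack.h>

void solve_many(int n, const int *Ap, const int *Ai, const double *Ax,
                double **rhs, double **sol, int nrhs)
{
    void *Symbolic, *Numeric;

    /* Analyse the pattern and factorise the values once. */
    umfpack_di_symbolic(n, n, Ap, Ai, Ax, &Symbolic, NULL, NULL);
    umfpack_di_numeric(Ap, Ai, Ax, Symbolic, &Numeric, NULL, NULL);
    umfpack_di_free_symbolic(&Symbolic);

    /* Each solve reuses the factorisation; it is only a pair of triangular sweeps. */
    for (int k = 0; k < nrhs; ++k)
        umfpack_di_solve(UMFPACK_A, Ap, Ai, Ax, sol[k], rhs[k], Numeric, NULL, NULL);

    umfpack_di_free_numeric(&Numeric);
}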
I'm currently working on transforming logical queries for database systems from DNF into CNF, focused on queries which have a form similar to
(a and b and c and d and e1) or (a and b and c and d and e2) or (a and b and c and d and e3),
into
a and b and c and d and (e1 or e2 or e3)
I expected there to already be an algorithm for that. I am currently working with z3, and the only way I found to produce a CNF was Tseitin or a similar transformation, which introduces extra variables; these are a problem because they can't be used properly in queries.
Using z3 for this purpose is probably not the best option. While there's an internal converter, it's tuned to be used in conjunction with the rest of the solver. If you don't have a need to find a satisfying model, you shouldn't be using a SAT/SMT solver.
Conversion from DNF to CNF can produce exponentially large formulas, which is why most practical applications use the Tseytin transformation to avoid this blow-up, at the cost of the extra variables. If you don't want that, you can use the Quine-McCluskey algorithm, which has worst-case exponential behavior; but if your formulas are small enough to start with, that may not matter.
You can code this algorithm yourself, or if you're open to using SymPy, there's an existing implementation (with possible simplifications). See https://docs.sympy.org/latest/modules/logic.html?highlight=to_cnf#sympy.logic.boolalg.to_cnf
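As a side note (this is neither Quine-McCluskey nor SymPy): if your inputs always have the exact shape shown in the question, where every disjunct shares a common set of conjuncts, a hand-rolled factoring is enough. The following C sketch uses hard-coded, purely illustrative terms and literal names; it prints the shared conjuncts and the residual disjunction:

#include <stdio.h>
#include <string.h>

/* Illustrative input: each DNF term is a set of literal names, e.g.
   (a&b&c&d&e1) | (a&b&c&d&e2) | (a&b&c&d&e3). */
#define NTERMS 3
#define NLITS  5

static const char *terms[NTERMS][NLITS] = {
    {"a", "b", "c", "d", "e1"},
    {"a", "b", "c", "d", "e2"},
    {"a", "b", "c", "d", "e3"},
};

/* Is literal lit present in every term? */
static int common_to_all(const char *lit)
{
    for (int t = 0; t < NTERMS; ++t) {
        int found = 0;
        for (int i = 0; i < NLITS; ++i)
            if (strcmp(terms[t][i], lit) == 0) found = 1;
        if (!found) return 0;
    }
    return 1;
}

int main(void)
{
    /* Literals shared by every term become top-level conjuncts... */
    printf("common conjuncts: ");
    for (int i = 0; i < NLITS; ++i)
        if (common_to_all(terms[0][i])) printf("%s ", terms[0][i]);

    /* ...and the leftovers of each term form one big disjunction. */
    printf("\nresidual disjunction: ");
    for (int t = 0; t < NTERMS; ++t)
        for (int i = 0; i < NLITS; ++i)
            if (!common_to_all(terms[t][i])) printf("%s ", terms[t][i]);
    printf("\n");
    return 0;
}

The printed result corresponds to a and b and c and d and (e1 or e2 or e3).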
I am looking to find the fastest code/algorithm/package for obtaining the singular value decomposition (SVD) of a real, square bidiagonal matrix. The matrices I am working with are fairly small - typically somewhere around 15x15 in size. However, I am performing this SVD thousands (maybe millions) of times, so the computational cost becomes quite large.
I am writing all of my code in C. I assume the fastest codes for performing the SVD will probably come from LAPACK. I have been looking into LAPACK and it seems like I have quite a few different options for performing the SVD of real, bidiagonal matrices:
dbdsqr - uses a zero-shift QR algorithm to get the SVD of a bidiagonal matrix
dbdsdc - uses a divide and conquer algorithm to get the SVD of a bidiagonal matrix
dgesvd - not sure exactly how this one works, but it can handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dgesdd - not quite sure how this one works either, but it can also handle any arbitrary matrix, so I assume I am better off using dbdsqr or dbdsdc
dstemr - estimates eigenvectors/eigenvalues for a tridiagonal matrix; I can use it to estimate the left singular vectors/values by finding the eigenvectors/values of A*A'; I can then estimate the right singular vectors/values by finding the eigenvectors/values of A'*A
dstemr - perhaps there is an even faster way to use dstemr to get the SVD... please enlighten me if you know of a way
I have no idea which of these methods is fastest. Is there an even faster way to get the SVD of a real, bidiagonal matrix that I haven't listed above?
Ideally I am looking for a small example C code to demonstrate the fastest possible SVD for a relatively small real, bidiagonal matrix.
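For what it's worth, a minimal, untested sketch of calling dbdsdc (the divide-and-conquer bidiagonal SVD mentioned above) from C might look like this. The exact prototype, integer width and trailing underscore depend on your LAPACK build (LAPACKE gives a cleaner C interface), and the workspace sizes follow the documented requirements for COMPQ = 'I':

/* Sketch: SVD of an n x n upper-bidiagonal matrix with LAPACK's dbdsdc.
   d (length n) holds the diagonal, e (length n-1) the superdiagonal;
   u and vt must be caller-allocated n*n arrays, stored column-major. */
#include <stdlib.h>

extern void dbdsdc_(const char *uplo, const char *compq, const int *n,
                    double *d, double *e, double *u, const int *ldu,
                    double *vt, const int *ldvt, double *q, int *iq,
                    double *work, int *iwork, int *info);

int bidiagonal_svd(int n, double *d, double *e, double *u, double *vt)
{
    int info = 0, ldu = n, ldvt = n;
    double *work = malloc((3 * n * n + 4 * n) * sizeof *work);
    int *iwork = malloc(8 * n * sizeof *iwork);

    /* 'U' = upper bidiagonal, 'I' = compute singular vectors explicitly.
       On exit d is overwritten with the singular values; q and iq are
       not referenced for COMPQ = 'I', so NULL is fine. */
    dbdsdc_("U", "I", &n, d, e, u, &ldu, vt, &ldvt,
            NULL, NULL, work, iwork, &info);

    free(work);
    free(iwork);
    return info;   /* 0 on success */
}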
I have written a program that generates a few N x N matrices for which I would like to compute the determinant. Related to this, I have two questions:
What library is the best for doing so? I would like the fastest possible library since I have millions of such matrices.
Are there any specifics I should take care of when casting the result to an integer? All matrices that I will generate have integer determinants and I would like to make sure that no rounding error skews the correct value of the determinant.
Edit: if possible, provide an example of computing the determinant with the recommended library.
As for Matrix Libraries, it looks like that question is answered here:
Recommendations for a small c-based vector and matrix library
As for casting to an integer: since the computed determinant may not be exactly an integer, you shouldn't simply cast it; use round, floor, or ceil to convert it in a controlled way. These will give you integral floating-point values, which you can then cast without fear of losing any information.
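For example, a small sketch of the conversion with a sanity check (the tolerance is arbitrary):

#include <math.h>
#include <stdio.h>

/* det is the floating-point determinant returned by the library. */
long long to_integer_det(double det)
{
    double rounded = round(det);
    /* Warn if rounding had to move the value by more than a small tolerance. */
    if (fabs(det - rounded) > 1e-6)
        fprintf(stderr, "warning: determinant %g is far from an integer\n", det);
    return (long long)rounded;
}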
You can work wonders with matrices using BLAS and LAPACK. They are actually written in Fortran, and using them from C takes a bit of extra work, but all in all they can crunch numbers at tremendous speed.
http://www.netlib.org/lapack/lug/node11.html
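As an illustration (a sketch only, assuming the conventional Fortran dgetrf_ symbol, 32-bit integers, and column-major storage), the determinant can be read off an LU factorisation:

/* Determinant of an n x n matrix via LAPACK's LU factorisation (dgetrf).
   a is column-major and is overwritten by the factors. */
#include <stdlib.h>

extern void dgetrf_(const int *m, const int *n, double *a,
                    const int *lda, int *ipiv, int *info);

double determinant(int n, double *a)
{
    int info = 0;
    int *ipiv = malloc(n * sizeof *ipiv);

    dgetrf_(&n, &n, a, &n, ipiv, &info);
    if (info != 0) { free(ipiv); return 0.0; }   /* singular or argument error */

    double det = 1.0;
    for (int i = 0; i < n; ++i) {
        det *= a[i + i * n];               /* product of U's diagonal */
        if (ipiv[i] != i + 1) det = -det;  /* each row swap flips the sign */
    }
    free(ipiv);
    return det;
}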
You have GSL, but the choice really depends on your matrices. Are they dense or sparse? Is N big or small? For small N you may find that coding the determinant yourself, using Cramer's rule or Gaussian elimination, is faster, since most high-performance libraries focus on big matrices and their optimisations may introduce overhead on simple problems.
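For comparison, a hand-rolled Gaussian-elimination determinant for small N is only a few lines. This sketch uses partial pivoting, modifies the matrix in place, and does nothing special about overflow:

#include <math.h>

/* Determinant of an n x n matrix stored row-major in a flat array. */
double det_gauss(int n, double *a)
{
    double det = 1.0;
    for (int k = 0; k < n; ++k) {
        /* Pick the largest pivot in column k to keep the elimination stable. */
        int p = k;
        for (int i = k + 1; i < n; ++i)
            if (fabs(a[i * n + k]) > fabs(a[p * n + k])) p = i;
        if (a[p * n + k] == 0.0) return 0.0;         /* singular */
        if (p != k) {                                /* swap rows k and p */
            for (int j = 0; j < n; ++j) {
                double t = a[k * n + j];
                a[k * n + j] = a[p * n + j];
                a[p * n + j] = t;
            }
            det = -det;                              /* a swap flips the sign */
        }
        det *= a[k * n + k];
        for (int i = k + 1; i < n; ++i) {            /* eliminate below the pivot */
            double f = a[i * n + k] / a[k * n + k];
            for (int j = k; j < n; ++j)
                a[i * n + j] -= f * a[k * n + j];
        }
    }
    return det;
}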
I need to solve a large, sparse system of linear equations from a program written in D. Ideally I'd like a library with a D-style interface, but I doubt one exists. However, D can directly access C APIs. Therefore, please suggest some libraries that solve large, sparse systems of linear equations with the following characteristics:
Exposes a C API.
Free/open source. Preferably non-copyleft, too, but this is not a hard requirement.
Well-tested and debugged. Easy to set up and use. Not written by academics just to get a paper published on their method and then left completely unmaintained.
The classical library for sparse problems is SuiteSparse. Packages for it exist on many systems. MATLAB uses it internally.
My bad, I mixed up LAPACK, which I used a long time ago, and ARPACK, which I used even longer ago.
Here is a link to ARPACK (http://www.caam.rice.edu/~kristyn/parpack_home.html):
The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices.
And here is a link to a comparison of linear algebra libraries:
http://www.netlib.org/utk/people/JackDongarra/la-sw.html
There you can find SparseLib++, the ARPACK mentioned above, and many more libraries, presented in a comparison table.
There is a dedicated package called CSPARSE, and it's written in C. It seems that the implementation is based on Davis (2006), referenced below.
https://people.sc.fsu.edu/~jburkardt/c_src/csparse/csparse.html
Davis, T. A. (2006). Direct methods for sparse linear systems. Society for Industrial and Applied Mathematics.
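For a taste of the API, here is a small sketch of building a matrix in triplet form and solving Ax = b with CSPARSE's sparse LU; error checks are omitted, and the exact integer type (int vs csi) depends on the CSparse version:

/* Sketch: assemble a tiny sparse matrix and solve A*x = b with cs_lusol.
   The solution overwrites b. */
#include "cs.h"

int solve_example(void)
{
    /* 2x2 system:  [4 1; 1 3] * x = [1; 2] */
    cs *T = cs_spalloc(2, 2, 4, 1, 1);   /* triplet form, values allocated */
    cs_entry(T, 0, 0, 4.0);
    cs_entry(T, 0, 1, 1.0);
    cs_entry(T, 1, 0, 1.0);
    cs_entry(T, 1, 1, 3.0);

    cs *A = cs_compress(T);              /* convert to compressed-column form */
    cs_spfree(T);

    double b[2] = { 1.0, 2.0 };
    int ok = cs_lusol(1, A, b, 1.0);     /* order=1: AMD ordering; tol=1: partial pivoting */

    cs_spfree(A);
    return ok;                           /* 1 on success; b now holds x */
}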
I am trying to find a program in C code that will allow me to compute an eigenvalue (spectral) decomposition for a square matrix. I am specifically trying to find code where the highest eigenvalue (and therefore its associated eigenvector) is located in the first column.
The reason I need the output to be in this order is because I am trying to compute eigenvector centrality and therefore I only really need to calculate the eigenvector associated with the highest eigenvalue. Thanks in advance!
In any case, I would recommend using a dedicated linear algebra package like LAPACK (Fortran, but it can be called from C) or CLAPACK. Both are free and offer routines for almost any eigenvalue problem. If the matrix is large, it might be preferable to exploit its sparseness, e.g. by using ARPACK. All of these libraries tend to sort the eigenvectors according to the eigenvalues when they can (i.e. when the eigenvalues are real or purely imaginary).
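If the matrix is symmetric (e.g. the adjacency matrix of an undirected graph, which is the usual case for eigenvector centrality), a minimal sketch with LAPACK's dsyev could look like the following; note that it returns the eigenvalues in ascending order, so the dominant eigenpair is the last one rather than the first:

/* Sketch: dominant eigenpair of a symmetric n x n matrix with LAPACK's dsyev.
   a is column-major and is overwritten with the eigenvectors (one per column);
   the eigenvalues come back in ascending order. */
#include <stdlib.h>

extern void dsyev_(const char *jobz, const char *uplo, const int *n,
                   double *a, const int *lda, double *w,
                   double *work, const int *lwork, int *info);

int dominant_eigenpair(int n, double *a, double *eigval, double *eigvec)
{
    int info = 0, lwork = 3 * n;                 /* >= 3*n - 1 */
    double *w = malloc(n * sizeof *w);
    double *work = malloc(lwork * sizeof *work);

    dsyev_("V", "U", &n, a, &n, w, work, &lwork, &info);

    if (info == 0) {
        *eigval = w[n - 1];                      /* largest eigenvalue is last */
        for (int i = 0; i < n; ++i)              /* its eigenvector: last column */
            eigvec[i] = a[i + (n - 1) * n];
    }
    free(w);
    free(work);
    return info;
}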
See the book "Numerical Recipes in C".
And the #1 Google hit (search: eigenvalue decomposition code C#) does not help?
http://crsouza.blogspot.com/2010/06/generalized-eigenvalue-decomposition-in.html