I've used x = B/A (mrdivide) in MATLAB to find x in the equation xA = B. I am trying to achieve the same thing outside the MATLAB environment using libfixmatrix, a C-based fixed-point matrix library for microcontrollers.
How would I proceed with using the QR decomposition and solve functions of libfixmatrix to solve xA = B?
QR decomposition followed by a solve handles systems of the form Ax = B, but in my scenario the unknown is on the left: xA = B.
It was mentioned in the readme of the repository that:
Libfixmatrix is suited well for tasks involving small matrices (often
less than 10x10)
Is it still efficient to use libfixmatrix for, say, an 80×80 matrix?
just a silly suggestion:
x*A = B
x*A*inv(A) = B*inv(A)
x = B*inv(A)
So you just need matrix inversion and matrix multiplication, which are basic operations; the sub-determinant (adjugate) approach works for the inverse. Note that this requires A to be square and non-singular. If A is not square you cannot simply resize it; a least-squares or pseudoinverse method is needed instead.
I am ultimately trying to create a pseudoinverse function using the SVD methodology. I want to first create an SVD function that will give me the U, E and V matrices, which I will later plug into the pseudoinverse formula A+ = V * E+ * U^T (where E+ inverts the nonzero singular values).
I am not sure how to code these matrices. I understand how to do this by hand through eigenvalues and eigenvectors, but I am not sure how to translate that to C code.
I have already created functions for transpose and matrix multiplication. Now it's a matter of finding these three matrices.
A similar question has been asked before, with a link to a source file.
The recommendation there is to use OpenCV's built-in function; the point is to reuse a ready-made routine from the OpenCV library rather than writing your own.
I'm trying to solve a large sparse 30,000x1,000 system using a LAPACK solver in the Trilinos package. My goal is to minimise computation time; however, this LAPACK solver only takes square matrices, so I'm manually converting my non-square matrix A into a square one by multiplying by its transpose, essentially solving the system like so:
(A^T * A) x = A^T * b
What is making my solve slow are the matrix multiplications by A^T.
Any ideas on how to fix this?
You can directly compute the QR-decomposition (wikipedia-link) and use that to solve your least squares problem.
This way you don't have to compute the matrix product A^T A.
There are multiple LAPACK routines for the computation of QR-decompositions, e.g. gels is a driver routine for general matrices (intel-link).
I must say that I don't know if Trilinos supplies that routine but it is a standard LAPACK routine.
I need to solve a linear equation (Ax = b), with a large sparse A and with different b vectors, multiple times. Thus an LU factorisation (or Cholesky, if A is symmetric positive definite) is highly preferable.
I'm using the Armadillo library and I've heard that one can use the function:
spsolve(x, A, b, "superlu");
in order to solve such a system. I'm not very concerned about retrieving the L and U matrices. However, it is of primary importance that L and U aren't recomputed every time I call spsolve.
Does spsolve(x, A, b, "superlu") store the LU decomposition, and if not, is there a way to retrieve said matrices?
The fact is that Armadillo is, essentially, a wrapper for software developed by other teams; this is particularly true of the SuperLU sparse solver. The feature you are asking for (solving a series of systems of equations with one and the same matrix but different right-hand sides) may not exist in Armadillo at all. You should probably use a sparse linear solver directly (not necessarily SuperLU) that has this feature built in. If your system is so large that a factorisation-based solver cannot cope with it, an iterative solver may do, and there is an option worth considering in that case: since modern CPUs are multi-core, several independent solution processes can be run in parallel. One such iterative solver is described in the following blog (you may ask questions and/or participate in the discussion there): http://comecau.blogspot.com/2018_09_05_archive.html
Hi, I have been trying to write code to find the eigenvalues of an n×n matrix, but I cannot work out what the algorithm should be.
Step 1: Solve det(A - λI) = 0
What should the algorithm be for a general matrix, for finding λ?
I have already written code for finding the determinant of a matrix. Can this be used in the algorithm?
Please help. It will be really appreciated.
[Assuming your matrix is Hermitian (simply put, symmetric in the real case), so the eigenvalues are real numbers.]
In computation, you don't solve for the eigenvalues and eigenvectors using the determinant. It's too slow and numerically unstable.
What you do is apply a transformation (the Householder reduction) to reduce your matrix to tri-diagonal form.
After which, you apply what is known as the QL algorithm on that.
As a starting point, look at tred2 and tqli from numerical recipes (www.nr.com). These are the algorithms I've just described.
Note that these routines also recover candidate eigenvectors.
I am trying to find a program in C code that will allow me to compute an eigenvalue (spectral) decomposition for a square matrix. I am specifically trying to find code where the highest eigenvalue (and therefore its associated eigenvector) is located in the first column.
The reason I need the output in this order is that I am computing eigenvector centrality, and therefore I only really need the eigenvector associated with the highest eigenvalue. Thanks in advance!
In any case I would recommend using a dedicated linear algebra package like LAPACK (Fortran, but callable from C) or CLAPACK. Both are free and offer routines for almost any eigenvalue problem. If the matrix is large, it might be preferable to exploit its sparsity, e.g. by using ARPACK. All of these libraries tend to sort the eigenvectors according to the eigenvalues when they can (real or purely imaginary eigenvalues).
See the book "Numerical Recipes in C".
And does the #1 Google hit (search: eigenvalue decomposition code C#),
http://crsouza.blogspot.com/2010/06/generalized-eigenvalue-decomposition-in.html
not help?