Arithmetic operations with arrays of unequal size in MATLAB

I am attempting to calculate a 2D polar plot of the power pattern of an Archimedean spiral array of isotropic antennas in MATLAB. This requires theta and phi arrays of different sizes.
The code I am using is as follows:
clear all
close all
clc
r=27000;
phi=0:.02:2*pi;
theta=0:.02:pi/2;
px=r.*cos(phi);
py=r.*sin(phi);
wL=1;
k=(2*pi)/wL;
d=px.*sin(theta).*cos(phi)+py.*sin(theta).*sin(phi);
phase=0;
psi=k.*d+phase;
N=27;
PowPatt=(sin(N.*(psi./2))./(N.*sin(psi./2))).^2;
polar(psi,PowPatt)
Naturally I get the following error:
??? Error using ==> times
Matrix dimensions must agree.
Error in ==> powerpattern at 11
d=px.*sin(theta).*cos(phi)+py.*sin(theta).*sin(phi);
Is there any way to change my code so that the arithmetic operations work on the theta and phi arrays?
Thank you.
Patrick

The code itself doesn't work because theta and phi are different sizes. They need to be the same size to do the element-wise multiply ".*" operation. Or, they need to be compatible sizes to do a matrix-wise multiply "*". You don't have either, so your problem is not with the code but with your interpretation of the antenna array equations. We can't help you there.
I'm guessing that the array equations intend for you to evaluate the expression across all combinations of phi and theta, so that you can then sum over one or more of those dimensions. Therefore, you're likely going to have to introduce a grid of values, a for loop, and/or a sum() operation. Since I don't know your antenna array equations, I can't help further. You'll have to go back and look at the physics. Once you understand the array antenna equations better, you'll be able to code them better.
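If the "all combinations" interpretation is the right one, a meshgrid-based sketch would look like the following. This is only a guess at the intended physics, and the surf() call at the end is just a stand-in for whatever plot you actually want.

% Sketch: evaluate the expression on a full (theta, phi) grid so every
% element-wise product sees matching dimensions.
r     = 27000;
phi   = 0:.02:2*pi;
theta = 0:.02:pi/2;
[PHI, THETA] = meshgrid(phi, theta);   % both are length(theta)-by-length(phi)
px = r.*cos(PHI);
py = r.*sin(PHI);
wL = 1;
k  = (2*pi)/wL;
d  = px.*sin(THETA).*cos(PHI) + py.*sin(THETA).*sin(PHI);
psi = k.*d;                            % phase = 0
N  = 27;
% Note: psi = 0 gives 0/0 = NaN here; the limiting value there is 1.
PowPatt = (sin(N.*(psi./2))./(N.*sin(psi./2))).^2;
surf(PHI, THETA, PowPatt, 'EdgeColor', 'none')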

Related

counterintuitive speed difference between LM and shift-invert modes in scipy.sparse.linalg.eigsh?

I'm trying to find the several smallest (as in most negative, not lowest magnitude) eigenvalues of a list of sparse Hermitian matrices in Python using scipy.sparse.linalg.eigsh. The matrices are ~1000x1000, and the list length is ~500-2000. In addition, I know upper and lower bounds on the eigenvalues of all the matrices -- call them eig_UB and eig_LB, respectively.
I've tried two methods:
Using shift-invert mode with sigma=eig_LB.
Subtracting eig_UB from the diagonal of each matrix (thus shifting the smallest eigenvalues to be the largest magnitude eigenvalues), diagonalizing the resulting matrices with default eigsh settings (no shift-invert mode and using which='LM'), and then adding eig_UB to the resulting eigenvalues.
Both methods work and their results agree, but method 1 is around 2-2.5x faster. This seems counterintuitive, since (at least as I understand the eigsh documentation) shift-invert mode subtracts sigma from the diagonal, inverts the matrix, and then finds eigenvalues, whereas default mode directly finds the largest magnitude eigenvalues. Does anyone know what could explain the difference in performance?
One other piece of information: I've checked, and the matrices that result from shift-inverting (that is, (M-sigma*identity)^(-1) if M is the original matrix) are no longer sparse, which seems like it should make finding their eigenvalues take even longer.
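For reference, a minimal self-contained version of the two approaches described above might look like this; the size, density, number of eigenvalues, and spectral bounds are placeholders, not the actual data.

# Method comparison sketch with made-up data.
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 1000
A = sp.random(n, n, density=1e-3, format='csr')
M = (A + A.T) * 0.5                      # symmetric (real Hermitian) test matrix
eig_LB, eig_UB = -10.0, 10.0             # assumed known spectral bounds
k = 5

# Method 1: shift-invert around the lower bound; in shift-invert mode,
# which='LM' returns the eigenvalues closest to sigma, i.e. the most negative.
vals_si = eigsh(M, k=k, sigma=eig_LB, which='LM', return_eigenvectors=False)

# Method 2: shift the diagonal so the smallest eigenvalues become the largest
# in magnitude, run plain Lanczos, then shift back.
M_shift = M - eig_UB * sp.identity(n)
vals_lm = eigsh(M_shift, k=k, which='LM', return_eigenvectors=False) + eig_UB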
This is probably resolved. As pointed out in https://arxiv.org/pdf/1504.06768.pdf, you don't actually need to invert the shifted sparse matrix and then repeatedly apply it in some Lanczos-type method -- you just need to repeatedly solve the linear system (M-sigma*identity)*v(n+1)=v(n) to generate a sequence of vectors {v(n)}. That system can be solved quickly for a sparse matrix once an LU decomposition has been computed.
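A tiny sketch of that point using SciPy's sparse LU: the shifted matrix is factorized once and then only back-substitutions are performed, so no dense inverse is ever formed. The matrix and shift below are again placeholders.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
A = sp.random(n, n, density=1e-3, format='csr')
M = ((A + A.T) * 0.5).tocsc()
sigma = -10.0                                        # assumed lower bound

lu = splu(M - sigma * sp.identity(n, format='csc'))  # one sparse LU factorization
v = np.random.rand(n)
for _ in range(20):                                  # Lanczos-style iterations only
    v = lu.solve(v)                                  # need solves, never an inverse
    v /= np.linalg.norm(v)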

How to calculate the size of a vector of this form?

In Matlab, it's easy to define a vector this way:
x = a:b:c, where a,b,c are real numbers, a < c and b <= c - a.
My problem is that I'm having trouble working out a formula for the number of elements in x.
I know that the problem can be solved with the size command, but I need a formula because I'm writing a version of a Matlab program (which uses vectors defined this way) in another language.
Thanks in advance for any help you can provide.
Best regards,
VĂ­ctor
On a mathematical level you could argue that all of these expressions return the same:
size(a:b:c)
size(a/b:c/b)
size(0:c/b-a/b)
That range contains the integers from 0 up to the last term, so the element count is:
floor((c-a)/b+1)
There is one problem: floating point precision. The colon operator does repeated summing, and I don't know of any way to predict or exactly reproduce that.
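Here is a quick check (my own sketch, not part of the original answer) that the closed-form count matches numel(a:b:c) on a few arbitrary triples; the small tolerance added before flooring is an ad-hoc guard against the floating point issue mentioned above.

% Compare the closed-form count against MATLAB's own colon operator.
tests = [0 0.1 1; 1 0.3 2; 0 0.25 0.999];
for t = 1:size(tests, 1)
    a = tests(t, 1); b = tests(t, 2); c = tests(t, 3);
    n_formula = floor((c - a)/b + 1e-9) + 1;   % tolerance 1e-9 is an assumption
    n_actual  = numel(a:b:c);
    fprintf('a=%g b=%g c=%g  formula=%d  numel=%d\n', a, b, c, n_formula, n_actual);
end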

Inverting a matrix of any size

I'm using the GNU Scientific Library to implement a calculator which needs to be able to raise matrices to powers. Unfortunately, there doesn't seem to be such a function available in the GSL even for multiplying matrices (the gsl_matrix_mul_elements() function only multiplies element by element), and by extension, no raising to powers.
I want to be able to raise to negative powers, which requires the ability to take the inverse. From my searching around, I have not been able to find any sound code for calculating the inverses of arbitrary matrices (only ones with defined dimensions), and the guides I found for doing it by hand use clever "on-paper tricks" that don't really work in code.
Is there a common algorithm that can be used to compute the inverse of a matrix of any size (failing of course when the inverse cannot be calculated)?
As mentioned in the comments, powers of a square matrix can be computed for integer exponents. The n-th power of A is A^n = A*A*...*A, where A appears n times. If B is the inverse of A, then the (-n)-th power of A is A^(-n) = (A^(-1))^n = B^n = B*B*...*B.
So in order to compute the n-th power of A, I can suggest the following outline using GSL:
gsl_matrix_set_identity(An);                  // initialize An as the identity I
for (i = 0; i < n; i++) gsl_blas_dgemm(...);  // repeatedly multiply by A
For computing the matrix B you can use the following routines:
gsl_linalg_LU_decomp();   // compute the LU decomposition of A
gsl_linalg_LU_invert();   // compute the inverse from that decomposition
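A fuller, compilable version of that outline might look like the sketch below; the function name and structure are mine, not GSL's, and negative exponents go through the LU-based inverse first.

/* Sketch: result = A^n for a square gsl_matrix A, with n possibly negative. */
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_permutation.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

void matrix_int_power(const gsl_matrix *A, int n, gsl_matrix *result)
{
    const size_t dim = A->size1;
    gsl_matrix *base = gsl_matrix_alloc(dim, dim);
    gsl_matrix *tmp  = gsl_matrix_alloc(dim, dim);

    if (n >= 0) {
        gsl_matrix_memcpy(base, A);
    } else {
        /* Negative exponent: base = inv(A) via LU decomposition, then use |n|. */
        gsl_matrix *LU = gsl_matrix_alloc(dim, dim);
        gsl_permutation *p = gsl_permutation_alloc(dim);
        int signum;
        gsl_matrix_memcpy(LU, A);
        gsl_linalg_LU_decomp(LU, p, &signum);
        gsl_linalg_LU_invert(LU, p, base);
        gsl_matrix_free(LU);
        gsl_permutation_free(p);
        n = -n;
    }

    gsl_matrix_set_identity(result);             /* A^0 = I */
    for (int i = 0; i < n; i++) {
        /* tmp = result * base, then copy back into result. */
        gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, result, base, 0.0, tmp);
        gsl_matrix_memcpy(result, tmp);
    }

    gsl_matrix_free(base);
    gsl_matrix_free(tmp);
}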
I recommend reading up about the SVD, which the GSL implements. If your matrix is invertible, then computing the inverse via the SVD is a reasonable, though somewhat slow, way to go. If your matrix is not invertible, the SVD allows you to compute the next best thing, the generalised inverse.
In matrix calculations the errors inherent in floating point arithmetic can accumulate alarmingly. One example is the Hilbert matrix, an innocent looking thing with a remarkably large condition number, even for quite moderate dimension. A good test of an inversion routine is to see how big a Hilbert matrix it can invert, and how close the computed inverse times the matrix is to the identity.
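As a concrete version of that test (my own sketch; the size is arbitrary, and LU inversion stands in for whatever routine you end up using), build the Hilbert matrix, invert it, and look at how far H*inv(H) strays from the identity as n grows:

/* Sketch: measure max |H*inv(H) - I| for an n x n Hilbert matrix. */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_permutation.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

int main(void)
{
    const size_t n = 10;                         /* try increasing this */
    gsl_matrix *H   = gsl_matrix_alloc(n, n);
    gsl_matrix *LU  = gsl_matrix_alloc(n, n);
    gsl_matrix *inv = gsl_matrix_alloc(n, n);
    gsl_matrix *P   = gsl_matrix_alloc(n, n);
    gsl_permutation *perm = gsl_permutation_alloc(n);
    int signum;

    for (size_t i = 0; i < n; i++)               /* H(i,j) = 1/(i+j+1), 0-based */
        for (size_t j = 0; j < n; j++)
            gsl_matrix_set(H, i, j, 1.0 / (double)(i + j + 1));

    gsl_matrix_memcpy(LU, H);
    gsl_linalg_LU_decomp(LU, perm, &signum);
    gsl_linalg_LU_invert(LU, perm, inv);

    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, H, inv, 0.0, P);  /* P = H*inv(H) */

    double worst = 0.0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double err = fabs(gsl_matrix_get(P, i, j) - ((i == j) ? 1.0 : 0.0));
            if (err > worst) worst = err;
        }
    printf("n = %zu, max |H*inv(H) - I| = %g\n", n, worst);

    gsl_matrix_free(H); gsl_matrix_free(LU); gsl_matrix_free(inv); gsl_matrix_free(P);
    gsl_permutation_free(perm);
    return 0;
}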

Code for finding eigenvalues

Hi, I have been trying to write code to find the eigenvalues of an n*n matrix, but I'm not able to work out what the algorithm should be.
Step 1: Solve det(A - lambda*I) = 0
What should the algorithm be for finding lambda for a general matrix?
I have written code for finding the determinant of a matrix; can this be used in the algorithm?
Please help, it will be really appreciated.
[Assuming your matrix is Hermitian (simply put, symmetric), so the eigenvalues are real numbers.]
In computation, you don't solve for the eigenvectors and eigenvalues using the determinant. It's too slow and numerically unstable.
What you do is apply a transformation (the Householder reduction) to reduce your matrix to tri-diagonal form.
After which, you apply what is known as the QL algorithm on that.
As a starting point, look at tred2 and tqli from Numerical Recipes (www.nr.com). These are the algorithms I've just described.
Note that these routines also recover candidate eigenvectors.
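If you just want working eigenvalues rather than re-implementing tred2/tqli yourself, note that libraries already package this reduce-then-iterate approach. For example, GSL's symmetric eigensolver, shown below on a small made-up matrix; the asker never mentioned GSL, so this is purely illustrative:

/* Illustration only: GSL's symmetric eigensolver performs the tridiagonal
 * reduction and the QR/QL-style iteration internally. */
#include <stdio.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_eigen.h>

int main(void)
{
    const size_t n = 3;
    double data[] = { 2.0, 1.0, 0.0,
                      1.0, 2.0, 1.0,
                      0.0, 1.0, 2.0 };                 /* symmetric test matrix */
    gsl_matrix_view m = gsl_matrix_view_array(data, n, n);
    gsl_vector *eval = gsl_vector_alloc(n);
    gsl_eigen_symm_workspace *w = gsl_eigen_symm_alloc(n);

    gsl_eigen_symm(&m.matrix, eval, w);                /* eigenvalues (unordered) */

    for (size_t i = 0; i < n; i++)
        printf("lambda_%zu = %g\n", i, gsl_vector_get(eval, i));

    gsl_eigen_symm_free(w);
    gsl_vector_free(eval);
    return 0;
}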

Eigen Sparse Matrix

I am trying to multiply two large sparse matrices of size 300k x 1000k and 1000k x 300k using Eigen. The matrices are highly sparse, with ~0.01% non-zero entries; however, there's no block or other structure in their sparsity.
It turns out that Eigen chokes and ends up taking 55-60 GB of memory. In fact, it makes the final matrix dense, which explains why it takes so much memory.
I have tried multiplying matrices of similar sizes when one of the matrices is diagonal, and the multiplication works fine, with ~2-3 GB of memory.
Any thoughts on what's going wrong?
Even though your matrices are sparse, the result might be completely dense. You can try to remove the smallest entries with (A*B).pruned(ref,eps), where ref is a reference value for what counts as non-zero and eps is a tolerance. Basically, all entries smaller than ref*eps will be removed during the computation of the product, thus reducing both the memory usage and the size of the result. A better option would be to find a way to avoid performing this product at all.
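A minimal sketch of that suggestion; the sizes are placeholders, and the matrices are left empty here just to show the call shape:

// Pruned sparse product: small entries are dropped while the result is formed.
#include <Eigen/Sparse>

int main()
{
    using SpMat = Eigen::SparseMatrix<double>;
    SpMat A(3000, 10000), B(10000, 3000);   // stand-ins for the 300k x 1000k case
    // ... fill A and B, e.g. via setFromTriplets ...

    const double ref = 1.0;                 // reference magnitude for a "non-zero"
    const double eps = 1e-12;               // entries below ref*eps are discarded
    SpMat C = (A * B).pruned(ref, eps);
    return 0;
}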
