When norm is called on a matrix in MATLAB, it returns what's known as a "matrix norm" (a scalar value) rather than an array of vector norms. Is there any way to obtain the norm of each vector in a matrix without looping, taking advantage of MATLAB's vectorization?
You can compute the norm of each column or row of a matrix yourself by using element-wise arithmetic operators and functions defined to operate over given matrix dimensions (like SUM and MAX). Here's how you could compute some column-wise norms for a matrix M:
twoNorm = sqrt(sum(abs(M).^2,1)); %# The two-norm of each column
pNorm = sum(abs(M).^p,1).^(1/p); %# The p-norm of each column (define p first)
infNorm = max(abs(M),[],1); %# The infinity norm (max absolute value) of each column
These norms can easily be made to operate on the rows instead of the columns by changing the dimension arguments from ...,1 to ...,2.
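For example (a quick sketch with a made-up 2x2 matrix M):
M = [3 0; 4 5];
sqrt(sum(abs(M).^2, 1))   % column-wise two-norms: [5 5]
sqrt(sum(abs(M).^2, 2))   % row-wise two-norms:    [3; sqrt(41)]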
From MATLAB R2017b onwards, you can use vecnorm.
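For example (assuming R2017b or later, and the same example matrix as above):
M = [3 0; 4 5];
vecnorm(M, 2, 1)     % two-norm of each column: [5 5]
vecnorm(M, 2, 2)     % two-norm of each row:    [3; sqrt(41)]
vecnorm(M, Inf, 1)   % infinity norm of each column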
The existing implementation for the two-norm can be improved.
twoNorm = sqrt(sum(abs(M).^2,1)); %# The two-norm of each column
For complex M, abs(M).^2 computes a whole bunch of unnecessary square roots (inside abs) which just get squared straight away.
Far better to do:
twoNorm = sqrt( sum( real(M .* conj(M)), 1 ) );
This efficiently handles real and complex M.
Using real() ensures that sum and sqrt act over real numbers (rather than complex numbers with 0 imaginary component).
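As a quick sanity check (a minimal sketch with an arbitrary complex matrix), both formulations give the same column norms:
A = [1+2i, 3; 4i, -2+1i];                  % arbitrary complex test matrix
n1 = sqrt(sum(abs(A).^2, 1));              % original formulation
n2 = sqrt(sum(real(A .* conj(A)), 1));     % abs-free formulation
max(abs(n1 - n2))                          % ~0 (round-off only)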
Slight addition to P i's answer:
norm_2 = @(A,dim) sqrt( sum( real(A .* conj(A)), dim ) );
allows for
B = reshape(1:6, 2, 3)   % an example 2x3 matrix
norm_2( B , 1)
norm_2( B , 2)
or like this, if you want a norm_2.m file:
function norm_2__ = norm_2 (A_, dim_)
    norm_2__ = sqrt( sum( real(A_ .* conj(A_)), dim_ ) );
end
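A quick check that the helper agrees with MATLAB's built-in norm on a single (complex) column vector:
v = [3; 4i];     % complex column vector with two-norm 5
norm_2(v, 1)     % returns 5
norm(v)          % built-in norm agrees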
I fail to understand why, in the example below, x1 turns into a 1x1000 array while y is a single number.
x = [0:1:999];
y = (7.5*(x))/(18000+(x));
x1 = exp(-((x)*8)/333);
Any clarification would be highly appreciated!
Why is x1 1x1000?
As given in the documentation,
exp(X) returns the exponential eˣ for each element in array X.
Since x is 1x1000, -((x)*8)/333 is also 1x1000; when exp() is applied to it, the exponential of each of the 1000 elements is computed, and hence x1 is also 1x1000. As an example, exp([1 2 3]) is the same as [exp(1) exp(2) exp(3)].
Why is y a single number?
As given in the documentation,
If A is a rectangular m-by-n matrix with m ~= n, and B is a matrix with n columns, then x = B/A returns a least-squares solution of the system of equations x*A = B.
In your case,
A is 18000+x, whose size is 1x1000, i.e. m=1 and n=1000, so m ~= n,
and B is 7.5*x, which has n=1000 columns.
⇒ (7.5*x)/(18000+x) returns the least-squares solution of the system z*(18000+x) = 7.5*x (writing the unknown as z, since x is already your data vector), which is a single scalar.
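If you want to see this explicitly, the scalar returned by B/A satisfies the normal equation z*(A*A') = B*A'; a quick verification sketch (variable names as above):
x = 0:999;
A = 18000 + x;             % 1x1000
B = 7.5*x;                 % 1x1000
y     = B/A                % the single scalar mrdivide returns
y_chk = (B*A') / (A*A')    % same value (up to round-off) from the normal equation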
Final Remarks:
1. In x = [0:1:999]; the brackets are unnecessary; it is better written as x = 0:1:999;
2. It seems that you want element-wise division when computing y, for which you should use the ./ operator, like this:
y = (7.5*x)./(18000+x); % also removed unnecessary brackets
Also note that addition is always element-wise. .+ is not valid MATLAB syntax (it works in Octave though). See the valid arithmetic array and matrix operators in MATLAB here.
3. x1 also has some unnecessary brackets.
The question has already been answered by other people. I just want to point out a small thing. You do not need to write x = 0:1:999. It is better written as x = 0:999 as the default increment value used by MATLAB or Octave is 1.
Try explicitly specifying that you want to do element-wise operations rather than matrix operations:
y = (7.5.*(x))./(18000+(x));
In general, .* does elementwise multiplication, ./ does element-wise division, etc. So [1 2] .* [3 4] yields [3 8]. Omitting the dots will cause Matlab to use matrix operations whenever it can find a reasonable interpretation of your inputs as matrices.
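To see the difference on your actual data (a quick sketch), compare the sizes produced by / and ./:
x = 0:999;
size( (7.5*x) / (18000+x) )     % 1 1    -> matrix right-division (least squares)
size( (7.5*x) ./ (18000+x) )    % 1 1000 -> element-wise division, what you want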
Suppose I have an MxNx3 array A, where the first two indexes refer to the coordinates of a point, and the last index (the number '3') refers to the three components of a vector. e.g. A[4,7,:] = [1,2,3] means that the vector at point (7,4) is (1,2,3).
Now I need to implement the following operations:
Lx = D*ux - (x-xo)
Ly = D*uy + (y-yo)
Lz = D
where D, ux, uy, xo, yo are all constants that are already known. Lx, Ly and Lz are the three components of the vector at each point (x,y) (note: x is the column index and y is the row index). The main difficulty is the x-xo and y-yo terms, as x and y differ from point to point. So how can these operations be carried out for an MxNx3 array efficiently, using vectorized code or some other fast method?
thanks
You could use the meshgrid function from numpy:
import numpy as np

# known constants
M, N = 10, 10
D = 1
ux, uy = 0.5, 0.5
xo, yo = 1, 1

A = np.empty((M, N, 3))
x = np.arange(M)
y = np.arange(N)
xv, yv = np.meshgrid(x, y, sparse=False, indexing='ij')  # per-point coordinates

A[:, :, 0] = D*ux - (xv - xo)
A[:, :, 1] = D*uy + (yv - yo)
A[:, :, 2] = D
If you want to operate on the X and Y values, you should include them in the array (or in another array) instead of relying on their indices.
For that, you could use one of the range-creation routines from NumPy, especially numpy.mgrid.
I have an Nx3 array that contains N 3D points
a1 b1 c1
a2 b2 c2
....
aN bN cN
I want to calculate the Euclidean distances in an NxN array that measures the distance between each pair of 3D points: entry (i,j) of the result holds the distance between (ai,bi,ci) and (aj,bj,cj). Is it possible to write this in MATLAB without a loop?
The challenge of this problem is to produce the NxN result without using loops.
This can be done by giving bsxfun inputs with suitable dimensions. Normally X and ReshapedX would have to be the same size when calling bsxfun, but if the sizes are not equal and one of them has a singleton dimension (equal to 1), that matrix is virtually replicated along that dimension to match the other. The subtraction therefore returns an Nx3xN array containing the difference of every 3D point from every other point.
ReshapedX = permute(X, [3,2,1]);        % 1x3xN
DiffX = bsxfun(@minus, X, ReshapedX);   % Nx3xN pairwise differences
DistX = sqrt(sum(DiffX.^2, 2));         % Nx1xN
D = squeeze(DistX);                     % NxN distance matrix
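For example (a small usage sketch with random points; on R2016b or newer the bsxfun call can equivalently be written as X - ReshapedX thanks to implicit expansion):
X = rand(5, 3);                                    % 5 random 3D points
DiffX = bsxfun(@minus, X, permute(X, [3, 2, 1]));  % 5x3x5 pairwise differences
D = squeeze(sqrt(sum(DiffX.^2, 2)));               % 5x5: D(i,j) is the distance between points i and j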
Use pdist and squareform:
D = squareform( pdist(X, 'euclidean' ) );
For beginners, it can be a nice exercise to compute the distance matrix D using bsxfun:
elemDiff = bsxfun( @minus, permute(X,[ 1 3 2 ]), permute(X, [ 3 1 2 ]) );
D = sqrt( sum( elemDiff.^2, 3 ) );
To complete the comment of Divakar:
x = rand(10,3);
pdist2(x, x, 'euclidean')
In particular I'm interested in the summation. It uses k twice, but using sum I don't know how to obtain the index.
Considering only the summation:
summatory = sum( L(i, 1:j-1) * L(j, 1:j-1) );
is obviously wrong.
How can I do it without a for loop?
That's an inner product between a 1x(j-1) vector and a (j-1)x1 vector:
krange = 1:j-1;
summatory = L(i, krange) * L(j, krange)';
Your code would also have worked (now that you've fixed the syntax), if you used the element-wise product operator .* instead of the matrix product *.
Either compute the inner product with vector algebra (i.e. v*v' as demonstrated by @BenVoigt), or use sum, but with the element-wise product (.*):
summatory = sum( L(i, 1:j-1) .* L(j, 1:j-1) );
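Both forms give the same scalar, e.g. (arbitrary test values):
L = magic(5); i = 4; j = 3;
s1 = L(i, 1:j-1) * L(j, 1:j-1)';           % inner product
s2 = sum( L(i, 1:j-1) .* L(j, 1:j-1) );    % sum of element-wise products
isequal(s1, s2)                            % true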
I'm trying to come up with an algorithm that will allow me to generate a random N-dimensional real-valued vector that's linearly independent with respect to a set of already-generated vectors. I don't want to force them to be orthogonal, only linearly independent. I know Gram-Schmidt exists for the orthogonalization problem, but is there a weaker form that only gives you linearly independent vectors?
Step 1. Generate random vector vr.
Step 2. Copy vr to vo and update it as follows: for every already generated vector vi in v1, v2, ... vn, subtract the projection of vo onto vi.
The result is a random vector orthogonal to the subspace spanned by v1, v2, ... vn. If those vectors already span the whole space, the result is of course the zero vector :)
The decision of whether the initial vector was linearly independent can be made by comparing the norm of vo to the norm of vr. For a vector that is not linearly independent, the norm of vo will be zero or nearly zero (numerical precision may make it a small non-zero number on the order of a few times epsilon; this can be tuned in an application-dependent way).
Pseudocode:
vr = random_vector()
vo = vr
for v in (v1, v2, ... vn):
    vo = vo - ( dot(vo, v) / dot(v, v) ) * v
if norm(vo) < k1 * norm(vr):
    # this vector was mostly contained in the spanned subspace
else:
    # linearly independent, go ahead and use
Here k1 is a very small number, 1e-8 to 1e-10 perhaps?
You can also go by the angle between vr and the subspace: in that case, calculate it as theta = arcsin(norm(vo) / norm(vr)). Angles substantially different from zero correspond to linearly independent vectors.
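A minimal MATLAB sketch of this check (the names vr, vo, k1 follow the pseudocode above; for the sketch the existing vectors v1, ... vn are assumed to be stored as the columns of a matrix V, and the projection onto their span is removed in one least-squares solve rather than one vector at a time):
N = 5;
V = rand(N, 3);                        % already-generated vectors as columns (example data)
vr = rand(N, 1);                       % Step 1: random candidate vector
vo = vr - V * (V \ vr);                % Step 2: remove the component lying in span(V)
k1 = 1e-10;
if norm(vo) < k1 * norm(vr)
    disp('vr lies (numerically) in the spanned subspace - reject it')
else
    theta = asin(min(norm(vo)/norm(vr), 1));   % angle between vr and the subspace
    disp('vr is linearly independent - keep it')
end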
A somewhat OTT scheme is to generate an NxN non-singular matrix and use its columns (or rows) as the N linearly independent vectors.
To generate a non-singular matrix, one could generate its SVD factors and multiply them up. In more detail:
a/ generate a 'random' NxN orthogonal matrix U
b/ generate a 'random' NxN diagonal matrix S with positive numbers in the diagonal
c/ generate a 'random' NxN orthogonal matrix V
d/ compute
M = U*S*V'
To generate a 'random' orthogonal matrix U, one can use the fact that every orthogonal matrix can be written as a product of Householder reflectors, that is, of matrices of the form
H(v) = I - 2*v*v'/(v'*v)
where v is a non-zero random vector.
So one could:
initialise U to I
for i = 1..N
    generate a non-zero vector v
    update: U := H(v)*U
Note that if all these matrix multiplications become burdensome, one could write a special routine to do the update of U. Applying H(v) to a vector u is O(N):
u -> u - 2*(v'*u)/(v'*v) * v
and so applying H(v) to U can be done in O(N^2) rather than O(N^3).
One advantage of this scheme is that one has some control over 'how linearly independent' the vectors are. The product of the diagonal elements of S is (up to sign) the determinant of M, so if this product is 'very small' the vectors are 'almost' linearly dependent.
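Putting the whole scheme together, a sketch in MATLAB (the reflectors are applied with the O(N^2) update described above; the spread of the diagonal of S controls how close M is to singular):
N = 6;

% a/ 'random' orthogonal U as a product of N Householder reflectors
U = eye(N);
for i = 1:N
    v = randn(N, 1);                        % non-zero with probability 1
    U = U - 2 * v * ((v' * U) / (v' * v));  % H(v)*U applied in O(N^2)
end

% b/ 'random' diagonal S with positive diagonal entries
S = diag(0.5 + rand(N, 1));

% c/ 'random' orthogonal V, built the same way
V = eye(N);
for i = 1:N
    v = randn(N, 1);
    V = V - 2 * v * ((v' * V) / (v' * v));
end

% d/ non-singular M: its columns are N linearly independent vectors
M = U * S * V';
abs(det(M))          % ~ prod(diag(S)), up to round-off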