I have 3 vectors, named a, b and c, and I want to create a 3D matrix M so that M(i,j,k) = a(i) + b(j) + c(k), where a(i) means the ith element of vector a, and likewise for the other vectors and the matrix.
Creating a 2D matrix is easy, e.g. a + b', but I am not sure how to create a 3D matrix.
You only need permute or reshape to do the multi-dimensional equivalent of transposition:
a + b.' + reshape(c, 1, 1, []);
Assuming that a, b, c are column vectors of sizes L×1, M×1 and N×1, this works because
a is L×1, or equivalently L×1×1;
b.' is 1×M×1;
reshape(c, 1, 1, []) is 1×1×N.
So, by implicit expansion the result is an L×M×N 3D array.
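A quick sanity check with arbitrary small example vectors (the implicit-expansion form needs R2016b or newer; on older releases nest bsxfun(@plus, ...) instead):
a = (1:4).'; b = (10:10:30).'; c = [100; 200];
M = a + b.' + reshape(c, 1, 1, []);
size(M)       % 4 x 3 x 2
M(2,3,1)      % 132, i.e. a(2) + b(3) + c(1)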
I have a 3D matrix A of size 20x500x68. I have two vectors carrying information regarding this matrix which are:
B (containing zeros and ones) of size 1x68 and
C (containing numbers from 1 to 3) of size 1x68
(in length both B and C correspond to the third dimension of A).
I would like to create a "sub-matrix" of A containing only those slices along the third dimension where B==1 and C==3.
Schematically:
[sub matrix of A] = A (B = 1, C = 3)
Is there any way to do this without a loop?
SMA = A(:,:,B==1 & C==3)
%This submatrix contains all rows and columns of that third dimension of A
%where B equals 1 and C equals 3
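A minimal sketch with made-up data (sizes mirror the question) just to confirm the shape of the result:
A = rand(20, 500, 68);
B = randi([0 1], 1, 68);             % zeros and ones
C = randi([1 3], 1, 68);             % values from 1 to 3
SMA = A(:, :, B == 1 & C == 3);
size(SMA)                            % 20 x 500 x nnz(B == 1 & C == 3)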
Say I have the following matrix
B = [1 2 3;4 5 6;7 8 9;10 11 12]
and another matrix
A = [a b c;d e f;g h i]
How do I multiply each row of matrix B by the matrix A (without using a for loop), i.e.
for i = 1:4
    c(i) = B(i,:)*A*B(i,:)'
end
Many thanks in advance.
You can use:
c = diag(B*A*B.');
However, this computes a whole 4×4 matrix only to extract its diagonal, so it's not very efficient.
A more efficient way that only computes the desired values is:
c = sum(bsxfun(@times, permute(sum(bsxfun(@times, B, permute(A, [3 1 2])), 2), [1 3 2]), B), 2);
Here is a breakdown of the above code:
c1 = sum(bsxfun(@times, B, permute(A, [3 1 2])), 2); % B(i,:)*A
c = sum(bsxfun(@times, permute(c1, [1 3 2]), B), 2); % (B(i,:)*A)*B(i,:)'
The first permute is used so that A's rows line up with B's columns along the 2nd dimension. Following the element-wise multiplication in bsxfun(), summing along the 2nd dimension reproduces the effect of the vector-matrix multiplication B(i,:) * A occurring in the for loop.
After the first sum, the 2nd dimension is a singleton. So, the second permute swaps it with the 3rd dimension, producing a 2-D matrix. Now, c1 and B are the same size. Following the element-wise multiplication in the second bsxfun(), summing along the 2nd dimension again reproduces the effect of B(i,:) * A * B(i,:)'.
Take note of a hidden advantage in this approach. Since we are using element-wise multiplication to replicate the results of matrix multiplication, order of the arguments doesn't matter in the bsxfun() calls. One less thing to worry about!
Or, from Matlab R2016b onwards, you can replace bsxfun(@times, ...) by .*, thanks to implicit expansion:
c = sum(permute(sum(B.*permute(A, [3 1 2]), 2), [1 3 2]).*B, 2);
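As a quick check with random example data (sizes as in the question) that the loop, the diag version, and the vectorized version all agree:
B = rand(4, 3); A = rand(3, 3);
c_loop = zeros(4, 1);
for i = 1:4
    c_loop(i) = B(i, :) * A * B(i, :)';
end
c_diag = diag(B * A * B.');
c_vec  = sum(permute(sum(B .* permute(A, [3 1 2]), 2), [1 3 2]) .* B, 2);
max(abs([c_loop - c_diag; c_loop - c_vec]))   % effectively zero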
I need to make an array of matrices in numpy. This is so I can treat them as scalars and dot this with another array, like so:
a = [1,2,3]
b = [A,B,C] #A, B, and C are matrices
result = a.dot(b) #1A + 2B + 3C
Or similarly with a matrix M such that:
M.dot(b) -> another array of matrices
Is there a way of doing this? Currently, the matrices get absorbed into the outer numpy array (the one that provides .dot() in the first place). So, if A, B, and C were 3x3 matrices, b would be:
b.shape -> (3,3,3) #matrices absorbed into array
Thanks.
Solution:
import numpy as np

a = np.array([1, 2, 3])
X = np.ones((3, 3))
Y = np.ones((3, 3))
Z = np.ones((3, 3))

# Use a 1-D object array so that X, Y and Z stay whole matrices ("scalars")
# instead of being merged into a single 3x3x3 numeric array.
b = np.empty(3, dtype=object)
b[:] = [X, Y, Z]

print(a.dot(b))  # 1*X + 2*Y + 3*Z
results in:
[[6. 6. 6.]
 [6. 6. 6.]
 [6. 6. 6.]]
Remember that the array of scalars must have the same length as the number of matrices in b (here there are 3 matrices, so a needs length 3).
I have a three-dimensional domain in MATLAB. Over this domain I have defined three arrays, each of size (NX,NY,NZ):
A1; % size(A1) = [NX NY NZ]
A2; % size(A2) = [NX NY NZ]
A3; % size(A3) = [NX NY NZ]
For each element, I am trying to construct an array which holds the value of A1, A2, and A3. Would the following be a good candidate for having a 1×3 vector at each point?
B = [A1(:) A2(:) A3(:)];
B = reshape(B, [size(A1) 1 3]);
If the 1×3 array is named C, I am trying to find C'*C at each point.
C = [A1(i,j,k) A2(i,j,k) A3(i,j,k)]; % size(C) = [1 3]
D = C'*C; % size(D) = [3 3]
My ultimate goal is to find the 3×3 array D for all the points in the domain in a vectorized fashion. The output, which holds D for each point, will have size [NX NY NZ 3 3]. Could someone help me?
Basically, we concatenate A1, A2 and A3 along the 4th and the 5th dimensions separately, which leaves singleton dimensions in the 5th and the 4th dimensions respectively. bsxfun [apply element-by-element binary operation to two arrays with singleton expansion enabled] then expands along those singletons, producing a 3x3 matrix along the 4th-5th dimensions for each triplet [A1(i,j,k), A2(i,j,k), A3(i,j,k)], which is exactly the product C'*C at that point.
D = bsxfun(@times, cat(4, A1, A2, A3), cat(5, A1, A2, A3));
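A quick spot check (array sizes picked arbitrarily for illustration) that a 3x3 slice of D matches C'*C at a sample point:
NX = 4; NY = 5; NZ = 6;
A1 = rand(NX, NY, NZ); A2 = rand(NX, NY, NZ); A3 = rand(NX, NY, NZ);
D = bsxfun(@times, cat(4, A1, A2, A3), cat(5, A1, A2, A3));   % NX x NY x NZ x 3 x 3
i = 2; j = 3; k = 4;
C = [A1(i,j,k) A2(i,j,k) A3(i,j,k)];
max(max(abs(squeeze(D(i,j,k,:,:)) - C'*C)))                   % effectively zero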
I have 2 matrices, A (n×m) and B (n×d), and want to multiply element-wise each column of A with the rows of B. There are m columns in A and n 1×d rows in B, so the results are m n×d matrices. Then I want to sum(result_i, 1) to get m 1×d vectors, which I then vertcat to get an m×d matrix. I'm doing these operations using a for loop and it is slow because n and d are big. How can I vectorize this in MATLAB to make it faster? Thank you.
EDIT:
You're all right: I was confused by my own question. What I meant by "multiply element-wise each column of A with a row of B" is to multiply the n elements of a column of A with the corresponding n rows of B. What I want to do with one column of A is as follows (and I repeat this for the m columns of A, then vertcat the resulting C vectors to get an m×d matrix):
column_of_A =
3
3
1
B =
3 1 3 3
2 2 1 2
1 3 3 3
C = sum(diag(column_of_A)*B, 1)
16 12 15 18
You can vectorize your operation the following way. Note, however, that vectorizing comes at the cost of higher memory usage, so the solution may end up not working for you.
% multiply nxm A with nx1xd B to create a nxmxd array
tmp = bsxfun(@times, A, permute(B, [1 3 2]));
% sum and turn into mxd
out = squeeze(sum(tmp, 1));
You may want to do everything in one line, which may help the Matlab JIT compiler to save on memory.
EDIT
Here's a way to replace the first line if you don't have bsxfun
[n,m] = size(A);
[n,d] = size(B);
tmp = repmat(A,[1 1 d]) .* repmat(permute(B,[1 3 2]),[1,m,1]);
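Or, on R2016b and newer, the bsxfun call can be replaced by implicit expansion with .* (same logic as above, just newer syntax):
out = squeeze(sum(A .* permute(B, [1 3 2]), 1));   % m x d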
It's ugly, but as far as I can see, it works. I'm not sure it will be faster than your loop though, plus, it has a large memory overhead. Anyway, here goes:
A_3D = repmat(reshape(A, size(A, 1), 1, size(A, 2)), 1, size(B, 2));
B_3D = repmat(B, [ 1 1 size(A, 2)]);
result_3D = sum(A_3D .* B_3D, 1);
result = reshape(result_3D, size(B, 2), size(A, 2)).'
What it does is: make A into a 3D array of size n x 1 x m, so one column sits at each index of the 3rd dimension. Then we repeat the array so we get an n x d x m array, and we repeat B along the 3rd dimension as well. We then do an element-wise multiplication of all the elements and sum along the 1st dimension. The resulting array is 1 x d x m. We reshape this into a d x m matrix and transpose it to get the m x d result.
I'm pretty sure I switched around the size of the dimensions a few times in my explanation, but I hope you get the general gist.
Multiplying with a diagonal matrix seems at least twice as fast, but I couldn't find a way to use diag, since it wants a vector or 2D matrix as input. I might try again later tonight, I feel there must be a faster way :).
[Edit] Split up the command in parts to at least make it a little bit readable.
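For reference, a small self-contained check (random example data) that both vectorized approaches reproduce the per-column computation described in the question:
A = rand(5, 3); B = rand(5, 4);                        % n = 5, m = 3, d = 4
ref = zeros(size(A, 2), size(B, 2));
for j = 1:size(A, 2)
    ref(j, :) = sum(diag(A(:, j)) * B, 1);             % the per-column computation from the question
end
out1 = squeeze(sum(bsxfun(@times, A, permute(B, [1 3 2])), 1));
A_3D = repmat(reshape(A, size(A, 1), 1, size(A, 2)), 1, size(B, 2));
B_3D = repmat(B, [1 1 size(A, 2)]);
out2 = reshape(sum(A_3D .* B_3D, 1), size(B, 2), size(A, 2)).';
max(abs([out1(:) - ref(:); out2(:) - ref(:)]))         % effectively zero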
This is the way I would do this (with A being a single column, as in your edited example):
sum(repmat(A,1,4).*B)
If you don't know the number of columns of B:
sum(repmat(A,1,size(B,2)).*B)
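Plugging in the numbers from the edited question as a quick sanity check:
A = [3; 3; 1];
B = [3 1 3 3; 2 2 1 2; 1 3 3 3];
sum(repmat(A, 1, size(B, 2)) .* B)   % 16 12 15 18, same as sum(diag(A)*B, 1)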