QUESTION
I'm looking for an elegant way to multiply two arrays along one particular dimension.
SIMILAR QUESTION
There is already a similar question on the official MATLAB forum, but the thread is outdated (2004).
EXAMPLE
With M1 a [6x4x4] matrix and M2 a [6x1] matrix, I would like to multiply (element by element) M1 with M2 along the 1st dimension of M1 to obtain a matrix M of size [6x4x4].
An equivalent to:
M1 = rand(6,4,4);
M2 = rand(6,1);
for ii = 1:size(M1,2)
for jj = 1:size(M1,3)
M(:,ii,jj) = M1(:,ii,jj).*M2;
end
end
Do you know a cool way to do that? (no loop, a 1- or 2-line solution, ...)
If I'm interpreting your question correctly, you want to take each column along the first dimension of M1 and element-wise multiply it with the vector M2 (size 6 x 1). bsxfun is perfect for that situation, since it expands M2 along its singleton dimensions automatically:
M = bsxfun(@times, M1, M2);
(If M2 instead matched the 3rd dimension of M1, you would first reorient it with permute(M2, [2 3 1]).)
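On MATLAB R2016b or newer, implicit expansion makes bsxfun unnecessary here; a minimal equivalent:
M = M1 .* M2;   % M2 (6x1) expands automatically across dimensions 2 and 3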
Related
For my project I need to save vectors in a matrix, thus creating a multidimensional array (3D-Matrix).
Now I'm wondering on how to access my vectors.
Let's say I have a lot of vectors stored in an array c. I can access each vector with c(i,:).
I can also perform vector operations and use built-in functions like norm(c(1,:)), and it gives me the absolute value (magnitude) of the vector. Everything's fine.
Now if I store a vector v at position (i,j) of a matrix M (so that M becomes a 3D matrix), I can still access every element of the vector, but M(i,j,:) doesn't give me the output [vx; vy; vz] I'm looking for. Instead MATLAB gives three separate outputs, which causes problems when using the built-in vector operations.
Is there any way around this?
Or do I have to implement my own functions to operate on a 3d-matrix?
Edit:
If z is a vector, the output is: z = [zx zy zz]
If this vector is stored in a 2x2x3 matrix M, let's say at M(1,1), the output when accessing the vector with M(1,1,:) isn't [zx zy zz].
Instead the output is:
M(:,:,1) = zx
M(:,:,2) = zy
M(:,:,3) = zz
Thanks for pointing out that I should change the dimension along which the vector is stored in the matrix.
M(i,j,:) returns a 1×1×3 array, which MATLAB does not treat as the vector v you want, but as a three-dimensional array that happens to have only one element along the first and second dimensions.
You can easily remove the dimensions of length 1 with the built-in function squeeze:
A = zeros(1,1,3);
A(:,:,1:3) = [1 2 3]
A =
A(:,:,1) =
1
A(:,:,2) =
2
A(:,:,3) =
3
B = squeeze(A)
B = 3×1
1
2
3
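Applied to the question's example, a minimal sketch (my own illustration):
v = squeeze(M(1,1,:));   % 3x1 column vector [zx; zy; zz]
norm(v)                  % built-in vector functions now work as expected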
I would like to compute the product of the next n adjacent elements of a vector. The number n of elements to be multiplied should be given as the function's input.
For example for this input I should compute the product of every 3 consecutive elements, starting from the first.
[p, ind] = max_product([1 2 2 1 3 1],3);
This gives [1*2*2, 2*2*1, 2*1*3, 1*3*1] = [4,4,6,3].
Is there any practical way to do it? Now I do this using:
for ii = 1:(length(v)-2)
p = prod(v(ii:ii+n-1));
end
where v is the input vector and n is the number of elements to be multiplied.
In this example n = 3, but n can take any positive integer value.
Depending on whether n and length(v) are odd or even, I sometimes get the right answer and sometimes an error.
For example for arguments:
v = [1.35912281237829 -0.958120385352704 -0.553335935098461 1.44601450110386 1.43760259196739 0.0266423803393867 0.417039432979809 1.14033971399183 -0.418125096873537 -1.99362640306847 -0.589833539347417 -0.218969651537063 1.49863539349242 0.338844452879616 1.34169199365703 0.181185490389383 0.102817336496793 0.104835620599133 -2.70026800170358 1.46129128974515 0.64413523430416 0.921962619821458 0.568712984110933]
n = 7
I get the error:
Index exceeds matrix dimensions.
Error in max_product (line 6)
p = prod(v(ii:ii+n-1));
Is there any correct general way to do it?
Based on the solution in Fast numpy rolling_product, I'd like to suggest a MATLAB version of it, which leverages the movsum function introduced in R2016a.
The mathematical reasoning is that a product of numbers equals the exponential of the sum of their logarithms: prod(x) = exp(sum(log(x))).
A possible MATLAB implementation of the above may look like this:
function P = movprod(vec,window_sz)
P = exp(movsum(log(vec),[0 window_sz-1],'Endpoints','discard'));
if isreal(vec) % Ensures correct outputs when the input contains negative and/or
P = real(P); % complex entries.
end
end
Several notes:
I haven't benchmarked this solution, and do not know how it compares in terms of performance to the other suggestions.
It should work correctly with vectors containing zero and/or negative and/or complex elements.
It can be easily expanded to accept a dimension to operate along (for array inputs), and any other customization afforded by movsum.
The 1st input is assumed to be either a double or a complex double row vector.
Outputs may require rounding.
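For the example in the question, a quick check (my own addition; requires R2016a or later for movsum):
p = movprod([1 2 2 1 3 1], 3)   % ≈ [4 4 6 3], up to floating-point round-off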
Update
Inspired by the nicely thought-out answer of Dev-iL comes this handy solution, which does not require MATLAB R2016a or above:
out = real( exp(conv(log(a),ones(1,n),'valid')) )
The basic idea is to transform the multiplication into a sum, so that a moving sum can be used, which in turn can be realised by convolution.
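A quick sanity check with the example from the question (my own addition):
a = [1 2 2 1 3 1];  n = 3;
out = real( exp(conv(log(a),ones(1,n),'valid')) )   % ≈ [4 4 6 3], up to floating-point round-off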
Old answers
This is one way using gallery to get a circulant matrix and indexing the relevant part of the resulting matrix before multiplying the elements:
a = [1 2 2 1 3 1]
n = 3
%// circulant matrix
tmp = gallery('circul', a(:))
%// product of relevant parts of matrix
out = prod(tmp(end-n+1:-1:1, end-n+1:end), 2)
out =
4
4
6
3
A more memory-efficient alternative, in case there are no zeros in the input:
a = [10 9 8 7 6 5 4 3 2 1]
n = 2
%// cumulative product
x = [1 cumprod(a)]
%// shifted by n and divided by itself
y = circshift( x,[0 -n] )./x
%// remove last elements
out = y(1:end-n)
out =
90 72 56 42 30 20 12 6 2
Your approach is correct. You should just change the for loop to for ii = 1:(length(v)-n+1) and then it will work fine.
If you are not going to deal with large inputs, another approach is using gallery as explained in @thewaywewalk's answer.
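For completeness, a minimal corrected version of the question's loop (storing each product in a vector p is my addition; the answer above only mentions the loop bound):
p = zeros(1, length(v)-n+1);
for ii = 1:(length(v)-n+1)
    p(ii) = prod(v(ii:ii+n-1));
end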
I think the problem is your indexing. The line for ii = 1:(length(v)-2) does not provide the correct range of ii when n is not 3.
Try this:
function out = max_product(in,size)
size = size-1; % this is because we add size to i later
out = zeros(length(in)-size,1); % assuming that a column-vector output is wanted, one product per window
for i = 1:length(in)-size
out(i) = prod(in(i:i+size));
end
Your code works when restated like so:
for ii = 1:(length(v)-(n-1))
p = prod(v(ii:ii+(n-1)));
end
That should take care of the indexing problem.
Using bsxfun you can create a matrix in which each row contains n consecutive elements, and then take prod along the 2nd dimension of that matrix. I think this is the most efficient way:
max_product = @(v, n) prod(v(bsxfun(@plus, (1 : n), (0 : numel(v)-n)')), 2);
p = max_product([1 2 2 1 3 1],3)
Update:
Some other solutions have been updated, and some, such as @Dev-iL's answer, outperform the others. I can suggest fftconv, which in Octave outperforms conv.
If you can upgrade to R2017a, you can use the new movprod function to compute a windowed product.
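A minimal sketch for the question's v and n (my own addition; assumes R2017a or later):
p = movprod(v, [0 n-1], 'Endpoints', 'discard');   % product of each element and the next n-1 elements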
I have an n x p matrix that looks like this:
n = 100
p = 10
x <- matrix(sample(c(0,1), size = p*n, replace = TRUE), n, p)
I want to create an n x p x p array A whose kth item along the 1st dimension is a p x p diagonal matrix containing the elements of x[k,]. What is the most efficient way to do this in R? I'm looking for a way that uses outer (or some other vectorized approach) rather than one of the apply functions.
Solution using lapply:
A <- aperm(simplify2array(lapply(1:nrow(x), function(i) diag(x[i,]))), c(3,2,1))
I'm looking for something more efficient than this.
Thanks.
As a starting point, here is a humble for loop method with pre-allocation of the matrix.
# pre-allocate an array of the desired size
# (oriented p x p x n here; aperm(myArray, c(3,2,1)) gives the n x p x p orientation if needed)
myArray <- array(0, dim=c(ncol(x), ncol(x), nrow(x)))
# fill in array
for(i in seq_len(nrow(x))) myArray[,,i] <- diag(x[i,])
It should run relatively fast. On my machine, for a 1000 x 100 matrix, the lapply method took 0.87 seconds, while the for loop (including the array pre-allocation) took 0.25 seconds to transform the matrix into your desired array. So the for loop was about 3.5 times faster.
Transpose your original matrix
Note also that row operations on R matrices tend to be slower than column operations, because matrices are stored in memory by column. If you transpose your matrix first and perform the operation column-wise, the time to complete the operation on the (now 100 x 1000) matrix drops to 0.14 seconds, about half that of the first for loop and roughly 7 times faster than the lapply method.
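A minimal sketch of that variant (my own illustration; the original answer describes the idea but does not show the code):
# transpose once, then read contiguous columns of tx instead of rows of x
tx <- t(x)
myArray2 <- array(0, dim = c(ncol(x), ncol(x), nrow(x)))
for (i in seq_len(nrow(x))) myArray2[,,i] <- diag(tx[,i])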
I have an Nx3 array that contains N 3D points
a1 b1 c1
a2 b2 c2
....
aN bN cN
I want to calculate an NxN array that measures the Euclidean distance between each pair of 3D points: entry (i,j) of the result is the distance between (ai,bi,ci) and (aj,bj,cj). Is it possible to write this in MATLAB without a loop?
The challenge of your problem is to build the N*N result matrix without using loops.
I overcame this challenge by giving suitable dimensions to the inputs of bsxfun. By default X and ReshapedX would need the same dimensions when we call bsxfun, but if their sizes are not equal and one of them has a singleton (length-1) dimension, that array is virtually replicated along the singleton dimension to match the other. Therefore bsxfun returns an N*3*N array which contains the difference of each 3D point from all the others.
ReshapedX = permute(X,[3,2,1]);
DiffX = bsxfun(@minus,X,ReshapedX);
DistX = sqrt(sum(DiffX.^2,2));
D = squeeze(DistX);
Use pdist and squareform:
D = squareform( pdist(X, 'euclidean' ) );
For beginners, it can be a nice exercise to compute the distance matrix D using bsxfun:
elemDiff = bsxfun( @minus, permute(X,[ 1 3 2 ]), permute(X, [ 3 1 2 ]) );
D = sqrt( sum( elemDiff.^2, 3 ) );
To complete the comment of Divakar:
x = rand(10,3);
pdist2(x, x, 'euclidean')
This question is related to matlab: find the index of common values at the same entry from two arrays.
Suppose that I have a 1000 by 10000 matrix that contains the values 0, 1, and 2. Each row is treated as a sample. I want to calculate the pairwise distance between those samples according to the formula d = 1 - 1/(2p) * sum(a./c + b./d), where a, b, c, d can be treated as row vectors of length 10000 defined entry-wise as below, and p = 10000. c and d are probabilities such that c + d = 1.
An example of how to find the values of a, b, c, d: suppose we want to find d between sample i and sample j; then I look at rows i and j.
If the kth entries of rows i and j have the values 2 and 2, then a=2, b=0, c=1, d=0 (I guess I will assign 0/0 = 0 in this case).
If the kth entries of rows i and j have the values 2 and 1 (or vice versa), then a=1, b=0, c=3/4, d=1/4.
Similar assignments apply to the other cases: (2,0): a=0, b=0, c=1/2, d=1/2; (1,1): a=1, b=1, c=1/2, d=1/2; (1,0): a=0, b=1, c=1/4, d=3/4; (0,0): a=0, b=2, c=0, d=1.
The MATLAB code I have so far uses for loops over i and j, then finds the cases above using find, and then creates two arrays for a./c and b./d. This is extremely slow; is there a way I can improve the efficiency?
Edit: the distance d is the formula given in this paper on page 13.
Provided those coefficients are fixed, I think I've successfully vectorised the per-pair distance computation. Figuring out the formulae was fun. I flipped things around a bit to minimise division, and since I wasn't aware of pdist until @horchler's comment, you get it wrapped in loops with the constants factored out:
% m is the data
[n, p] = size(m);
distance = zeros(n);
for ii=1:n
for jj=ii+1:n
a = min(m(ii,:), m(jj,:));
b = 2 - max(m(ii,:), m(jj,:));
c = 4 ./ (m(ii,:) + m(jj,:));      % reciprocal of the question's c
c(c == Inf) = 0;                   % treat the 0/0 terms as 0
d = 4 ./ (4 - m(ii,:) - m(jj,:));  % reciprocal of the question's d
d(d == Inf) = 0;
distance(ii,jj) = sum(a.*c + b.*d);
% distance(jj,ii) = distance(ii,jj); % optional for the full matrix
end
end
distance = 1 - (1 / (2 * p)) * distance;
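Since @horchler's comment mentioned pdist, here is a hedged sketch of the same computation using pdist with a custom distance function (requires the Statistics Toolbox; the function name gendist is hypothetical):
function D2 = gendist(XI, XJ)
% XI is a 1-by-p row of the data, XJ an m-by-p block of rows; returns m-by-1 distances.
p  = size(XI, 2);
s  = bsxfun(@plus, XI, XJ);              % m_i + m_j for every pair of entries
rc = 4 ./ s;        rc(isinf(rc)) = 0;   % 1/c, with the 0/0 case set to 0
rd = 4 ./ (4 - s);  rd(isinf(rd)) = 0;   % 1/d, with the 0/0 case set to 0
a  = bsxfun(@min, XI, XJ);
b  = 2 - bsxfun(@max, XI, XJ);
D2 = 1 - (1/(2*p)) * sum(a.*rc + b.*rd, 2);
end
% usage: D = squareform( pdist(m, @gendist) );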