(I don't know how to program in MATLAB; this is just a general question about the MATLAB language.)
In Excel, we can store a formula in a cell. For instance, if A2 contains the formula =A1+10, re-evaluating A2 returns 30 when the value of A1 is 20.
My question is, is there a similar mechanism in MATLAB? That is, can we specify a formula in an element of an array in MATLAB, so that we can re-evaluate the array later?
Edit 1:
Following @rayryeng's comment, I have tried to make an example to illustrate the concept. Actually, this is exactly what spreadsheet languages such as Excel can do.
So my question is, is there a mechanism that permits the following in MATLAB? (Note that the following syntax is just symbolic.)
>> B = [1 2; B{1,1}+2 4] % store some values and a formula in the array
B =
1 2
3 4
>> B{1,1} = 10 % change the value of one cell
B =
10 2
3 4
>> EVAL(B) % there is a re-evaluation command to re-calculate all the cells
ans =
10 2
13 4
Hopefully I'm understanding what you want correctly, but the answer is indeed yes. You can store "formulas" in a cell array where each element is a function handle or an anonymous function.
Perhaps you mean something like this:
formulae = {@(x) x+10, @sin, @cos, @(x) x / 3};
The @ symbol denotes a function handle, and @(x) declares an anonymous function with input variable x. The first cell element is a function that adds 10 to every value that goes into it; the second and third elements are handles to sin and cos, so they act like those trigonometric functions; the last handle divides every value that goes into it by 3.
To demonstrate, let's create a small array, then go through each formula and apply each of them to the small array:
>> formulae = {@(x) x+10, @sin, @cos, @(x) x / 3};
>> A = [1 2; 3 4]
A =
1 2
3 4
>> formulae{1}(A)
ans =
11 12
13 14
>> formulae{2}(A)
ans =
0.8415 0.9093
0.1411 -0.7568
>> formulae{3}(A)
ans =
0.5403 -0.4161
-0.9900 -0.6536
>> formulae{4}(A)
ans =
0.3333 0.6667
1.0000 1.3333
We first create the formulae, then create the small 2 x 2 matrix [1 2; 3 4]. Afterwards, we access each formula's cell and pass A into the function, producing the output you see.
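If you want to evaluate every stored formula against A in one go, cellfun can loop over the handles for you. This is only a sketch built on the formulae array above:
results = cellfun(@(f) f(A), formulae, 'UniformOutput', false); % apply each handle to A
celldisp(results) % results{1} is A+10, results{2} is sin(A), and so on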
However, when you're starting out, start by actually declaring functions in function files. Don't use this style of programming for practical applications; it makes your code less readable. For example, sin(A) is much more readable than formulae{2}(A). People reading your code have to remember which position in the array corresponds to which formula you are applying to the input.
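For instance, the first formula above could live in its own function file instead (a sketch; the file name addTen.m is my own choice, not anything from the original post):
function y = addTen(x)
% Add 10 to every element of the input array.
y = x + 10;
end
Calling addTen(A) then reads as naturally as sin(A).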
Related
For my project I need to save vectors in a matrix, thus creating a multidimensional array (3D-Matrix).
Now I'm wondering on how to access my vectors.
Let's say I have a lot of vectors stored in an array c. I could access all vectors with c(i,:).
I can also perform vector operations and use built-in functions like norm(c(1,:)), which gives me the magnitude of the vector. Everything is fine.
Now if I store a vector v at position (i,j) of a 3D matrix M, I can still access every element of the vector, but M(i,j,:) doesn't give me the output [vx;vy;vz] I'm looking for. Instead MATLAB gives three outputs, resulting in problems when using the built-in vector operations.
Is there any way around this?
Or do I have to implement my own functions to operate on a 3d-matrix?
Edit:
If z is a vector, the output is: z = [zx zy zz]
If this vector is stored in a 2x2x3 matrix M, let's say at M(1,1,:), the output when accessing the vector via M(1,1,:) isn't [zx zy zz].
Instead the output is:
M(:,:,1) = zx
M(:,:,2) = zy
M(:,:,3) = zz
Thanks for pointing out that I should change the direction in which the vector is stored in the matrix.
M(i,j,:) returns a 1×1×3 array. As far as MATLAB is concerned, this is not exactly the vector v you want, but a three-dimensional array that happens to have only one element along the first and second dimensions.
You can easily remove the dimensions of length 1 with the built-in function squeeze:
A = zeros(1,1,3);
A(:,:,1:3) = [1 2 3]
A =
A(:,:,1) =
1
A(:,:,2) =
2
A(:,:,3) =
3
B = squeeze(A)
B = 3×1
1
2
3
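Applied to the use case above, the idea would look something like this sketch (M, i and j are assumed to be the array and indices from the question):
v = squeeze(M(i,j,:)); % 3x1 column vector instead of a 1x1x3 array
norm(v)                % built-in vector operations now behave as expected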
For an nD array, it would be nice to be able to auto squeeze to remove singleton dimensions. Is there a way to do this that I don't know about? This would be especially useful for aggregate functions (e.g. sum, mean, etc) where you always expect a result with fewer dimensions.
Here's a simple example:
>> A = ones(3,3,3);
>> B = mean(A);
>> size(B)
ans =
1 3 3
>> squeeze(B)
ans =
1 1 1
1 1 1
1 1 1
It would be nice if Matlab/Octave would automatically do the squeezing for me. Or if there was a way to turn that option on (something similar to hold on for plots).
As far as I know, Matlab does not have that. And I don't think it would be a good idea. Consider a modified version of your example:
>> A = ones(3,1,1,3);
>> B = mean(A);
>> size(B)
ans =
1 1 1 3
What should "auto-squeeze" do here? Reduce B to size [1 1 3] or to [1 3]?
You could argue that it should remove the same dimension that mean has turned into a singleton. But then it would have to be done within the mean function, perhaps with an optional input argument. Once you get the function output, there is no information about how it was obtained.
Or you could argue that it should remove all singleton dimensions, like squeeze (more or less) does. But then it would remove dimensions that were already singleton in the function input, which is probably unwanted.
If you ask me, having a second input in squeeze specifying which (singleton) dimensions to remove would be a nice addition (in the same vein as you can use mean(A, 1) to force the operation to be applied along the first dimension even if A happens to be a row vector).
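To illustrate that suggestion, here is a rough sketch of such a helper; the name squeezedims and its behaviour are my own invention, not an existing MATLAB function:
function B = squeezedims(A, dims)
% Remove only the requested dimensions, which must all be singletons.
sz = size(A);
sz(end+1:max(dims)) = 1;       % pad in case trailing singleton dims are implicit
assert(all(sz(dims) == 1), 'Requested dimensions must be singletons.');
sz(dims) = [];                 % drop the chosen dimensions
B = reshape(A, [sz 1 1]);      % trailing ones keep reshape happy for short size vectors
end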
I agree with Luis and Cris, but I would add the following.
Both MATLAB and Octave do automatically squeeze extra dimensions, in one very particular scenario: any dimensions at the end that have been reduced to singletons are automatically squeezed out.
E.g.
A = ones([1,2,3,4]);
B = mean(A, 4);
size(B)
% ans = 1 2 3
Note how the answer is [1,2,3] and not [1,2,3,1]. This is in contrast to languages like Python, where a shape of (1,1) is very different from a shape of (1,).
Therefore, with regard to your question, one way to use this to your advantage could be to ensure that the dimension to be reduced is always found at the end, so that it is automatically simplified.
This becomes even more useful when you realise that:
size(A(:)) % ans = 24 1 (i.e. 24)
size(A(:,:)) % ans = 1 24
size(A(:,:,:)) % ans = 1 2 12
size(A(:,:,:,:)) % ans = 1 2 3 4
This means that if you order your dimensions hierarchically, you can ensure that any operations that need to take place over the higher dimensions can (a) be vectorised easily, and (b) give a natural result, without the need to waste time squeezing or permuting the resulting dimensions.
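For example, tying this back to the vectors-in-a-3D-array question above, storing the components in the last dimension means a reduction over that dimension comes out with a natural shape (just an illustrative sketch):
M = rand(4, 5, 3);            % a 4x5 grid of 3-component vectors, components last
lengths = sqrt(sum(M.^2, 3)); % 4x5 array of vector norms; the reduced trailing
                              % dimension disappears automatically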
I would like to compute the product of the next n adjacent elements of a matrix. The number n of elements to be multiplied should be given in function's input.
For example for this input I should compute the product of every 3 consecutive elements, starting from the first.
[p, ind] = max_product([1 2 2 1 3 1],3);
This gives [1*2*2, 2*2*1, 2*1*3, 1*3*1] = [4,4,6,3].
Is there any practical way to do it? Now I do this using:
for ii = 1:(length(v)-2)
p = prod(v(ii:ii+n-1));
end
where v is the input vector and n is the number of elements to be multiplied.
in this example n=3 but can take any positive integer value.
Depending on whether n is odd or even, or length(v) is odd or even, I sometimes get the right answer and sometimes an error.
For example for arguments:
v = [1.35912281237829 -0.958120385352704 -0.553335935098461 1.44601450110386 1.43760259196739 0.0266423803393867 0.417039432979809 1.14033971399183 -0.418125096873537 -1.99362640306847 -0.589833539347417 -0.218969651537063 1.49863539349242 0.338844452879616 1.34169199365703 0.181185490389383 0.102817336496793 0.104835620599133 -2.70026800170358 1.46129128974515 0.64413523430416 0.921962619821458 0.568712984110933]
n = 7
I get the error:
Index exceeds matrix dimensions.
Error in max_product (line 6)
p = prod(v(ii:ii+n-1));
Is there any correct general way to do it?
Based on the solution in Fast numpy rolling_product, I'd like to suggest a MATLAB version of it, which leverages the movsum function introduced in R2016a.
The mathematical reasoning is that a product of numbers equals the exponential of the sum of their logarithms: x1*x2*...*xk = exp(log(x1) + log(x2) + ... + log(xk)).
A possible MATLAB implementation of the above may look like this:
function P = movprod(vec,window_sz)
P = exp(movsum(log(vec),[0 window_sz-1],'Endpoints','discard'));
if isreal(vec)   % if the input is real, discard the spurious imaginary
    P = real(P); % parts that log() introduces for negative entries
end
end
Several notes:
I haven't benchmarked this solution, and do not know how it compares in terms of performance to the other suggestions.
It should work correctly with vectors containing zero and/or negative and/or complex elements.
It can be easily expanded to accept a dimension to operate along (for array inputs), and any other customization afforded by movsum.
The 1st input is assumed to be either a double or a complex double row vector.
Outputs may require rounding.
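For reference, a quick usage sketch with the example from the question, assuming the function above is saved as movprod.m on the path:
v = [1 2 2 1 3 1];
P = movprod(v, 3)
% P =
%      4     4     6     3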
Update
Inspired by Dev-iL's nicely thought-out answer, here is a handy solution which does not require MATLAB R2016a or above:
out = real( exp(conv(log(a),ones(1,n),'valid')) )
The basic idea is to transform the multiplication into a sum by taking logarithms; a moving sum can then be used, which in turn can be realised by convolution.
Old answers
This is one way using gallery to get a circulant matrix and indexing the relevant part of the resulting matrix before multiplying the elements:
a = [1 2 2 1 3 1]
n = 3
%// circulant matrix
tmp = gallery('circul', a(:))
%// product of relevant parts of matrix
out = prod(tmp(end-n+1:-1:1, end-n+1:end), 2)
out =
4
4
6
3
More memory efficient alternative in case there are no zeros in the input:
a = [10 9 8 7 6 5 4 3 2 1]
n = 2
%// cumulative product
x = [1 cumprod(a)]
%// shifted by n and divided by itself
y = circshift( x,[0 -n] )./x
%// remove last elements
out = y(1:end-n)
out =
90 72 56 42 30 20 12 6 2
Your approach is correct. You should just change the for loop to for ii = 1:(length(v)-n+1) and then it will work fine.
If you are not going to deal with large inputs, another approach is using gallery as explained in @thewaywewalk's answer.
I think the problem may be based on your indexing. The line that states for ii = 1:(length(v)-2) does not provide the correct range of ii.
Try this:
function out = max_product(in, n)
n = n - 1;                      % because we add n to i later
out = zeros(length(in) - n, 1); % one slot per window, as a column vector
for i = 1:length(in) - n
    out(i) = prod(in(i:i+n));
end
end
Your code works when restated like so (indexing p so that each window's product is kept rather than overwritten):
for ii = 1:(length(v)-(n-1))
    p(ii) = prod(v(ii:ii+(n-1)));
end
That should take care of the indexing problem.
Using bsxfun, you create a matrix in which each row contains n consecutive elements, then take prod along the 2nd dimension of that matrix. I think this is the most efficient way:
max_product = @(v, n) prod(v(bsxfun(@plus, (1 : n), (0 : numel(v)-n)')), 2);
p = max_product([1 2 2 1 3 1],3)
Update:
Some other solutions have been updated, and some, such as @Dev-iL's answer, outperform the others. I can suggest fftconv, which in Octave outperforms conv.
If you can upgrade to R2017a, you can use the new movprod function to compute a windowed product.
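A sketch of how that would look for the example from the question, using a trailing window of length n and discarding incomplete windows:
v = [1 2 2 1 3 1];
n = 3;
p = movprod(v, [0 n-1], 'Endpoints', 'discard')
% p =
%      4     4     6     3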
I am looking for a way to add up the elements of two arrays such that the first element of the first array is added to every element of the second array, then the second element of the first array is added to every element of the second array, and so on. The final vector will be length(a)*length(b) long.
For example:
a=[1,2,3,4] b=[5,6,7]
answer =
[(1+5),(1+6),(1+7),(2+5),(2+6),(2+7),(3+5),(3+6),(3+7),(4+5),(4+6),(4+7)]
=[6,7,8,7,8,9,8,9,10,9,10,11]
Read up on bsxfun. It's very useful for this kind of thing (and usually faster than arrayfun or for loops):
result = bsxfun(@plus, a(:).', b(:)); % matrix of size numel(b) x numel(a)
result = result(:).'; % linearize to a vector
Or, a little more exotic: kron does what you want with products instead of sums. Since kron(exp(a), exp(b)) multiplies every exp(a(i)) by every exp(b(j)), taking the log of the result recovers the pairwise sums a(i)+b(j):
result = log(kron(exp(a),exp(b)));
My first thought is to do this with arrayfun using an anonymous function that adds each scalar element of a to the full array in b. Then since you get a cell array result you can expand that cell array into the array you are looking for:
>> a=[1,2,3,4], b=[5,6,7]
>> result = arrayfun(@(x) x+b, a,'UniformOutput',false);
>> result = [result{:}]
result =
6 7 8 7 8 9 8 9 10 9 10 11
Use meshgrid to create matrices of a and b and use matrix addition to compute a+b
a=[1,2,3,4], b=[5,6,7]
[A_matrix,B_matrix] = meshgrid(a,b)
result = A_matrix + B_matrix
result = result(:)'
Take this simple example:
a = [1 2i];
x = zeros(1,length(a));
for n=1:length(a)
x(n) = isreal(a(n));
end
In an attempt to vectorize the code, I tried:
y = arrayfun(@isreal,a);
But the results are not the same:
x =
1 0
y =
0 0
What am I doing wrong?
This certainly appears to be a bug, but here's a workaround:
>> y = arrayfun(@(x) isreal(x(1)),a)
y =
1 0
Why does this work? I'm not totally sure, but it appears that when you perform an indexing operation on the variable before calling ISREAL it removes the "complex" attribute from the array element if the imaginary component is zero. Try this in the Command Window:
>> a = [1 2i]; % A complex array
>> b = a(1); % Indexing element 1 removes the complex attribute...
>> c = complex(a(1)); % ...but we can put that attribute back
>> whos
Name Size Bytes Class Attributes
a 1x2 32 double complex
b 1x1 8 double % Not complex
c 1x1 16 double complex % Still complex
Apparently, ARRAYFUN must internally maintain the "complex" attribute of the array elements it passes to ISREAL, thus treating them all as being complex numbers even if the imaginary component is zero.
It might help to know that MATLAB stores the real and imaginary parts of a matrix separately. Try the following:
>> format debug
>> a = [1 2i];
>> disp(a)
Structure address = 17bbc5b0
m = 1
n = 2
pr = 1c6f18a0
pi = 1c6f0420
1.0000 0 + 2.0000i
where pr is a pointer to the memory block containing the real parts of all values, and pi is a pointer to the imaginary parts of all values in the matrix. Since all elements are stored together, in this case they all have an imaginary part.
Now compare these two approaches:
>> arrayfun(@(x)disp(x),a)
Structure address = 17bbcff8
m = 1
n = 1
pr = 1bb8a8d0
pi = 1bb874d0
1
Structure address = 17c19aa8
m = 1
n = 1
pr = 1c17b5d0
pi = 1c176470
0 + 2.0000i
versus
>> for n=1:2, disp(a(n)), end
Structure address = 17bbc930
m = 1
n = 1
pr = 1bb874d0
pi = 0
1
Structure address = 17bbd180
m = 1
n = 1
pr = 1bb874d0
pi = 1bb88310
0 + 2.0000i
So it seems that when you access a(1) in the for loop, the value returned (in the ans variable) has no imaginary part (null pi) and is thus considered real.
On the other hand, ARRAYFUN seems to access the values of the matrix directly (without returning them in the ANS variable), so it sees both the pr and pi pointers, which are not null, and thus all elements are considered non-real.
Please keep in mind that this is just my interpretation, and I could be mistaken...
Answering really late on this one... The MATLAB function ISREAL operates in a really rather counter-intuitive way for many purposes. It tells you if a given array taken as a whole has no complex part at all - it tells you about the storage, it doesn't really tell you anything about the values in the array. It's a bit like the ISSPARSE function in that regard. So, for example
isreal(complex(1)) % returns FALSE
What you'll find in MATLAB is that certain operations automatically trim any all-zero imaginary parts. So, for example
x = complex(1);
isreal(x); % FALSE, we just forced there to be an imaginary part
isreal(x(1)); % TRUE - indexing realised it could drop the zero imaginary part
isreal(x(:)); % FALSE - "(:)" indexing is just a reshape, not real indexing
In short, MATLAB really needs a function which answers the question "does this value have zero imaginary part", in an elementwise way on an array.
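A minimal sketch of such an elementwise check, using nothing more than the imag function (the handle name is just for illustration):
hasZeroImag = @(x) imag(x) == 0; % true wherever the stored imaginary part is exactly zero
hasZeroImag([1 2i])
% ans =
%    1   0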