Concatenate dataset arrays in MATLAB

Hi, I have many arrays of different lengths and I want to create ONE long (1D) array out of all of them. Counterintuitively, vertcat gives me a dimension error, even though I do not see why the dimensions of my arrays should have to match.
Am I using vertcat wrong?

Your vectors are probably row vectors of different lengths (or matrices with differing numbers of columns), which is why vertcat complains. Suppose A to D are the arrays you want to create a 1D vector from. Try "flattening" them with (:) and concatenating afterwards, like this:
long_1D_vector = [A(:); B(:); C(:); D(:)];
You may transpose the result if you want a row vector instead:
long_1D_vector = [A(:); B(:); C(:); D(:)].';

Related

convert dictionary to abstract matrix in julia

I'm trying to do dimension reduction, and I have a:
d = Dict{Tuple{String, String}, Vector{Float64}}
I'm trying to apply umap to it.
However, umap only accepts an AbstractMatrix, so I call collect(d), but that turns the Dict into a Vector of pairs, not a matrix.
How do I convert it correctly so I can apply umap?
You should be able to use
hcat(values(d)...)
With values(d) you get a collection of vectors (the dictionary values), and hcat concatenates them horizontally. However, hcat takes each vector as a separate argument, so you need to splat the collection into its individual elements; that is what the three dots ... do.
Check the documentation for splatting.
As noted in the comments, a more efficient alternative is
reduce(hcat, values(d))
which achieves the same result while avoiding splatting.

How should I convert cell array data into separate single arrays in a "for" loop

I have several (500) files which I imported into MATLAB. There are 500 cells, and each cell holds data of size 5000 by 2. I want to save
them separately into arrays like M and N inside a loop, i.e. M(i) and N(i), so that I can do any kind of processing or fitting with the data within the loop.
for k = 1:500
    value(k) = {mydata{k}(:).data};
    IV{1,k} = value{1,k};
    A(k) = cat(1, IV{1,k});
    M(k) = A(:,1);
    N(k) = A(:,2);
end
If I check it manually, the concatenation command H = cat(1, IV{1,4}); works perfectly for saving into a single array, but it does not work inside the loop. I think the problem lies in the correct usage of the cell array contents.
I like cell2mat in situations like this: https://www.mathworks.com/help/matlab/ref/cell2mat.html
I would turn your cell into an array; you might be able to avoid a for loop entirely, as in the sketch below.
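A minimal sketch of that idea, assuming mydata is a 1x500 cell array whose k-th cell already holds the 5000x2 numeric matrix (if your cells hold structs with a .data field, as the question's code suggests, adjust the indexing accordingly):
all_data = cell2mat(mydata(:));   % stack every 5000x2 block into one tall 2-column matrix
M = all_data(:, 1);               % first columns of all files, concatenated
N = all_data(:, 2);               % second columns of all files, concatenated

% If you still need per-file vectors inside a loop:
for k = 1:numel(mydata)
    Mk = mydata{k}(:, 1);         % column 1 of file k
    Nk = mydata{k}(:, 2);         % column 2 of file k
    % ... processing / fitting on Mk and Nk goes here ...
end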

Concatenate subcells through one dimension of a cell array without using loops in MATLAB

I have a cell array. Each cell contains a vector of variable length. For example:
example_cell_array = cellfun(@(x) x.*rand([length(x),1]), cellfun(@(x) ones(x,1), num2cell(ceil(10.*rand([7,4]))), 'UniformOutput', false), 'UniformOutput', false)
I need to concatenate the contents of the cells down through one dimension, then perform an operation on each concatenated vector that generates a scalar for each column of my cell array (like sum(), for example - the actual operation is complex, time consuming, and not naturally vectorisable, especially for different length vectors).
I can do this with loops easily (for my concatenated vector sum example) as follows:
[M, N] = size(example_cell_array);
result = zeros(1,N);
cat_cell_array = cell(1,N);
for n = 1:N
    cat_cell_array{n} = [];
    for m = 1:M
        cat_cell_array{n} = [cat_cell_array{n}; example_cell_array{m,n}];
    end
end
result = cell2mat(cellfun(@(x) sum(x), cat_cell_array, 'UniformOutput', false))
Unfortunately this is WAY too slow. (My cell array is 1Mx5 with vectors in each cell ranging in length from 100-200)
Is there a simple way to produce the concatenated cell array where the vectors contained in the cells have been concatenated down one dimension?
Something like:
dim=1;
cat_cell_array = ?concatcells?(dim, example_cell_array);
Edit:
Since so many people have been testing the solutions, just FYI: the function I'm applying to each concatenated vector is circ_kappa(x), available from the Circular Statistics Toolbox.
Some approaches might suggest unpacking the numeric data from example_cell_array using {..}, concatenating, and then packing it back into bigger cells to form your cat_cell_array. Then you would again need to unpack the numeric data from that concatenated cell array to perform your operation on each cell.
Now, in my view, all this unpacking and packing won't be efficient if cat_cell_array isn't one of your intended outputs. So, considering all this, let me suggest two approaches here.
Loopy approach
The first one is a for-loop code -
data1 = vertcat(example_cell_array{:});                     % extract all numeric data at once (column-major)
starts = [1 sum(cellfun('length',example_cell_array),1)];   % 1 followed by each column's total length
idx = cumsum(starts);                                       % start index of each column's interval in data1
result = zeros(1,size(example_cell_array,2));
% replace the line above with "result(size(example_cell_array,2))=0;" for a small speedup
for k1 = 1:numel(idx)-1
    result(k1) = sum(data1(idx(k1):idx(k1+1)-1));
end
So, you need to replace sum with your actual operation.
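For instance, with the circ_kappa operation mentioned in the edit (assuming circ_kappa from the Circular Statistics Toolbox accepts a plain vector of angles), the loop body would become:
for k1 = 1:numel(idx)-1
    result(k1) = circ_kappa(data1(idx(k1):idx(k1+1)-1));   % your actual operation instead of sum
end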
Almost-vectorized approach
If example_cell_array has a lot of columns, my second suggestion would be an almost-vectorized approach, though it doesn't perform badly with a small number of columns either. This code uses cellfun on its first line to get the length of each concatenated column. cellfun is basically a wrapper around a loop, but it is not very expensive in terms of runtime, which is why I categorize this approach as almost vectorized.
The code would be -
lens = sum(cellfun('length',example_cell_array),1);   % length of each concatenated column
maxlens = max(lens);
numlens = numel(lens);
array1(maxlens,numlens) = 0;                          % preallocate a maxlens-by-numlens array of zeros
array1(bsxfun(@ge,lens,(1:maxlens)')) = vertcat(example_cell_array{:});   % fill the valid positions column by column
result = sum(array1,1);
What you need to do now is make your operation run column-wise on array1, using the mask created by the bsxfun call to pick out the valid entries. Thus, if array1 is an M x 5 array, you need to select the valid elements of each column with the mask and then apply the operation to those elements (see the sketch below). Let me know if you need more info on the masking issue.
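A minimal sketch of that masking step, assuming myop stands in for your actual operation (e.g. circ_kappa):
mask = bsxfun(@ge, lens, (1:maxlens)');       % true where array1 holds real data
result = zeros(1, numlens);
for k = 1:numlens
    result(k) = myop(array1(mask(:,k), k));   % apply the operation to the valid entries of column k
end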
Hope one of these approaches would work for you!
Quick tests: Using a 250000x5 example_cell_array, quick tests show that both of these approaches perform very well for the sum operation and give about a 400x speedup over the code in the question at my end.
For the concatenation itself, it sounds like you might want the functional form of cat:
for n = 1:N
    cat_cell_array{n} = cat(1, example_cell_array{:,n});
end
This will concatenate all the arrays in the cells in each column in the original input array.
You can define a function like this:
cellcat = @(C) arrayfun(@(k) cat(1, C{:, k}), 1:size(C,2), 'uni', 0);
And then just use
>> cellcat(example_cell_array)
ans =
[42x1 double] [53x1 double] [51x1 double] [47x1 double]
I think you are looking to generate cat_cell_array without using for loops. If so, you can do it as follows:
cat_cell_array = cellfun(@(x) cell2mat(x), num2cell(example_cell_array,1), 'UniformOutput', false);
In my view, the line above can replace your entire for loop. You can then calculate your complex function over this cat_cell_array.
If only result is important to you and you do not want to store cat_cell_array, then you can do everything in a single line (not recommended for readability):
result = cell2mat(cellfun(@(x) sum(x), cellfun(@(x) cell2mat(x), num2cell(example_cell_array,1), 'Uni', false), 'Uni', false));

signrank test in a three-dimensional array in MATLAB

I have a 60x60x35 array and would like to run the Wilcoxon signed rank test to determine, for each element, whether the median across the third array dimension (i.e. across the 35 values) differs from zero. Thus, I would like my results in two 60x60 arrays: one with values of 0 and 1 depending on the test decision, and a separate array with the corresponding p values.
The problem I am facing is specifying the command so that the output has the appropriate dimensions and is calculated across the appropriate dimension of the array.
Thanks for your help and all the best!
One way to solve your problem is using a nested for-loop. Let's say your data is stored in data:
data = rand(60,60,35);
size_data = size(data);
p = zeros(size_data(1),size_data(2));
p(:,:) = NaN;
h = zeros(size_data(1),size_data(2));
h(:,:) = NaN;
for k = 1:size_data(1)
    for l = 1:size_data(2)
        tmp_data = data(k,l,:);
        tmp_data = reshape(tmp_data,1,numel(tmp_data));
        [p(k,l), h(k,l)] = signrank(tmp_data);
    end
end
What I am doing is preallocating the memory for p and h as 60x60 matrices. Then I set them to NaN, so you can easily see if something went wrong (0 would be an acceptable result, so it could not serve as a "not yet computed" marker). Now I loop over all elements and store the current data slice in a new variable. signrank needs the data as a vector, so I reshape the 1x1x35 slice into a row vector.
You could hide the explicit loops with arrayfun (it still loops internally); bsxfun is not really applicable here, since signrank is not an element-wise binary operation.
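A sketch of that arrayfun version, using the same data variable as above:
[K, L] = ndgrid(1:size(data,1), 1:size(data,2));                      % all (row, column) index pairs
[p, h] = arrayfun(@(k, l) signrank(squeeze(data(k, l, :))), K, L);    % p and h come back as 60x60 arrays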

How to get mean, median, and other statistics over entire matrix, array or dataframe?

I know this is a basic question but for some strange reason I am unable to find an answer.
How should I apply basic statistical functions like mean, median, etc. over an entire array, matrix, or data frame to get a single value, rather than a vector over rows or columns?
Since this comes up a fair bit, I'm going to treat this a little more comprehensively, to include the 'etc.' piece in addition to mean and median.
For a matrix, or array, as the others have stated, mean and median will return a single value. However, var will compute the covariances between the columns of a two dimensional matrix. Interestingly, for a multi-dimensional array, var goes back to returning a single value. sd on a 2-d matrix will work, but is deprecated, returning the standard deviation of the columns. Even better, mad returns a single value on a 2-d matrix and a multi-dimensional array. If you want a single value returned, the safest route is to coerce using as.vector() first. Having fun yet?
For a data.frame, mean is deprecated, but will again act on the columns separately. median requires that you coerce to a vector first, or unlist. As before, var will return the covariances, and sd is again deprecated but will return the standard deviation of the columns. mad requires that you coerce to a vector or unlist. In general, for a data.frame, if you want something to act on all the values, you will just unlist it first.
Edit: Late breaking news(): In R 3.0.0 mean.data.frame is defunctified:
o mean() for data frames and sd() for data frames and matrices are
defunct.
By default, mean and median etc work over an entire array or matrix.
E.g.:
# array:
m <- array(runif(100),dim=c(10,10))
mean(m) # returns *one* value.
# matrix:
mean(as.matrix(m)) # same as before
For data frames, you can coerce them to a matrix first (the reason the default is over columns is that a data frame can have columns containing strings, which you can't take the mean of):
# data frame
mdf <- as.data.frame(m)
# mean(mdf) returns column means
mean( as.matrix(mdf) ) # one value.
Just be careful that your dataframe has all numeric columns before coercing to matrix. Or exclude the non-numeric ones.
You can use the dplyr library (install it via install.packages('dplyr')) and then:
dataframe.mean <- dataframe %>%
  summarise_all(mean)   # replace mean with median for the median
