Getting value and index of an array in MATLAB

I have an array in MATLAB, let's say of size (256, 256). Now I need to build a new array of dimensions (3, 256*256), containing in each row the value and the index of that value in the original array. I.e.:
test = [1,2,3;4,5,6;7,8,9]
test =
1 2 3
4 5 6
7 8 9
I need as result:
[1, 1, 1; 2, 1, 2; 3, 1, 3; 4, 2, 1; 5, 2, 2; and so on]
Any ideas?
Thanks in advance!

What you want is the output of meshgrid:
[C,B]=meshgrid(1:size(test,1),1:size(test,2))
M=test;
M(:,:,2)=B;
M(:,:,3)=C;
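If you need the (3, 256*256) layout from the question rather than a 3-D array, a reshape should get you there (a small sketch of my own, assuming M as built above):
M2 = reshape(M, [], 3).'; % 3-by-numel(test): row 1 holds the values, rows 2-3 the two index grids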

Here's what I came up with:
test = [1,2,3;4,5,6;7,8,9]; % original matrix
[m, n] = size(test);
o = find(test); % linear indices; example 1 breaks with zero-valued elements
test1 = [o, reshape(test, m*n, 1), o]
Elapsed time is 0.004104 seconds.
% one liner from above
% (depending on data size might want to avoid dual find calls)
test2=[ find(test) reshape(test, size(test,1)*size(test,2), 1 ) find(test)]
Elapsed time is 0.008121 seconds.
[r, c, v] = find(test); % just another way to write above, still breaks on zeros
test3 = [r, v, c]
Elapsed time is 0.009516 seconds.
[i, j] = ind2sub([m n], 1:m*n); % use ind2sub to build tables of indices
% and reshape to build a column vector
test4 = [i', reshape(test, m*n, 1), j']
Elapsed time is 0.011579 seconds.
test0 = [1,2,3;0,5,6;0,8,9]; % testing find with zeros.....breaks
% test5=[ find(test0) reshape(test0, size(test0,1)*size(test0,2), 1 ) find(test0)] % error in horzcat
[i, j] = ind2sub([m n], 1:m*n); % testing ind2sub with zeros.... winner
test6 = [i', reshape(test0, m*n, 1), j']
Elapsed time is 0.014166 seconds.
Using meshgrid from above:
Elapsed time is 0.048007 seconds.
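For completeness, here's a sketch of my own using ndgrid that reproduces the row-major ordering of the question's example and, unlike the find-based variants, survives zero elements:
tt = test.'; % transpose so the linear order matches the row-major example
[c, r] = ndgrid(1:size(test,2), 1:size(test,1));
test7 = [tt(:), r(:), c(:)] % rows of [value, row, col], zeros included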

Total numbers having frequency k in a given range

How do I find the total count of numbers having frequency exactly k in a particular range (l, r) of a given array? There are 10^5 queries of the format l, r, and each query is built on the previous query's answer: both l and r are incremented by the previous answer and taken mod n, swapping l and r if l > r. Note that 0 <= a[i] <= 10^9 and the total number of elements in the array is n = 10^5.
My Attempt:
n,k,q = map(int,input().split())
a = list(map(int,input().split()))
ans = 0
for _ in range(q):
    l,r = map(int,input().split())
    l += ans
    l %= n
    r += ans
    r %= n
    if l > r:
        l,r = r,l
    d = {}
    for i in a[l:r+1]:
        try:
            d[i] += 1
        except:
            d[i] = 1
    curr_ans = 0
    for i in d.keys():
        if d[i] == k:
            curr_ans += 1
    ans = curr_ans
    print(ans)
Sample Input:
5 2 3
7 6 6 5 5
0 4
3 0
4 1
Sample Output:
2
1
1
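As a worked check of the sample (using the transformation described above): the first query (0, 4) covers the whole array, where 6 and 5 each appear k = 2 times, giving 2. The second query (3, 0) then becomes l = (3+2) % 5 = 0 and r = (0+2) % 5 = 2, i.e. the subarray [7, 6, 6], in which only 6 has frequency 2, giving 1.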
If the number of distinct values in the array is not too large, you may consider storing one counts array per unique value, each as long as the input array, recording the number of appearances of that value up to each position. Then you just need to subtract the counts at the start of a range from the counts at its end to find how many frequency matches there are:
def range_freq_queries(seq, k, queries):
    n = len(seq)
    c = freq_counts(seq)
    result = [0] * len(queries)
    offset = 0
    for i, (l, r) in enumerate(queries):
        result[i] = range_freq_matches(c, offset, l, r, k, n)
        offset = result[i]
    return result

def freq_counts(seq):
    s = {v: i for i, v in enumerate(set(seq))}
    counts = [None] * (len(seq) + 1)
    counts[0] = [0] * len(s)
    for i, v in enumerate(seq, 1):
        counts[i] = list(counts[i - 1])
        j = s[v]
        counts[i][j] += 1
    return counts

def range_freq_matches(counts, offset, start, end, k, n):
    start, end = sorted(((start + offset) % n, (end + offset) % n))
    return sum(1 for cs, ce in zip(counts[start], counts[end + 1]) if ce - cs == k)
seq = [7, 6, 6, 5, 5]
k = 2
queries = [(0, 4), (3, 0), (4, 1)]
print(range_freq_queries(seq, k, queries))
# [2, 1, 1]
You can do it faster with NumPy, too. Since each result depends on the previous one, you will have to loop in any case, but you can use Numba to really speed things up:
import numpy as np
import numba as nb

def range_freq_queries_np(seq, k, queries):
    seq = np.asarray(seq)
    c = freq_counts_np(seq)
    return _range_freq_queries_np_nb(seq, k, queries, c)

@nb.njit  # This is not necessary but will make things faster
def _range_freq_queries_np_nb(seq, k, queries, c):
    n = len(seq)
    offset = np.int32(0)
    out = np.empty(len(queries), dtype=np.int32)
    for i, (l, r) in enumerate(queries):
        l = (l + offset) % n
        r = (r + offset) % n
        l, r = min(l, r), max(l, r)
        out[i] = np.sum(c[r + 1] - c[l] == k)
        offset = out[i]
    return out

def freq_counts_np(seq):
    uniq = np.unique(seq)
    seq_pad = np.concatenate([[uniq.max() + 1], seq])
    comp = seq_pad[:, np.newaxis] == uniq
    return np.cumsum(comp, axis=0)
seq = np.array([7, 6, 6, 5, 5])
k = 2
queries = [(0, 4), (3, 0), (4, 1)]
print(range_freq_queries_np(seq, k, queries))
# [2 1 1]
Let's compare it with the original algorithm:
from collections import Counter

def range_freq_queries_orig(seq, k, queries):
    n = len(seq)
    ans = 0
    counter = Counter()
    out = [0] * len(queries)
    for i, (l, r) in enumerate(queries):
        l += ans
        l %= n
        r += ans
        r %= n
        if l > r:
            l, r = r, l
        counter.clear()
        counter.update(seq[l:r+1])
        ans = sum(1 for v in counter.values() if v == k)
        out[i] = ans
    return out
Here is a quick test and timing:
import random
import numpy as np

# Make random input
random.seed(0)
seq = random.choices(range(1000), k=5000)
queries = [(random.choice(range(len(seq))), random.choice(range(len(seq))))
           for _ in range(20000)]
k = 20
# Input as array for NumPy version
seq_arr = np.asarray(seq)
# Check all functions return the same result
res1 = range_freq_queries_orig(seq, k, queries)
res2 = range_freq_queries(seq, k, queries)
print(all(r1 == r2 for r1, r2 in zip(res1, res2)))
# True
res3 = range_freq_queries_np(seq_arr, k, queries)
print(all(r1 == r3 for r1, r3 in zip(res1, res3)))
# True
# Timings
%timeit range_freq_queries_orig(seq, k, queries)
# 3.07 s ± 1.11 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit range_freq_queries(seq, k, queries)
# 1.1 s ± 307 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit range_freq_queries_np(seq_arr, k, queries)
# 265 ms ± 726 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
Obviously the effectiveness of this depends on the characteristics of the data. In particular, if there are few repeated values, the time and memory cost of constructing the counts table approaches O(n^2).
Let's say the input array is A, |A|=n. I'm going to assume that the number of distinct elements in A is much smaller than n.
We can divide A into sqrt(n) segments each of size sqrt(n). For each of these segments, we can calculate a map from element to count. Building these maps takes O(n) time.
With that preprocessing done, we can answer each query by adding together all the maps wholly contained in (l,r), of which there are at most sqrt(n), then adding any extra elements (or going one segment over and subtracting), also sqrt(n).
If there are k distinct elements, this takes O(sqrt(n) * k) per query (each segment map has at most min(k, sqrt(n)) entries), so in the worst case O(n) per query if in fact every element of A is distinct.
You can keep track of the elements that have the desired count while combining the hashes and extra elements.
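Here is a minimal sketch of that idea (the function names and block-merging details are my own; the description above doesn't fix them):
from collections import Counter
from math import isqrt

def build_blocks(a):
    # Precompute one value -> count map per block of size ~sqrt(n); O(n) total.
    size = max(1, isqrt(len(a)))
    return [Counter(a[i:i + size]) for i in range(0, len(a), size)], size

def query(a, blocks, size, l, r, k):
    # Merge the maps of blocks wholly inside [l, r]; count stragglers directly.
    freq = Counter()
    i = l
    while i <= r:
        if i % size == 0 and i + size - 1 <= r:
            freq += blocks[i // size]  # whole block: merge its precomputed map
            i += size
        else:
            freq[a[i]] += 1            # partial block: count a single element
            i += 1
    return sum(1 for v in freq.values() if v == k)

a = [7, 6, 6, 5, 5]
blocks, size = build_blocks(a)
print(query(a, blocks, size, 0, 4, 2))  # 2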

Sum up vector values till threshold, then start again

I have a vector a = [1 3 4 2 1 5 6 3 2]. Now I want to create a new vector b with the cumulative sum of a, but after reaching a threshold, let's say 5, the cumulative sum should reset and start again until it reaches the threshold again. So the new vector should look like this:
b = [1 4 4 2 3 5 6 3 5]
Any ideas?
You could build a sparse matrix that, when multiplied by the original vector, returns the cumulative sums. I haven't timed this solution versus others, but I strongly suspect it will be the fastest for large a.
% Original data
a = [1 3 4 2 1 5 6 3 2];
% Threshold
th = 5;
% Cumulative sum normalized by the threshold
b = cumsum(a)/th;
% Group indices to be summed, obtained by checking for equality,
% rounded down, between each cumsum value and its previous value. We add one
% to prevent NaNs from occurring in the next step.
c = cumsum(floor(b) ~= floor([0,b(1:end-1)]))+1;
% Build the sparse matrix, removing all values that are in the upper
% triangle.
S = tril(sparse(c.'./c == 1));
% In case you use MATLAB R2016a or older:
% S = tril(sparse(bsxfun(@rdivide,c.',c) == 1));
% Matrix multiplication to create the output o.
o = S*a.';
By normalizing the argument of cumsum with the threshold and flooring, you can get grouping indices for accumarray, which can then do the cumulative sum groupwise:
t = 5;
a = [1 3 4 2 1 5 6 3 2];
%// cumulative sum of normalized vector a
n = cumsum(a/t);
%// subs for accumarray
subs = floor( n ) + 1;
%// cumsum of every group
aout = accumarray( subs(:), (1:numel(subs)).', [], @(x) {cumsum(a(x))});
%// gather results;
b = [aout{:}]
One way is to use a loop. You create the first cumulative sum cs, and then, as long as elements in cs are larger than your threshold th, you replace them with elements from the cumulative sum of the rest of the elements in a.
Because some elements in a might themselves be larger than th, this loop would be infinite unless we also exclude those elements.
Here is a simple solution with a while loop:
a = [1 3 4 2 1 5 6 3 2];
th = 5;
cs = cumsum(a);
while any(cs>th & cs~=a) % if 'cs' has values larger than 'th',
    % and there are values smaller than 'th' left in 'a',
    % sum all the values in 'a' that come after 'cs' reached 'th',
    % excluding values that are larger than 'th'
    cs(cs>th & cs~=a) = cumsum(a(cs>th & cs~=a));
end
Calculate the cumulative sum and replace the values at the indices obeying your condition.
a = [1 3 4 2 1 5 6 3 2] ;
b = [1 4 4 2 3 5 6 3 5] ;
iwant = a ;
a_sum = cumsum(a) ;
iwant(a_sum<5) = a_sum(a_sum<5) ;
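For reference, here is a plain loop that reproduces the example output exactly, under my reading of the rule (the running sum resets to the current element whenever adding it would exceed the threshold):
a = [1 3 4 2 1 5 6 3 2];
th = 5;
b = zeros(size(a));
s = 0;
for i = 1:numel(a)
    if s + a(i) > th
        s = a(i); % reset: start a new partial sum at this element
    else
        s = s + a(i);
    end
    b(i) = s;
end
% b = [1 4 4 2 3 5 6 3 5]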

Indexing into N-D matrix

I have a matrix M (4*2) with values:
[1 0;
0 0;
1 1;
0 1]
And an array X = [0.3 0.4 0.5 0.2];
All entries of M are binary (0/1). What I want is the corresponding value of X for each row of M mapped into an N-D array of size [2,2] called Z. Each dimension here is indexed by 0/1: a 0 maps to the first position along that dimension and a 1 to the second. X(1) needs to go to Z(2,1), X(2) needs to go to Z(1,1), and so on.
Z will look like this:
Z =
[0.4 0.2;
0.3 0.5];
Currently I am looping over this, but it is really expensive to do so. Please note that this is a minimal example; I need to do this for a 128-by-7 matrix into a 7-D array.
Any suggestions on how to speed up this process?
You could try using accumarray (not sure if it's faster):
>> Z = accumarray(M + 1, X, [2 2])
Z =
0.4000 0.2000
0.3000 0.5000
How about
D=[1 0;0 0;1 1;0 1]; X = [0.3 0.4 0.5 0.2];
Z(sub2ind([2 2], D(:, 1) + 1, D(:, 2) + 1)) = X;
Z = reshape(Z, 2, 2);
EDIT
Most of the overhead of sub2ind is, unfortunately, error checking. If you are comfortable that all your D values are in range, you can effectively inline the sub2ind operation. Here's an example:
ndx = M(:, 1) + 1;
ndx = ndx + M(:, 2) * 2;
Z2=[];
Z2(ndx) = X;
Z2 = reshape(Z2, 2, 2);
Using this code and the timing test in @johnnyfuego's answer, I get
Elapsed time is 0.154196 seconds. <--- johnny's
Elapsed time is 0.288680 seconds. <--- mine
Elapsed time is 0.143874 seconds.
So, better, but still not beating the other two. However, note that there is a break-even point here. If I change the setup code in the time test to
M = randi(2, 1000, 2) - 1;
X = rand(1, 1000);
That is, if I bump the number of values to write from 4 up to 1000, the time trial results in
Elapsed time is 3.650833 seconds. <--- johnny's
Elapsed time is 0.607361 seconds. <--- mine
Elapsed time is 0.872595 seconds.
EDIT #2
Here's how you would unroll a multidimensional sub2ind:
siz = [2 2 2 2 2 2 2];
offsets = cumprod(siz);
ndx = M(:, 1) + 1;
ndx = ndx + M(:, 2) * offsets(1);
ndx = ndx + M(:, 3) * offsets(2);
ndx = ndx + M(:, 4) * offsets(3);
ndx = ndx + M(:, 5) * offsets(4);
ndx = ndx + M(:, 6) * offsets(5);
ndx = ndx + M(:, 7) * offsets(6);
Z2=[];
Z2(ndx) = X;
Z2 = reshape(Z2, siz);
Using that with your updated time test I get:
Elapsed time is 43.754363 seconds.
Elapsed time is 1.045980 seconds.
Elapsed time is 0.689487 seconds.
So, still better than looping, but it looks like in this multidimensional case accumarray (Rafael's answer) wins. I'd consider awarding him the "accepted answer" points.
Thank you @rafael-monteiro & @SCFrench.
My original procedure was faster. I have pasted a benchmarking script below.
M=[1 0; 0 0; 1 1; 0 1];
X = [0.3 0.4 0.5 0.2];
nrep=50000;
%% My own code
tic
for A=1:nrep
    MN=M+1; % I know I can do this outside of the loop, but comparison with this seems more fair.
    Z1=zeros(size(X,2)/2,size(X,2)/2); % without pre-allocation it is twice as fast; I guess finding the size and the computation do not help here!
    for I=1:4
        Z1(MN(I,1),MN(I,2))=X(I);
    end
end
toc
%% SCFrench code
tic
for A=1:nrep
    Z2(sub2ind([2 2], M(:, 1) + 1, M(:, 2) + 1)) = X;
    Z2 = reshape(Z2, 2, 2);
end
toc
%% Rafael code
tic
for A=1:nrep
    Z3 = accumarray(M + 1, X, [2 2]);
end
toc
Elapsed time is 0.115488 seconds. % mine
Elapsed time is 1.082505 seconds. % SCFrench
Elapsed time is 0.282693 seconds. % rafael
EDIT:
Using larger data, the first implementation seems far slower.
alts=7;
M = dec2bin(0:2^alts-1)-'0';
X = rand(size(M,1),1);
nrep=50000;
tic
for A=1:nrep
    MN=M+1;
    for I=1:128
        Z1(MN(I,1),MN(I,2),MN(I,3),MN(I,4),MN(I,5),MN(I,6),MN(I,7))=X(I);
    end
end
toc
tic
for A=1:nrep
    Z2(sub2ind([2 2 2 2 2 2 2], M(:, 1) + 1, M(:, 2) + 1, M(:, 3) + 1, M(:, 4) + 1, M(:, 5) + 1, M(:, 6) + 1, M(:, 7) + 1)) = X;
    Z2 = reshape(Z2, [2 2 2 2 2 2 2]);
end
toc
tic
for A=1:nrep
    Z3 = accumarray(M + 1, X, [2 2 2 2 2 2 2]);
end
toc
Elapsed time is 33.390247 seconds. % Mine
Elapsed time is 4.280668 seconds. % SCFrench
Elapsed time is 0.629584 seconds. % Rafael

How to quickly get the array of multiplicities

What is the fastest way of taking an array A and outputting both unique(A) [i.e. the set of unique elements of A] and the multiplicity array, which contains in its i-th place the multiplicity in A of the i-th entry of unique(A)?
That's a mouthful, so here's an example. Given A=[1 1 3 1 4 5 3], I want:
unique(A)=[1 3 4 5]
mult = [3 2 1 1]
This can be done with a tedious for loop (sketched below), but I would like to know if there is a way to exploit the array nature of MATLAB.
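For reference, the tedious loop might look like this (a sketch of my own, not from the question):
uA = unique(A);
mult = zeros(size(uA));
for k = 1:numel(uA)
    mult(k) = sum(A == uA(k)); % count occurrences of each unique value
end
The answers below avoid this loop entirely.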
uA = unique(A);
mult = histc(A,uA);
Alternatively:
uA = unique(A);
mult = sum(bsxfun(@eq, uA(:).', A(:)));
Benchmarking
N = 100;
A = randi(N,1,2*N); %// size 1 x 2*N
%// Luis Mendo, first approach
tic
for iter = 1:1e3
    uA = unique(A);
    mult = histc(A,uA);
end
toc
%// Luis Mendo, second approach
tic
for iter = 1:1e3
    uA = unique(A);
    mult = sum(bsxfun(@eq, uA(:).', A(:)));
end
toc
%// chappjc
tic
for iter = 1:1e3
    [uA,~,ic] = unique(A); % uA(ic) == A
    mult = accumarray(ic.',1);
end
toc
Results with N = 100:
Elapsed time is 0.096206 seconds.
Elapsed time is 0.235686 seconds.
Elapsed time is 0.154150 seconds.
Results with N = 1000:
Elapsed time is 0.481456 seconds.
Elapsed time is 4.534572 seconds.
Elapsed time is 0.550606 seconds.
[uA,~,ic] = unique(A); % uA(ic) == A
mult = accumarray(ic.',1);
accumarray is very fast. Unfortunately, unique gets slow with 3 outputs.
Late addition:
uA = unique(A);
mult = nonzeros(accumarray(A(:),1,[],@sum,0,true))
S = sparse(A,1,1);     % duplicate subscripts are accumulated, so S(v) = count of v (assumes positive integer values)
[uA,~,mult] = find(S); % row indices of the nonzeros are the unique values, their entries the counts
I've found this elegant solution in an old Newsgroup thread.
Testing with the benchmark of Luis Mendo for N = 1000 :
Elapsed time is 0.228704 seconds. % histc
Elapsed time is 1.838388 seconds. % bsxfun
Elapsed time is 0.128791 seconds. % sparse
(On my machine, accumarray results in Error: Maximum variable size allowed by the program is exceeded.)

Mean of parts of an array in Octave

I have two arrays. One is a list of lengths of parts of the other. For example,
zarray = [1 2 3 4 5 6 7 8 9 10]
and
lengths = [1 3 2 1 3]
I want to average (take the mean) over parts of the first array, with part lengths given by the second. For this example, the result would be:
[mean([1]),mean([2,3,4]),mean([5,6]),mean([7]),mean([8,9,10])]
I am trying to avoid looping, for the sake of speed. I tried using mat2cell and cellfun as follows
zcell = mat2cell(zarray,[1],lengths);
zcellsum = cellfun('mean',zcell);
But the cellfun part is very slow. Is there a way to do this without looping or cellfun?
Here is a fully vectorized solution (no explicit for-loops, nor hidden loops via ARRAYFUN or CELLFUN). The idea is to use the extremely fast ACCUMARRAY function:
%# data
zarray = [1 2 3 4 5 6 7 8 9 10];
lengths = [1 3 2 1 3];
%# generate subscripts: 1 2 2 2 3 3 4 5 5 5
endLocs = cumsum(lengths(:));
subs = zeros(endLocs(end),1);
subs([1;endLocs(1:end-1)+1]) = 1;
subs = cumsum(subs);
%# mean of each part
means = accumarray(subs, zarray) ./ lengths(:)
The result in this case:
means =
1
3
5.5
7
9
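As a side note (my addition, assuming R2015a or newer): the subscript vector can also be generated directly with repelem instead of the cumsum trick above:
subs = repelem((1:numel(lengths)).', lengths(:)); %# 1 2 2 2 3 3 4 5 5 5
means = accumarray(subs, zarray(:)) ./ lengths(:)
The timings below use the original cumsum-based version.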
Speed test:
Consider the following comparison of the different methods. I am using the TIMEIT function by Steve Eddins:
function [t,v] = testMeans()
    %# generate test data
    [arr,len] = genData();

    %# define functions
    f1 = @() func1(arr,len);
    f2 = @() func2(arr,len);
    f3 = @() func3(arr,len);
    f4 = @() func4(arr,len);

    %# timeit
    t(1) = timeit( f1 );
    t(2) = timeit( f2 );
    t(3) = timeit( f3 );
    t(4) = timeit( f4 );

    %# return results to check their validity
    v{1} = f1();
    v{2} = f2();
    v{3} = f3();
    v{4} = f4();
end

function [arr,len] = genData()
    %#arr = [1 2 3 4 5 6 7 8 9 10];
    %#len = [1 3 2 1 3];
    numArr = 10000;  %# number of elements in array
    numParts = 500;  %# number of parts/regions
    arr = rand(1,numArr);
    len = zeros(1,numParts);
    len(1:end-1) = diff(sort( randperm(numArr,numParts) ));
    len(end) = numArr - sum(len);
end

function m = func1(arr, len)
    %# @Drodbar: for-loop
    idx = 1;
    N = length(len);
    m = zeros(1,N);
    for i=1:N
        m(i) = mean( arr(idx+(0:len(i)-1)) );
        idx = idx + len(i);
    end
end

function m = func2(arr, len)
    %# @user1073959: MAT2CELL+CELLFUN
    m = cellfun(@mean, mat2cell(arr, 1, len));
end

function m = func3(arr, len)
    %# @Drodbar: ARRAYFUN+CELLFUN
    idx = arrayfun(@(a,b) a-(0:b-1), cumsum(len), len, 'UniformOutput',false);
    m = cellfun(@(a) mean(arr(a)), idx);
end

function m = func4(arr, len)
    %# @Amro: ACCUMARRAY
    endLocs = cumsum(len(:));
    subs = zeros(endLocs(end),1);
    subs([1;endLocs(1:end-1)+1]) = 1;
    subs = cumsum(subs);
    m = accumarray(subs, arr) ./ len(:);
    if isrow(len)
        m = m';
    end
end
Below are the timings. Tests were performed on a WinXP 32-bit machine with MATLAB R2012a. My method is an order of magnitude faster than all other methods. For-loop is second best.
>> [t,v] = testMeans();
>> t
t =
    0.013098    0.013074    0.022407    0.00031807
       |           |           |            \_________ @Amro: ACCUMARRAY (!)
       |           |           \______________________ @Drodbar: ARRAYFUN+CELLFUN
       |           \__________________________________ @user1073959: MAT2CELL+CELLFUN
       \______________________________________________ @Drodbar: FOR-loop
Furthermore, all results are correct and equal; the differences are on the order of eps, the machine precision (caused by different ways of accumulating round-off errors), and can therefore be ignored:
%#assert( isequal(v{:}) )
>> maxErr = max(max( diff(vertcat(v{:})) ))
maxErr =
3.3307e-16
Here is a solution using arrayfun and cellfun:
zarray = [1 2 3 4 5 6 7 8 9 10];
lengths = [1 3 2 1 3];
% Generate the indexes of the elements contained within each length-specified
% subset. idx would be {[1], [4, 3, 2], [6, 5], [7], [10, 9, 8]} in this case
idx = arrayfun(@(a,b) a-(0:b-1), cumsum(lengths), lengths, 'UniformOutput', false);
means = cellfun( @(a) mean(zarray(a)), idx);
This is your desired output:
means =
1.0000 3.0000 5.5000 7.0000 9.0000
Following @tmpearce's comment, I did a quick performance comparison between the above solution, from which I created a function called subsetMeans1,
function means = subsetMeans1( zarray, lengths)
    % Generate the indexes of the elements contained within each length-specified
    % subset. idx would be {[1], [4, 3, 2], [6, 5], [7], [10, 9, 8]} in this case
    idx = arrayfun(@(a,b) a-(0:b-1), cumsum(lengths), lengths, 'UniformOutput', false);
    means = cellfun( @(a) mean(zarray(a)), idx);
and a simple for loop alternative, function subsetMeans2.
function means = subsetMeans2( zarray, lengths)
    % Method based on a single loop
    idx = 1;
    N = length(lengths);
    means = zeros( 1, N);
    for i = 1:N
        means(i) = mean( zarray(idx+(0:lengths(i)-1)) );
        idx = idx+lengths(i);
    end
Using the following test script, based on TIMEIT, which checks performance while varying the number of elements in the input vector and the subset sizes:
% Generate some data for the performance test
% Total number of elements in the vector to test
nVec = 100000;
% Max number of elements per subset
nSubset = 5;
% Data generation aux variables
lengthsGen = randi( nSubset, 1, nVec);
accumLen = cumsum(lengthsGen);
maxIdx = find( accumLen < nVec, 1, 'last' );
% % Original test data
% zarray = [1 2 3 4 5 6 7 8 9 10];
% lengths = [1 3 2 1 3];
% Vector to test
zarray = 1:nVec;
lengths = [ lengthsGen(1:maxIdx) nVec-accumLen(maxIdx)];
% Double check that the lengths sum to nVec
assert( sum(lengths) == nVec)
t1(1) = timeit(@() subsetMeans1( zarray, lengths));
t1(2) = timeit(@() subsetMeans2( zarray, lengths));
fprintf('Time spent subsetMeans1: %f\n',t1(1));
fprintf('Time spent subsetMeans2: %f\n',t1(2));
It turns out that the non-vectorised version, without arrayfun and cellfun, is faster, presumably due to the extra overhead of those functions:
Time spent subsetMeans1: 2.082457
Time spent subsetMeans2: 1.278473
