Matlab parfor slice correctly - arrays

I have two nested loops which I want to parallelize.
n=100;
x=rand(1,n);
m=5;
xx=rand(1,m);
r = zeros(1,m);
for i=1:n
q = ones(1,m);
for j=1:n
q = q .* (xx-x(j))/(x(i)-x(j));
end
r = r + q;
end
In order to prepare this function for parallelization, I changed local variables to global ones.
n=100;
x=rand(1,n);
m=5;
xx=rand(1,m);
r = ones(n,m);
for i=1:n
for j=1:n
r(i,:) = r(i,:) .* (xx-x(j))/(x(i)-x(j));
end
end
r = sum(r,1);
Instead of transforming a whole vector at once, let's try it with only one scalar, using the simplest expression in x that depends on both i and j. I also removed the sum at the end; we can add it back later.
n=100;
x=rand(1,n);
r = ones(n,1);
for i=1:n
for j=1:n
y = x(i)+x(j);
r(i) = r(i) * y;
end
end
The code above is the example function I want to parallelize.
The inner loop always accesses the same element r(i) during one iteration of the outer loop i. This access is a read-modify-write (*=), but the order of the writes doesn't matter for this operation.
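As an aside, this order-independence is exactly what MATLAB's parfor reduction variables rely on; a minimal sketch of my own, separate from the attempts below:
r = 1;
parfor i = 1:4
r = r * (1 + i); % parfor recognizes r as a multiplicative reduction variable
end
The catch is that a reduction must target the whole variable; an indexed element such as r(i) does not qualify.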
Since nested parfor loops are not allowed in Matlab, I tried to pack everything in one parfor loop.
n=100;
x=rand(1,n);
r = ones(n,1);
parfor k=1:(n*n)
%i = floor((k-1)/n)+1; % outer loop
%j = mod(k-1,n)+1; % inner loop
[j,i] = ind2sub([n,n],k);
y = x(i)+x(j);
r(i) = r(i) * y; % ERROR here
end
Since the indices are calculated, MATLAB still doesn't know how to slice r.
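For reference, here is a minimal contrast (my own illustration) between an index parfor can slice and one it cannot:
n = 4;
r = zeros(n,1);
parfor k = 1:n
r(k) = k^2; % OK: r is indexed directly by the loop variable, so it is sliced
end
% By contrast, indexing r with a value computed from k, e.g.
% idx = mod(k-1,n)+1; r(idx) = k;
% inside the parfor is rejected, because MATLAB cannot prove the writes are disjoint.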
So, I decided to move the multiplication operation outside and use linear indices.
n=100;
x=rand(1,n);
r = ones(n,n);
parfor k=1:(n*n)
[j,i] = ind2sub([n,n],k);
y = x(i)+x(j);
r(k) = y;
end
r = prod(r,1);
r = squeeze(r); % remove singleton dimensions
While this works when the inner loop produces scalar values, it doesn't work for vectors in the inner loop, since the indices must again be calculated.
n=100;
x=rand(1,n);
m=5;
r = ones(n,n,m);
parfor k=1:(n*n)
[j,i] = ind2sub([n,n],k);
y = x(i)+x(j);
r((k-1)*m+1:k*m) = y.*(1:m); % ERROR here
end
r = prod(r,1);
r = squeeze(r); % remove singleton dimensions
It does work, though, when I reshape the array.
n=100;
x=rand(1,n);
m=5;
r = ones(n*n,m);
parfor k=1:(n*n)
[j,i] = ind2sub([n,n],k);
y = x(i)+x(j);
r(k,:) = y.*(1:m);
end
r = reshape(r,n,n,m);
r = prod(r,2);
r = squeeze(r); % remove singleton dimensions
This way, I can transform a vector xx to another vector r.
n=100;
x=rand(1,n);
m=5;
xx=rand(1,m);
r = ones(n*n,m);
parfor k=1:(n*n)
[j,i] = ind2sub([n,n],k);
y = x(i)+x(j);
r(k,:) = y.*xx;
end
r = reshape(r,n,n,m);
r = prod(r,2);
r = sum(r,1);
r = reshape(r,size(xx)); % reshape output vector to input vector
For my parallel solution, I need an n*n*m array instead of an n*m array, which seems quite inefficient.
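To put numbers on it: with n=100 and m=5, the n*n*m array holds 50,000 doubles (about 400 KB), versus 500 doubles (about 4 KB) for an n*m array; the memory grows by a factor of n.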
Is there a better way of doing what I want?
What are the advantages of other ways (prettier code, less CPU, less RAM, ...)?
UPDATE
While trying to simplify the task and reduce it to a minimal working example of the problem, I omitted the check for i~=j to make it easier, although this results in an all-NaN result. Further, with this check added, the nature of the code produces an all-ones result. For the code to make sense, the factors are just weights for another vector z.
The more elaborate problem looks as follows:
n=100;
x=rand(1,n);
z=rand(1,n);
m=5;
xx=rand(1,m);
r = zeros(1,m);
for i=1:n
q = ones(1,m);
for j=1:n
if i~=j
q = q .* (xx-x(j))/(x(i)-x(j));
end
end
r = r + z(i) .* q;
end

This problem does not need any parallel for loop to execute. One problem is that x(i)-x(j) is redundantly calculated a lot of times, which is inefficient. The approach suggested here calculates every number exactly once and vectorizes the operations for each element of xx. Since xx is by far the shortest vector, the computation is almost completely vectorized. If you were to vectorize the last loop as well, it would probably just act like a hidden for loop, use much more memory, and make the code more complicated (3D matrices and so on). I took the liberty of switching the minus to a plus in the denominator, just for testing; the minus would generate NaN for all numbers. The second alternative is faster, about 10 times for n=10000, but I suggest you try a more elaborate benchmark.
function test()
% Initiate variables
n=100;
x=rand(1,n);
m=5;
xx=rand(1,m);
tic;
% Alternative 1
r = zeros(1,m);
for i=1:n
q = ones(1,m);
for j=1:n
q = q .* (xx-x(j))/(x(i)+x(j));
end
r = r + q;
end
toc;
tic;
% Alternative 2
xden = bsxfun(@plus, x, x.'); % calculate the denominator x(i)+x(j) for all pairs
xnom = repmat(x,n,1); % calculate the numerator
xfull = (xnom./xden).'; % calculate the right term on the rhs
r2 = zeros(1,m); % pre-allocate the result
for k = 1:m
tmp = prod(xx(k)./xden - xfull); % split the fraction into 2 calculations
r2(k) = sum(tmp); % accumulate over i, i.e. "r = r + q"
end
toc;
disp(r);
disp(r2);
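For a timing less sensitive to warm-up and timer noise than a single tic/toc pair, the two alternatives could also be factored into functions and measured with timeit; a sketch, where alt1 and alt2 are hypothetical wrappers around Alternative 1 and Alternative 2 above:
t1 = timeit(@() alt1(x, xx)); % alt1/alt2 are assumed wrappers, e.g. r = alt1(x, xx)
t2 = timeit(@() alt2(x, xx));
fprintf('Alternative 1: %.4f s, Alternative 2: %.4f s\n', t1, t2);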
Just a note at the end: Alternative 2 is faster, but it is also memory expensive, so in case of memory issues a loop is preferable. Furthermore, there is no need for global variables for parallelization; if you think you need them, you should probably look over your design (though if the code is short, it is not critical).


Minimize (firstA_max - firstA_min) + (secondB_max - secondB_min)

Given n pairs of integers, split them into two subsets A and B to minimize (maximum difference among first values of A) + (maximum difference among second values of B).
Example : n = 4
{0, 0}; {5;5}; {1; 1}; {3; 4}
A = {{0; 0}; {1; 1}}
B = {{5; 5}; {3; 4}}
(maximum difference among first values of A, maximum difference among second values of B).
(maximum difference among first values of A) = fA_max - fA_min = 1 - 0 = 1
(maximum difference among second values of B) = sB_max - sB_min = 5 - 4 = 1
Therefore, the answer is 1 + 1 = 2, and this is the best split.
Obviously, the maximum difference among the values equals (maximum value - minimum value). Hence, what we need to do is find the minimum of (fA_max - fA_min) + (sB_max - sB_min).
Suppose the given array is arr[], the first value is arr[].first and the second value is arr[].second.
I think it is quite easy to solve this in quadratic complexity. You just need to sort the array by the first value. Then all the elements in subset A should be picked consecutively in the sorted array. So, you can loop over all ranges [L;R] of the sorted array. For each range, try to put all elements in that range into subset A and all the remaining ones into subset B.
For more detail, this is my C++ code
int calc(pair<int, int> a[], int n){
int m = 1e9, M = -1e9, res = 2e9; //m and M track min and max of the second values among elements before index l (the prefix that goes into B)
for (int l = 1; l <= n; l++){
int g = m, G = M; //g and G are min and max of all the second values in subset B
for(int r = n; r >= l; r--) {
if (r - l + 1 < n){
res = min(res, a[r].first - a[l].first + G - g);
}
g = min(g, a[r].second);
G = max(G, a[r].second);
}
m = min(m, a[l].second);
M = max(M, a[l].second);
}
return res;
}
Now, I want to improve my algorithm to loglinear complexity. Of course, sort the array by the first value. After that, if I fix fA_min = a[i].first, then as the index increases, fA_max increases while (sB_max - sB_min) decreases.
But I am still stuck here: is there any way to solve this problem in loglinear complexity?
The following approach is an attempt to escape the n^2, using an argmin list for the second element of the tuples (let's say the y-part), where the points are sorted by x.
One observation is that there is an optimal solution where A includes index argmin[0] or argmin[n-1] or both.
In get_best_interval_min_max we focus first on including argmin[0], then the next smallest element on y, and so on. Then we do the same from the max element.
We get two dictionaries {(i,j):(profit, idx)}, telling us how much we gain in y when including points[i:j+1] in A, towards the min or the max on y. idx is the index into the argmin array.
Calculate the objective for each dict, assuming the max/min of y is not in A.
Combine the results of both dictionaries: for (i1,j1): (v1, idx1) and (i2,j2): (v2, idx2) the result is j2 - i1 + max_y - min_y - v1 - v2.
Constraint: idx1 < idx2, because the index ranges in the argmin array cannot intersect; otherwise some profit in y might be counted twice.
On average the dictionaries (dmin, dmax) are smaller than n, but in the worst case, when x and y correlate ([(i,i) for i in range(n)]), they have exactly n entries and we do not win any time. Anyhow, on random instances this approach is much faster. Maybe someone can improve upon this.
import numpy as np
from random import randrange
import time
def get_best_interval_min_max(points):# sorted input according to x dim
L = len(points)
argmin_b = np.argsort([p[1] for p in points])
b_min,b_max = points[argmin_b[0]][1], points[argmin_b[L-1]][1]
arg = [argmin_b[0],argmin_b[0]]
res_min = dict()
for i in range(1,L):
res_min[tuple(arg)] = points[argmin_b[i]][1] - points[argmin_b[0]][1],i # the profit in b towards min
if arg[0] > argmin_b[i]: arg[0]=argmin_b[i]
elif arg[1] < argmin_b[i]: arg[1]=argmin_b[i]
arg = [argmin_b[L-1],argmin_b[L-1]]
res_max = dict()
for i in range(L-2,-1,-1):
res_max[tuple(arg)] = points[argmin_b[L-1]][1]-points[argmin_b[i]][1],i # the profit in b towards max
if arg[0]>argmin_b[i]: arg[0]=argmin_b[i]
elif arg[1]<argmin_b[i]: arg[1]=argmin_b[i]
# return the two dicts, difference along y,
return res_min, res_max, b_max-b_min
def argmin_algo(points):
# return the objective value, sets A and B, and the interval for A in points.
points.sort()
# get the profits for different intervals on the sorted array for max and min
dmin, dmax, y_diff = get_best_interval_min_max(points)
key = [None,None]
res_min = 2e9
# the best result when only the min/max b value is includes in A
for d in [dmin,dmax]:
for k,(v,i) in d.items():
res = points[k[1]][0]-points[k[0]][0] + y_diff - v
if res < res_min:
key = k
res_min = res
# combine the results for max and min.
for k1,(v1,i) in dmin.items():
for k2,(v2,j) in dmax.items():
if i > j: break # their argmin_b indices can not intersect!
idx_l, idx_h = min(k1[0], k2[0]), max(k1[1],k2[1]) # get index low and index high for the combination
res = points[idx_h][0]-points[idx_l][0] -v1 -v2 + y_diff
if res < res_min:
key = (idx_l, idx_h) # new merged interval
res_min = res
return res_min, points[key[0]:key[1]+1], points[:key[0]]+points[key[1]+1:], key
def quadratic_algorithm(points):
points.sort()
m, M, res = 1e9, -1e9, 2e9
idx = (0,0)
for l in range(len(points)):
g, G = m, M
for r in range(len(points)-1,l-1,-1):
if r-l+1 < len(points):
res_n = points[r][0] - points[l][0] + G - g
if res_n < res:
res = res_n
idx = (l,r)
g = min(g, points[r][1])
G = max(G, points[r][1])
m = min(m, points[l][1])
M = max(M, points[l][1])
return res, points[idx[0]:idx[1]+1], points[:idx[0]]+points[idx[1]+1:], idx
# let's try it and compare running times to the quadratic_algorithm
# get some "random" points
c1=0
c2=0
for i in range(100):
points = [(randrange(100), randrange(100)) for i in range(1,200)]
points.sort() # sorted by x dimension
s = time.time()
r1 = argmin_algo(points)
e1 = time.time()
r2 = quadratic_algorithm(points)
e2 = time.time()
c1 += (e1-s)
c2 += (e2-e1)
if not r1[0] == r2[0]:
print(r1,r2)
raise Exception("Error, results are not equal")
print("time of argmin_algo", c1, "time of quadratic_algorithm",c2)
UPDATE: @Luka proved the algorithm described in this answer is not exact. But I will keep it here because it's a good performance heuristic and opens the way to many probabilistic methods.
I will describe a loglinear algorithm. I couldn't find a counterexample, but I also couldn't find a proof :/
Let set A be ordered by the first element and set B be ordered by the second element. They are initially empty. Take floor(n/2) random points of your point set and put them in set A. Put the remaining points in set B. Define this as a partition.
Let's call a partition stable if you can't take an element of set A, put it in B and decrease the objective function and if you can't take an element of set B, put it in A and decrease the objective function. Otherwise, let's call the partition unstable.
For an unstable partition, the only moves that are interesting are the ones that take the first or the last element of A and move to B or take the first or the last element of B and move to A. So, we can find all interesting moves for a given unstable partition in O(1). If an interesting move decreases the objective function, do it. Go like that until the partition becomes stable. I conjecture that it takes at most O(n) moves for the partition to become stable. I also conjecture that at the moment the partition becomes stable, you will have a solution.

Array subsetting in Julia

With the Julia Language, I defined a function to sample points uniformly inside the sphere of radius 3.14 using rejection sampling as follows:
function spherical_sample(N::Int64)
# generate N points uniformly distributed inside sphere
# using rejection sampling:
points = pi*(2*rand(5*N,3).-1.0)
ind = sum(points.^2,dims=2) .<= pi^2
## ideally I wouldn't have to do this:
ind_ = dropdims(ind,dims=2)
return points[ind_,:][1:N,:]
end
I found a hack for subsetting arrays:
ind = sum(points.^2,dims=2) .<= pi^2
## ideally I wouldn't have to do this:
ind_ = dropdims(ind,dims=2)
But, in principle array indexing should be a one-liner. How could I do this better in Julia?
The problem is that you are creating a 2-dimensional index vector. You can avoid it by using eachrow:
ind = sum.(eachrow(points.^2)) .<= pi^2
So that your full answer would be:
function spherical_sample(N::Int64)
points = pi*(2*rand(5*N,3).-1.0)
ind = sum.(eachrow(points.^2)) .<= pi^2
return points[ind,:][1:N,:]
end
Here is a one-liner:
points[(sum(points.^2,dims=2) .<= pi^2)[:],:][1:N, :]
Note that [:] is dropping a dimension so the BitArray can be used for indexing.
This does not answer your question directly (as you already got two suggestions), but I thought I would instead hint at how you could implement the whole procedure differently if you want it to be efficient.
The first point is to avoid generating 5*N rows of data up front; the problem is that you are not guaranteed to get N valid samples out of them. The probability that a sample is valid is the sphere-to-cube volume ratio, (4/3)*pi*pi^3 / (2*pi)^3 = pi/6, roughly 52%, so it is possible (if unlikely) that there will not be enough points to choose from and the [1:N, :] selection will throw an error.
Below is the code I would use that avoids this problem:
using Random # needed for rand!
function spherical_sample(N::Integer) # no need to require Int64 only here
points = 2 .* pi .* rand(N, 3) .- pi # all operations are vectorized to avoid excessive allocations
while N > 0 # we will run the code until we have N valid rows
v = @view points[N, :] # use a view to avoid allocating
if sum(x -> x^2, v) <= pi^2 # sum accepts a transformation function as its first argument
N -= 1 # row is valid - move to the previous one
else
rand!(v) # row is invalid - resample it in place
@. v = 2 * pi * v - pi # again - do the computation in place via broadcasting
end
end
return points
end
This one is pretty fast, and uses StaticArrays. You can probably also implement something similar with ordinary tuples:
using StaticArrays
function sphsample(N)
T = SVector{3, Float64}
v = Vector{T}(undef, N)
n = 1
while n <= N
p = rand(T) .- 0.5
@inbounds v[n] = p .* 2π
n += (sum(abs2, p) <= 0.25)
end
return v
end
On my laptop it is ~9x faster than the solution with views.

How to store value of inner for loop in nested for loop in an array in Matlab?

pvec = 1:3;
for i = 1:3
p=pvec(i);
for m = 1:p
erfun=erfc(5/(2*sqrt(p-m)));
suma(m) = sum(erfun)
end
end
I want to save the sum of all values of erfun for every p, i.e. I want to have 3 values in the final array, where each value is the sum of all the erfun values for one p.
Similar questions have been addressed, but I could not apply them to my case.
Minimal fix method (to your own code)
The following modification to your code will yield your requested results
suma = zeros(3,1);
pvec = 1:3;
for i = 1:3
p=pvec(i);
for m = 1:p
erfun=erfc(5/(2*sqrt(p-m)));
suma(i) = suma(i) + erfun; %// <-- modified here
end
end
Where I've also included suma = zeros(3,1), which I assume you also have in your code (though it is not shown in your question); this pre-allocates suma with sufficient entries.
Alternative method (arrayfun)
Another solution, you can make use of the arrayfun command to get rid of the inner for loop:
suma = zeros(1,3);
pvec = 1:3;
for i = 1:3
p=pvec(i);
suma(i) = sum(arrayfun(@(x) erfc(5/(2*sqrt(p-x))), 1:p));
end
Alternative method #2 (arrayfun)
An even more condensed solution, including also the purpose of the outer for loop in your arrayfun call:
suma = arrayfun(@(x) ...
sum(erfc(5./(2*sqrt(kron(x, ones(1,x-1)) - (1:(x-1)))))), pvec)
Here we've made use of the kron command, which the arrayfun call above evaluates as follows
kron(1, []) = [] %// empty array
kron(2, [1]) = 2
kron(3, [1 1]) = [3 3]
and used the fact that the erfc term for 1/sqrt(0) is always 0 (since 5/(2*sqrt(0)) = Inf and erfc(Inf) = 0), hence we needn't evaluate the case m=p as it yields no addition to our sum.
Result
All of the above methods yield the result
suma =
0
0.0004
0.0128

How to vectorize the antenna arrayfactor expression in matlab

I have the antenna array factor expression:
AF(phi,theta) = sum over p=1..M, q=1..N of cmpWeights(q,p) * exp((2*pi*1j/lambda)*(xm(p)*sin(theta)*cos(phi) + yn(q)*sin(theta)*sin(phi)))
I have coded the array factor expression as given below:
lambda = 1;
M = 100;N = 200; %an M x N array
dx = 0.3*lambda; %inter-element spacing in x direction
m = 1:M;
xm = (m - 0.5*(M+1))*dx; %element positions in x direction
dy = 0.4*lambda;
n = 1:N;
yn = (n - 0.5*(N+1))*dy;
thetaCount = 360; % no of theta values
thetaRes = 2*pi/thetaCount; % theta resolution
thetas = 0:thetaRes:2*pi-thetaRes; % theta values
phiCount = 180;
phiRes = pi/phiCount;
phis = -pi/2:phiRes:pi/2-phiRes;
cmpWeights = rand(N,M); %complex Weights
AF = zeros(phiCount,thetaCount); %Array factor
tic
for i = 1:phiCount
for j = 1:thetaCount
for p = 1:M
for q = 1:N
AF(i,j) = AF(i,j) + cmpWeights(q,p)*exp((2*pi*1j/lambda)*(xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i))));
end
end
end
end
How can I vectorize the code for calculating the Array Factor (AF).
I want the line:
AF(i,j) = AF(i,j) + cmpWeights(q,p)*exp((2*pi*1j/lambda)*(xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i))));
to be written in vectorized form (by modifying the for loop).
Approach #1: Full-throttle
The innermost nested loop generates this every iteration - cmpWeights(q,p)*exp((2*pi*1j/lambda)*(xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i)))) - and these terms are summed up iteratively to give us the final output in AF.
Let's call the exp(...) part B. Now, B basically has two parts: one is the scalar (2*pi*1j/lambda) and the other is
(xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i))), which is formed from the variables that depend on
the four iterators used in the original loopy version - i,j,p,q. Let's call this other part C for easy reference later on.
Let's put all that into perspective:
Loopy version had AF(i,j) = AF(i,j) + cmpWeights(q,p)*exp((2*pi*1j/lambda)*(xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i)))), which is now equivalent to AF(i,j) = AF(i,j) + cmpWeights(q,p)*B, where B = exp((2*pi*1j/lambda)*(xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i)))).
B could be simplified to B = exp((2*pi*1j/lambda)* C), where C = (xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i))).
C would depend on the iterators - i,j,p,q.
So, after porting it to a vectorized form, it would end up as this -
%// 1) Define vectors corresponding to iterators used in the loopy version
I = 1:phiCount;
J = 1:thetaCount;
P = 1:M;
Q = 1:N;
%// 2) Create vectorized version of C using all four vector iterators
mult1 = bsxfun(@times,sin(thetas(J)),cos(phis(I)).');
mult2 = bsxfun(@times,sin(thetas(J)),sin(phis(I)).');
mult1_xm = bsxfun(@times,mult1(:),permute(xm,[1 3 2]));
mult2_yn = bsxfun(@times,mult2(:),yn);
C_vect = bsxfun(@plus,mult1_xm,mult2_yn);
%// 3) Create vectorized version of B using vectorized C
B_vect = reshape(exp((2*pi*1j/lambda)*C_vect),phiCount*thetaCount,[]);
%// 4) Final output as matrix multiplication between the vectorized version of B and the weights
AF_vect = reshape(B_vect*cmpWeights(:),phiCount,thetaCount);
Approach #2: Less-memory intensive
This second approach reduces the memory traffic, using the product rule for exponentials - exp(A+B) = exp(A)*exp(B).
Now, the original loopy version was this -
AF(i,j) = AF(i,j) + cmpWeights(q,p)*exp((2*pi*1j/lambda)*...
(xm(p)*sin(thetas(j))*cos(phis(i)) + yn(q)*sin(thetas(j))*sin(phis(i))))
So, after using that property, we would end up with something like this -
K = (2*pi*1j/lambda);
part1 = K*xm(p)*sin(thetas(j))*cos(phis(i));
part2 = K*yn(q)*sin(thetas(j))*sin(phis(i));
AF(i,j) = AF(i,j) + cmpWeights(q,p)*exp(part1)*exp(part2);
Thus, the relevant vectorized approach would become something like this -
%// 1) Define vectors corresponding to iterators used in the loopy version
I = 1:phiCount;
J = 1:thetaCount;
P = 1:M;
Q = 1:N;
%// 2) Define the constant used at the start of EXP() call
K = (2*pi*1j/lambda);
%// 3) Perform the sine-cosine operations part1 & part2 in vectorized manners
mult1 = K*bsxfun(@times,sin(thetas(J)),cos(phis(I)).');
mult2 = K*bsxfun(@times,sin(thetas(J)),sin(phis(I)).');
%// Perform exp(part1) & exp(part2) in vectorized manners
part1_vect = exp(bsxfun(@times,mult1(:),xm));
part2_vect = exp(bsxfun(@times,mult2(:),yn));
%// Perform multiplications with cmpWeights for final output
AF = reshape(sum((part1_vect*cmpWeights.').*part2_vect,2),phiCount,[])
Quick Benchmarking
Here are the runtimes with the input data listed in the question for the original loopy approach and proposed approach #2 -
---------------------------- With Original Approach
Elapsed time is 358.081507 seconds.
---------------------------- With Proposed Approach #2
Elapsed time is 0.405038 seconds.
The runtimes suggest a crazy performance improvement with Approach #2!
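It is also worth confirming that the vectorized results match the loopy output; a quick check of my own, comparing the loopy AF with Approach #1's AF_vect computed from the same inputs:
max_err = max(abs(AF(:) - AF_vect(:))) % should be tiny, on the order of floating-point roundoff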
The basic trick is to figure out which things are constant and which things depend on the subscript terms - and are therefore matrix terms.
Within the sum:
C(n,m) is a matrix
2π/λ is a constant
sin(θ)cos(φ) is a constant
x(m) and y(n) are vectors
So the two things I would do are:
Expand the xm and yn into matrices using meshgrid()
Take all the constant term stuff outside the loop.
Like this:
...
piFactor = 2 * pi * 1j / lambda;
[xgrid, ygrid] = meshgrid(xm, yn); % xgrid and ygrid will be size (N, M)
for i = 1:phiCount
for j = 1:thetaCount
xFactor = sin(thetas(j)) * cos(phis(i));
yFactor = sin(thetas(j)) * sin(phis(i));
expFactor = exp(piFactor * (xgrid * xFactor + ygrid * yFactor)); % expFactor is size (N, M)
elements = cmpWeights .* expFactor; % elements of sum, size (N, M)
AF(i, j) = AF(i, j) + sum(elements(:)); % sum over all N*M elements
end
end
You could probably figure out how to vectorise the outer loop too, but hopefully that gives you a starting point.

Sum of bi-dimensional array indices from single index

I am trying to create a parallel-for loop in MATLAB to fill an mxm matrix as follows:
parfor i = 1 : m^2
A(i) = sum( x .^ %Sum of indices% );
end
If I was doing it without parfor I would simply do:
for i = 1 : m
for j = 1 : m
A(i,j) = sum( x.^(i+j) );
end
end
Unfortunately, however, parfor does not allow other indices within its body, and therefore I cannot do what I'd like to do.
I have tried extracting the sum of the indices by using ceil(i/3) + mod(i,3); however, this doesn't work, because at i=3 it gives me 1 instead of the 4 I want.
Does anyone have a better way of extracting the sum of the indices?
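For what it's worth, the closed form follows from MATLAB's column-major layout; a small sketch of my own:
m = 3;
k = 1:m^2;
s = mod(k-1,m) + floor((k-1)/m) + 2; % row index plus column index
[ii,jj] = ind2sub([m m], k); % cross-check against ind2sub
isequal(s, ii+jj) % true; in particular s(3) is 4, as desired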
r=sum( bsxfun(@power, x(:), m+1:2*m ) ,1); % sums for exponents m+1..2*m
c=sum( bsxfun(@power, x(:), m+1:-1:2 ) ,1); % sums for exponents m+1 down to 2
A=flipud(toeplitz(c,r)); % constant along anti-diagonals: A(i,j) = sum(x.^(i+j))
[I,J]=ndgrid(1:m);
K=I+J;
A=zeros(m);
parfor i=1:m^2
A(i)=sum(x.^(K(i)));
end
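A variation of my own on the same idea: since A(i) is indexed by the loop variable itself it is validly sliced, so the subscripts can also be recovered inside the loop, without precomputing K:
A = zeros(m);
parfor i = 1:m^2
[ri,ci] = ind2sub([m m], i); % recover both subscripts from the linear index
A(i) = sum(x.^(ri+ci)); % A is sliced by the loop variable i
end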
Here is another vectorized variation:
[I,J] = ndgrid(1:m, 1:m);
K = I + J;
A = reshape(sum(bsxfun(@power, x(:), K(:).')), [m m]);
