I am new to Julia, and I am trying to take the irfft of B, which is a 3-D array of size (n/2, n, n), where B = rfft(A). However, irfft in Julia requires an additional input d for the size of the transformed real array, and I'm unsure what to put. I tried n and n/2, but neither seemed to work as expected when I printed the resulting matrix out.
EDIT: I should've lowered my dimensions to check if everything was working; it turns out using d = n is fine. Thanks to everyone who answered!
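For reference, a minimal round-trip sketch of that conclusion (the sizes here are assumptions: A is a real n×n×n array, so rfft(A) has first dimension n ÷ 2 + 1 and d = n recovers it):

using FFTW
n = 16
A = rand(n, n, n)
B = rfft(A)          # size (n ÷ 2 + 1, n, n)
A2 = irfft(B, n)     # d = n: the length of the first (transformed) dimension of A
A2 ≈ A               # true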
Check out this discussion. Presumably more than one value of d will be accepted, but it may or may not give you back what you want.
This should work:
using FFTW

function test(n = 16)
    a = rand(n ÷ 2, n, n)     # real array; its first dimension has length n ÷ 2
    f = rfft(a)               # first dimension becomes (n ÷ 2) ÷ 2 + 1
    @show irfft(f, n ÷ 2)     # d is the length of the first dimension of a
end

test()
I have a problem with the runtime of my code. The only part that is really slow is my for-loop over every matrix element in a (144, 208) array.
I have to check every element for a condition and, if it is fulfilled, perform several actions such as shifting another (144, 208) array and adding it to an existing one.
Is this unavoidable, or is my implementation just too beginner-like?
Here is my code:
import numpy as np
from scipy.ndimage import shift   # assumption: the shift() used below comes from scipy.ndimage

# With this code block I am loading a specific image into Python and
# binarizing it
g = Initialization()
b_init = g.initialize_grid(".\\geometries\\1.png")

# This function expands the matrix m_sp, which I load in as a CSV file
def expand_blockavg(x, h, w):
    m, n = x.shape
    return np.broadcast_to((x / float(h * w))[:, None, :, None], (m, h, n, w)).reshape(m * h, -1)

m_adapt = expand_blockavg(m_sp, 16, 16) / 256
# This is my actual calculation block
for index, x in np.ndenumerate(b_init):
    if x == 1:
        a = np.asarray(index)
        y = np.subtract(a, index_default)
        m_shift = shift(m_adapt, (y[0], y[1]), cval=0)
        b = np.add(m_shift, b)
So, the last block (the calculation) is what takes so long. I know that the loop has to check about 30k elements, but I thought that with NumPy it would be faster.
Can someone tell me if there's potential for optimization, or do I have to live with the fact that it'll take this long?
Thanks
Iteration in Python is very slow compared to vectorized NumPy operations.
An immediate optimization is to iterate only over the indices where the matrix is 1, rather than checking every index. Do this with:
indices = np.argwhere(b_init == 1)
for a in indices:
    y = np.array(a) - index_default
    m_shift = shift(m_adapt, y[:2], cval=0)
    b += m_shift
Not knowing the details of shift, it's hard to say whether you can vectorize that as well. I replaced the function calls with equivalent operators, which should be slightly faster; np.add and friends are mostly useful when the operation is being selected programmatically.
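If shift really is just an integer translation with zero fill (for example scipy.ndimage.shift with cval=0), the whole shift-and-add loop is mathematically one 2-D convolution of the 0/1 mask with m_adapt, cropped at index_default. A sketch with made-up stand-in arrays (the shapes and index_default below are assumptions); it should reproduce the loop's result up to interpolation and floating-point differences:

import numpy as np
from scipy.signal import fftconvolve

# Stand-ins for the arrays in the question (shapes are assumptions)
rng = np.random.default_rng(0)
b_init = (rng.random((144, 208)) < 0.05).astype(int)   # binary geometry image
m_adapt = rng.random((144, 208))                        # block-expanded matrix
b = np.zeros((144, 208))
index_default = (72, 104)                               # assumed reference index

# Each loop iteration adds m_adapt shifted by (a - index_default) for one
# active cell a; summed over all active cells, this is one full convolution
# of the 0/1 mask with m_adapt, sliced starting at index_default.
mask = (b_init == 1).astype(float)
full = fftconvolve(mask, m_adapt, mode="full")
dy, dx = index_default
h, w = b_init.shape
b += full[dy:dy + h, dx:dx + w]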
Assume we have a 3-dimensional array F and a 2-dimensional matrix S.
First I find a matrix Y, which is F multiplied by S. Then I try to find an estimate of F (let's call it F_est) from Y as a sanity check in my code.
Can anyone see a flaw in the logic? I don't understand why F_est is not exactly F.
F = randn(2,4,600);
S = randn(4,600);
for i = 1:size(F,1)
    for j = 1:size(F,2)
        for k = 1:size(F,3)
            Y(i,k) = F(i,j,k) * S(j,k);
        end
    end
end
for i = 1:size(F,1)
    for j = 1:size(F,2)
        for k = 1:size(F,3)
            F_est(i,j,k) = Y(i,k) / S(j,k);
        end
    end
end
Then I try to see whether F_est - F is zero, and it is not. Any ideas? Much appreciated.
**** EDIT after comments
Based on the answers I got, I am wondering whether the code below makes any sense:
for k = 1:size(F,3)
    Y(:,k) = squeeze(F(:,:,k) * S(:,k));
end
Am I able to recover F if I have Y and S?
When you create Y, you are continuously replacing its values: for any pair of i and k you overwrite Y(i,k) j times!
The two loops are not equivalent, because each F_est(i,j,k) is computed only once, but each Y(i,k) is written j times, and only the last j survives.
I don't know what you are trying to do, but a multiplication of a 3-D array by a 2-D matrix is not defined, and the result is not a 2-D matrix.
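If what the edit is after is the per-slice product Y(:,k) = F(:,:,k) * S(:,k), a loop-free sketch of that sum (using implicit expansion, available since R2016b) could look like this:

F = randn(2, 4, 600);
S = randn(4, 600);
% Y(i,k) = sum over j of F(i,j,k) * S(j,k), i.e. F(:,:,k) * S(:,k) for every k
Y = squeeze(sum(F .* reshape(S, 1, 4, 600), 2));   % 2 x 600

Note that each Y(i,k) collapses the four values F(i,:,k) into a single number, so F cannot be uniquely recovered from Y and S alone.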
I'm writing a piece of code for submission through an online grader, as shown below. B is a given array filled with integers 1 through K, and I want to extract the corresponding rows of matrix X via logical indexing and perform some operations on those elements, to be put into a return array:
for i = 1:K
    A = X(B == i, :);
    returnArr(i, :) = sum(A) / length(A);
end
This did not pass the grader at all, so I changed my approach and instead indexed array X indirectly by first using the find function, as below:
for i = 1:K
    C = find(B == i);
    returnArr(i, :) = sum(X(C, :)) / length(C);
end
To my surprise, this code passed the grader without any issues. I know there are a plethora of variations between graders, and one might handle a particular function differently than another, but from a MATLAB functionality/coding perspective, what am I missing in terms of discrepancies between the two approaches? Thanks!
I think the problem is that:
length(C) == sum(B == i)
while
length(A) == max([sum(B == i) , size(X , 2)])
In other words, to obtain the same result as the second example with the first one, you should modify it like this:
A = X(B == i, :);
returnArr(i, :) = sum(A) / size(A, 1);
The function length returns the length of the largest array dimension, so length(A) only equals the number of selected rows when that number happens to be at least size(X, 2).
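A tiny illustration with made-up sizes (the numbers here are only for demonstration):

X = rand(10, 5);
B = [1; 1; 2; 3; 1; 2; 3; 1; 2; 3];
A = X(B == 1, :);   % 4 x 5: four rows of B equal 1
length(A)           % 5 -> max(size(A)), i.e. the number of columns
size(A, 1)          % 4 -> the number of selected rows, which is what the mean needs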
I'm writing a plugin for R and I want to allocate a 3-dimensional R array to return. How can I do this? In Rinternals.h I see an allocMatrix and an alloc3DArray. Do I use one of those?
If it is too annoying, I can accept a matrix from the user, but I need to know what the internal representation is so that I can fill it in.
Thank you.
Two problems seem to be at issue. One is validating input from a user and the other is allocation. I would be surprised if it were very much faster to use the .Call interface or an Rcpp strategy than to just allocate with:
obj <- array(NA, dim = c(x, y, z))  # where the x, y and z values would be user input
If you look at the code for array you see this as the likely workhorse function:
if (is.atomic(data) && !is.object(data))
    return(.Internal(array(data, dim, dimnames)))
It's worth understanding that arrays in R are really just vectors with a dimension attribute set:
> x <- array(0, c(2, 2, 2))
> .Internal(inspect(x))
#7f859baf5ee8 14 REALSXP g0c4 [NAM(2),ATT] (len=8, tl=0) 0,0,0,0,0,...
ATTRIB:
#7f85a1d593c0 02 LISTSXP g0c0 []
TAG: #7f859c8043f8 01 SYMSXP g1c0 [MARK,LCK,gp=0x4000] "dim" (has value)
#7f85a4040bc0 13 INTSXP g0c2 [NAM(2)] (len=3, tl=0) 2,2,2
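The same thing can be done directly at the R level, which makes the point clear:

y <- numeric(8)        # a plain double vector of length 2*2*2
dim(y) <- c(2, 2, 2)   # setting the dim attribute turns it into an array
identical(y, array(0, c(2, 2, 2)))   # TRUE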
So if you want to make a matrix, or array, 'by hand', it's as simple as allocating a vector of the correct length and then setting the dimension attribute. E.g.:
SEXP myArray = PROTECT(allocVector(REALSXP, m * n * k));
SEXP myDims  = PROTECT(allocVector(INTSXP, 3));
INTEGER(myDims)[0] = m; INTEGER(myDims)[1] = n; INTEGER(myDims)[2] = k;
setAttrib(myArray, R_DimSymbol, myDims);
/* ... fill REAL(myArray) in column-major order, then UNPROTECT(2) before returning */
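Alternatively, the alloc3DArray helper mentioned in the question should do the same allocation in one call (a sketch; check the signature in your Rinternals.h):

SEXP myArray = PROTECT(alloc3DArray(REALSXP, m, n, k));
/* fill REAL(myArray)[i + m*j + m*n*l] (column-major), then: */
UNPROTECT(1);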
This is another step in my battle with multi-dimensional arrays in R; my previous question is here :)
I have a big R array with the following dimensions:
> data = array(..., dim = c(x, y, N, value))
I'd like to perform a sort of bootstrap comparing the mean (see here for a discussion about it) obtained with:
> vmean = apply(data, c(1,2,3), mean)
with the mean obtained by sampling the N values randomly with replacement. To explain better: if data[1,1,,1] equals [v1 v2 v3 ... vN], I'd like to replace it with something like [v_k1 v_k2 v_k3 ... v_kN], with the k values sampled with sample(N, N, replace = T).
Of course I want to AVOID a for loop. I've read this, but I don't know how to index this array efficiently without looping over x and y.
Any ideas?
UPDATE: the important thing here is that I want a different sample for each sample in the fourth (value) dimension; otherwise it would be simple to do something like:
> dataSample = data[,,sample(N, N, replace = T), ]
Also, there's the compiler package, which speeds up for loops by using a just-in-time compiler.
Adding these lines at the top of your code enables the compiler for all code:
require("compiler")
compilePKGS(enable=T)
enableJIT(3)
setCompilerOptions(suppressAll=T)
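For example, assuming the loop you end up with resamples the third dimension once per value slice (the dimension layout and loop body below are assumptions based on the question), you could also byte-compile just that function with cmpfun():

library(compiler)

## Hypothetical resampling loop: one independent resample of the N dimension
## for each index of the fourth (value) dimension.
boot_resample <- function(data) {
  N <- dim(data)[3]
  out <- data
  for (v in seq_len(dim(data)[4])) {
    idx <- sample(N, N, replace = TRUE)
    out[, , , v] <- data[, , idx, v]
  }
  out
}

boot_resample_c <- cmpfun(boot_resample)  # compiled version, same call signature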