Basically, I want to sample 9 independent realizations from the uniform distribution U(0,1), 2578 times over, and this works fine using either
replicate(2578,{runif(9,0,1)})
or
F <- matrix(NA, nrow = 2578, ncol = 9)  # preallocate so rows can be assigned
for (i in 1:2578) {
  F[i, ] <- runif(9, 0, 1)
}
Now I want this repeated, let's say, 10 times, i.e. I want to create 10 new 2578x9 samples. I want a multidimensional array, or, to visualize it better, a rectangular parallelepiped with length 9, height 2578, and whatever width (10, 1000, 100000, ...). How can I achieve this?
I think your simulated data could benefit from being structured directly into an array: that would make them much easier to handle:
dims <- c(2578, 9, 100)
tmp <- runif(prod(dims))
A <- array(tmp, dims)
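To make that concrete, here is a small sketch (reusing the A and dims built above): each slice along the third dimension of A is one complete 2578x9 sample, so individual replications are easy to pull back out, and per-replication summaries become a single apply() call.
sample_3 <- A[, , 3]   # the third replication: a 2578 x 9 matrix of U(0,1) draws
dim(sample_3)          # 2578 9
apply(A, 3, mean)      # grand mean of each of the 100 samples; all values near 0.5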
New to Julia, so this is probably very easy.
I have an n-by-m array and a vector of length n and want to repeat each row of the array the number of times in the corresponding element of the vector. For example:
mat = rand(3,6)
v = vec([2 3 1])
The result should be a 6-by-6 array. I tried the repeat function but
repeat(mat, inner = v)
yields a 6×18×1 Array{Float64,3} instead, so it treats v as the number of repetitions along each dimension rather than per-row repetition counts. In MATLAB I would use repelem(mat, v, 1) and I hope Julia offers something similar. My actual matrix is a lot bigger and I will have to call the function many times, so this operation needs to be as fast as possible.
Adding something like this to Julia Base has been discussed, but AFAIK it is not implemented yet. You can achieve what you want using the inverse_rle function from StatsBase.jl:
julia> row_idx = inverse_rle(axes(v, 1), v)
6-element Array{Int64,1}:
1
1
2
2
2
3
and now you can write:
mat[row_idx, :]
or
@view mat[row_idx, :]
(the second option creates a view, which might be relevant in your use case since you say that your mat is large and you need to do such indexing many times; which option is faster will depend on your exact use case).
I have an array of matrices.
dims <- c(10000,5,5)
mat_array <- array(rnorm(prod(dims)), dims)
I would like to perform a matrix-based operation (e.g. inversion via the solve function) on each matrix, but preserve the full structure of the array.
So far, I have come up with 3 options:
Option 1: A loop, which does exactly what I want, but is clunky and inefficient.
mat_inv <- array(NA, dims)
for(i in 1:dims[1]) mat_inv[i,,] <- solve(mat_array[i,,])
Option 2: The apply function, which is faster and cleaner, BUT squishes each matrix down to a vector.
mat_inv <- apply(mat_array, 1, solve)
dim(mat_inv)
[1] 25 10000
I know I can set the output dimensions to match those of the input, but I'm wary of doing this and messing up the indexing, especially if I had to apply over non-adjacent dimensions (e.g. if I wanted to invert across dimension 2).
Option 3: The aaply function from the plyr package, which does exactly what I want, but is MUCH slower (4-5x) than the others.
mat_inv <- plyr::aaply(mat_array, 1, solve)
Are there any options that combine the speed of base::apply with the versatility of plyr::aaply?
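For what it's worth, here is a minimal sketch of the reshape-the-output idea mentioned under Option 2 (using the dims and mat_array defined above), with a spot check that the indexing survives the round trip; it is a sketch of the reshaping step only, not a benchmarked answer to the speed question.
tmp <- apply(mat_array, 1, solve)                           # 25 x 10000, each column is one flattened inverse
mat_inv <- aperm(array(tmp, dims[c(2, 3, 1)]), c(3, 1, 2))  # back to 10000 x 5 x 5
all.equal(mat_inv[1, , ], solve(mat_array[1, , ]))          # TRUE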
Consider that I have a vector/array that looks as follows:
each part is a sub-array of some fixed and known size (one that can only be accessed through indexing, i.e. it's not a tensor nor a higher-order array). So, for example:
x1 = x(1:d);
where d is the size of each sub-array. The size of each sub-array is the same, but it might vary depending on the current x we are considering. However, we do know n (the number of sub-arrays) and d (the size of all of the sub-arrays).
I know there are usually strange but useful tricks in MATLAB to do things in a more optimized way. Is there a way to extract those parts, maybe using indexing, and make a matrix where the rows (or columns) are those parts? As in:
X = [x_1, ..., x_n]
The caveat is that n is a variable and we don't know a priori what it is. We can find out what n is, but it's not fixed.
Just to add some more context: I want to minimize the number of for loops I actually write in MATLAB, in the hope that it's faster.
First I would consider simple reshaping, to keep the output as a plain double matrix:
x = (1:15).'
d = 3;
out = reshape(x,d,[])
and further on just use indexing to access the columns: out(:,idx);
There is no need to know n in advance, as reshape calculates it based on d and the number of elements in x.
out =
1 4 7 10 13
2 5 8 11 14
3 6 9 12 15
If you'd insist on something like cell arrays, use accumarray with ceil to get the subs:
out = accumarray( ceil( (1:numel(x))/d ).', x(:), [], @(x) {x})
I'm working on a fishery stock assessment model and want to speed it up by removing a loop (actually two loops of the same form).
I have an array, A, dim(A)=[L,L,Y], and a matrix, M, dim(M)=[L,Y].
These are used to make a matrix, mat, dim(mat)=[L,Y], by calculating matrix products. My loop looks like:
for (i in 1:Y) {
  mat[, i] <- (A[, , i] %*% M[, i])[, 1]
}
Can anyone help me out? I really need a speed gain.
Also, (don't know if it'll make a difference but) each A[,,i] matrix is lower triangular.
I'm pretty sure this will give you the results you want. Since there is no reproducible example, I can't be absolutely sure. Had to trace some of the linear algebra logic to see what you are trying to accomplish.
library(plyr)  # we need this to split the array into a list of 9 matrices
B <- lapply(alply(A, 3), function(x) x %*% M)  # perform 9 linear algebra multiplications
sapply(1:9, function(i) B[[i]][, i])  # extract the 9 columns you actually want
I used the following test data:
A = array(rnorm(225), dim = c(5,5,9))
M = matrix(rnorm(45), nrow = 5, ncol = 9)
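As a quick sanity check (a sketch only; the seed is added here purely so the check is reproducible), the lapply/sapply route above can be compared against the original loop on the same test data:
library(plyr)
set.seed(1)
A <- array(rnorm(225), dim = c(5, 5, 9))
M <- matrix(rnorm(45), nrow = 5, ncol = 9)
mat_loop <- matrix(NA, 5, 9)                                # the loop from the question, with Y = 9
for (i in 1:9) mat_loop[, i] <- (A[, , i] %*% M[, i])[, 1]
B <- lapply(alply(A, 3), function(x) x %*% M)               # the approach from the answer
mat_ans <- sapply(1:9, function(i) B[[i]][, i])
all.equal(mat_loop, mat_ans)                                # TRUE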
Let's say that I have an array, foo, in R, whose dimensions are c(150, 40, 30).
Now, if I:
bar <- apply(foo, 3, rbind)
dim(bar) is now c(6000, 30).
What is the most elegant and generic way to invert this procedure and go from bar to foo so that they are identical?
The trouble isn't getting the dimensions right, but getting the data back in the same order, within its respective original dimension.
Thank you for taking the time, I look forward to your responses.
P.S. For those thinking that this is part of a larger problem, it is, and no, I cannot use plyr, quite yet.
I think you can just call array again and specify the original dimensions:
m <- array(1:210,dim = c(5,6,7))
m1 <- apply(m, 3, rbind)
m2 <- array(as.vector(m1),dim = c(5,6,7))
all.equal(m,m2)
[1] TRUE
I'm wondering about your initial transformation. You call rbind from apply, but that won't do anything - you could just as well have called identity!
foo <- array(seq(150*40*30), c(150, 40, 30))
bar <- apply(foo, 3, rbind)
bar2 <- apply(foo, 3, identity)
identical(bar, bar2) # TRUE
So, what is it you really wanted to accomplish? I was under the assumption that you had a couple (30) of matrix slices and wanted to stack them and then unstack them again. If so, the code would be more involved than @joran suggested. You need some calls to aperm (as @Patrick Burns suggested):
# Make a sample 3 dimensional array (two 4x3 matrix slices):
m <- array(1:24, 4:2)
# Stack the matrix slices on top of each other
m2 <- matrix(aperm(m, c(1,3,2)), ncol=ncol(m))
# Reverse the process
m3 <- aperm(array(m2, c(nrow(m),dim(m)[[3]],ncol(m))), c(1,3,2))
identical(m3,m) # TRUE
In any case, aperm is really powerful (and somewhat confusing). Well worth learning...
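As a tiny illustration (an aside, not specific to the question above): for a plain matrix, aperm reduces to the familiar transpose, which is a handy way to remember what the permutation argument does.
x <- matrix(1:6, nrow = 2)
identical(t(x), aperm(x, c(2, 1)))   # TRUE: aperm generalizes t() to higher-dimensional arrays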