Stacking copies of an array / a torch tensor efficiently?

I'm a Python/PyTorch user. First, in NumPy: say I have an array M of size LxL, and I want the following
array: A = (M, ..., M) of size, say, NxLxL. Is there a more elegant/memory-efficient way of doing it than:
A = np.array([M]*N) ?
Same question for a torch tensor!
Because right now, if M is a Variable(torch.tensor), I have to do:
A = torch.autograd.Variable(torch.tensor(np.array([M]*N)))
which is ugly!

Note that you need to decide whether you would like to allocate new memory for your expanded array or whether you simply require a new view of the existing memory of the original array.
In PyTorch, this distinction gives rise to the two methods expand() and repeat(). The former only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. In contrast, the latter copies the original data and allocates new memory.
In PyTorch, you can use expand() and repeat() as follows for your purposes:
import torch
L = 10
N = 20
A = torch.randn(L,L)
A.expand(N, L, L) # specifies new size
A.repeat(N,1,1) # specifies number of copies
In NumPy, there are a multitude of ways to achieve what you did above in a more elegant and efficient manner. For your particular purpose, I would recommend np.tile() over np.repeat(), since np.repeat() is designed to operate on the individual elements of an array, while np.tile() is designed to operate on the entire array. Hence,
import numpy as np
L = 10
N = 20
A = np.random.rand(L,L)
np.tile(A,(N, 1, 1))

If you don't mind creating new memory:
In numpy, you can use np.repeat() or np.tile(). With efficiency in mind, you should choose the one which organises the memory for your purposes, rather than re-arranging after the fact:
np.repeat([1, 2], 2) == [1, 1, 2, 2]
np.tile([1, 2], 2) == [1, 2, 1, 2]
In pytorch, you can use tensor.repeat(). Note: This matches np.tile, not np.repeat.
If you don't want to create new memory:
In numpy, you can use np.broadcast_to(). This creates a readonly view of the memory.
In pytorch, you can use tensor.expand(). This creates an editable view of the memory, so operations like += will have weird effects.
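To make the copy-vs-view distinction concrete, here is a small NumPy-only sketch (torch's expand() behaves analogously to broadcast_to, and tensor.repeat() to tile); the shapes chosen are arbitrary:

```python
import numpy as np

L, N = 3, 4
M = np.arange(L * L).reshape(L, L)

view = np.broadcast_to(M, (N, L, L))  # read-only view: stride 0 on the new axis
copy = np.tile(M, (N, 1, 1))          # independent copy of the data

M[0, 0] = 99                          # mutate the original in place
print(view[2, 0, 0])    # 99 -> every "layer" of the view aliases M
print(copy[2, 0, 0])    # 0  -> the tiled copy is unaffected
print(view.strides[0])  # 0  -> no extra memory along the stacked axis
```

The zero stride is exactly why the view costs no extra memory: all N "copies" point at the same L×L block.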

In numpy, repeat is faster:
np.repeat(M[None, ...], N, 0)
This expands the dimensions of M, then repeats along that new dimension.

Related

Pre-allocation of array of array

In julia, one can pre-allocate an array of a given type and dims with
A = Array{<type>}(undef,<dims>)
example for a 10x10 matrix of floats
A = Array{Float64,2}(undef,10,10)
However, for array of array pre-allocation, it does not seem to be possible to provide a pre-allocation for the underlying arrays.
For instance, if I want to initialize a vector of n matrices of complex floats I can only figure this syntax,
A = Vector{Array{ComplexF64,2}}(undef, n)
but how could I preallocate the size of each Array in the vector, except with a loop afterwards? I tried e.g.
A = Vector{Array{ComplexF64,2}(undef,10,10)}(undef, n)
which obviously does not work.
Remember that "allocate" means "give me a contiguous chunk of memory, of size exactly blah". For an array of arrays, which is really a contiguous chunk of pointers to other contiguous chunks, this doesn't really make sense in general as a combined operation -- the latter chunks might just totally differ.
However, by stating your problem, you make clear that you actually have more structural information: you know that you have n 10x10 arrays. This really is a 3D array, conceptually:
A = Array{Float64}(undef, n, 10, 10)
At that point, you can just take slices, or better: views along the first axis, if you need an array of them:
[@view A[i, :, :] for i in axes(A, 1)]
This is a length n array of AbstractArrays that in all respects behave like the individual 10x10 arrays you wanted.
In cases like the one you describe, you need to use a comprehension:
a = [Matrix{ComplexF64}(undef, 2, 3) for _ in 1:4]
This allocates a Vector of Arrays. In Julia's comprehensions you can iterate over more dimensions, so higher dimensionality is also available.

Seg faulting with 4D arrays & initializing dynamic arrays

I ran into a bit of a problem with a tetris program I'm currently writing in C.
I am trying to use a 4D multi-dimensional array e.g.
uint8_t shape[7][4][4][4]
but I keep getting seg faults when I try that, I've read around and it seems to be that I'm using up all the stack memory with this kind of array (all I'm doing is filling the array with 0s and 1s to depict a shape so I'm not inputting a ridiculously high number or something).
Here is a version of it (on pastebin, because as you can imagine it's very ugly and long).
If I make the array smaller it seems to work, but I'm trying to avoid working around it like that, as theoretically each "shape" represents its rotations as well.
https://pastebin.com/57JVMN20
I've read that you should use dynamic arrays so they end up on the heap, but then I run into the issue of how to initialize a dynamic array in such a way as linked above. It seems like it would be a headache, as I would have to go through loops and handle each shape specifically?
I would also be grateful if anybody would let me pick their brain about dynamic arrays, how best to go about them, and whether it's even worth using normal arrays at all.
Even though I have not understood why you use 4D arrays to store shapes for a tetris game, I agree with bolov's comment that such an array should not overflow the stack (7*4*4*4*1 = 448 bytes), so you should probably check the other code you wrote.
Now, to your question on how to manage 4D (N-dimensional) dynamically sized arrays. You can do this in two ways:
The first way consists of creating an array of (N-1)-dimensional arrays. If N = 2 (a table), you end up with a "linearized" version of the table (a normal array) whose size is equal to R * C, where R is the number of rows and C the number of columns. Inductively speaking, you can do the very same thing for N-dimensional arrays without too much effort. This method has some drawbacks though:
You need to know beforehand all the dimensions except one (the "latest"), and all the dimensions are fixed. Back to the N = 2 example: if you use this method on a table of C columns and R rows, you can change the number of rows by allocating C * sizeof(<your_array_type>) more bytes at the end of the preallocated space, but not the number of columns (not without rebuilding the entire linearized array). Moreover, different rows must have the same number of columns C (you cannot have a 2D array that looks like a triangle when drawn on paper, just to be clear).
You need to carefully manage the indices: you cannot simply write my_array[row][column]; instead, you must access the array with my_array[row*C + column]. If N is not 2, this formula gets... interesting
You can use N-1 levels of pointers. That's my favourite solution because it does not have any of the drawbacks of the previous one, although you need to manage pointers to pointers to pointers to ... to pointers to a type (but that's what you do when you write my_array[7][4][4][4]).
Solution 1
Let's say you want to build an N-Dimensional array in C using the first solution.
You know the length of each dimension of the array up to the (N-1)-th (let's call them d_1, d_2, ..., d_(N-1)). We can build this inductively:
We know how to build a dynamic 1-dimensional array
Supposing we know how to build an (N-1)-dimensional array, we show that we can build an N-dimensional array by putting each (N-1)-dimensional array we have available into a 1-dimensional array, thus increasing the available dimensions by 1.
Let's also assume that the data type that the arrays must hold is called T.
Let's suppose we want to create an array with R (N-1)-dimensional arrays inside it. For that we need to know the size of each (N-1)-dimensional array, so we need to calculate it.
For N = 1 the size is just sizeof(T)
For N = 2 the size is d_1 * sizeof(T)
For N = 3 the size is d_2 * d_1 * sizeof(T)
You can easily prove by induction that the number of bytes required to store R (N-1)-dimensional arrays is R * (d_1 * d_2 * ... * d_(N-1) * sizeof(T)). And that's done.
Now, we need to access a random element inside this massive N-dimensional array. Let's say we want to access the item with indices (i_1, i_2, ..., i_N). For this we are going to repeat the inductive reasoning:
For N = 1, the index of the i_1 element is just my_array[i_1]
For N = 2, the index of the (i_1, i_2) element can be calculated by noting that a new row begins every d_1 elements, so the element is my_array[i_1 * d_1 + i_2].
For N = 3, we can repeat the same process and end up having the element my_array[d_2 * ((i_1 * d_1) + i_2) + i_3]
And so on.
Solution 2
The second solution wastes a bit more memory, but it's more straightforward, both to understand and to implement.
Let's just stick to the N = 2 case so that we can reason more easily. Imagine you have a table; split it row by row and place each row in its own memory slot. Now, a row is a 1-dimensional array, and to make a 2-dimensional array we only need an ordered array with references to each row, as the following drawing shows (the last row is the R-th row):
+------+
| R1 -------> [1,2,3,4]
|------|
| R2 -------> [2,4,6,8]
|------|
| R3 -------> [3,6,9,12]
|------|
| .... |
|------|
| RR -------> [R, 2*R, 3*R, 4*R]
+------+
In order to do that, you need to first allocate the references array (R elements long) and then, iterate through this array and assign to each entry the pointer to a newly allocated memory area of size d_1.
We can easily extend this to N dimensions. Simply build an R-element array of pointers and, for each entry in this array, allocate a new 1-dimensional array of size d_(N-1); do the same for each newly created array until you get to the arrays of size d_1.
Notice how you can easily access each element by simply using the expression my_array[i_1][i_2][i_3]...[i_N].
For example, let's suppose N = 3, that T is int, and that d_1, d_2 and d_3 are known (and initialized) in the following code:
size_t d1 = 5, d2 = 7, d3 = 3;
int ***my_array = malloc(d1 * sizeof(int **));
for (size_t x = 0; x < d1; x++) {
    my_array[x] = malloc(d2 * sizeof(int *));
    for (size_t y = 0; y < d2; y++) {
        my_array[x][y] = malloc(d3 * sizeof(int));
    }
}
// Accessing a random element
size_t x1 = 2, y1 = 6, z1 = 1;
my_array[x1][y1][z1] = 32;
I hope this helps. Please feel free to comment if you have questions.

Creating an Array based on range from existing array

Say I have a std::array
std::array<int,8> foo = {1,2,3,4,5,6,7,8};
Now, is it possible to create a new array from the existing array using a range, say from index 2 till 5? So my new array would have the items {3,4,5,6}.
I am aware that I could accomplish this using a manual for-loop copy, but I wanted to know if there is a faster way of doing it.
If you are expecting some easy syntax (like Python, Matlab or Fortran), no.
As @Sphinx said, you can use std::copy. (Note that to get the four items {3,4,5,6}, the destination must hold four elements and the half-open source range is [2, 6).)
std::array<int,8> foo = {1,2,3,4,5,6,7,8};
std::array<int,4> foo2;
std::copy(&foo[2], &foo[6], foo2.begin());
// or std::copy(foo.begin() + 2, foo.begin() + 6, foo2.begin());
but take into account that std::array sizes are compile time constants.
So you may need std::vector<int> if you want make the range size variable.

Why using repmat() for expanding array?

I want to load a csv file into MATLAB using textread(); since the data in it has more than 2 million records, I should preallocate the array for the data.
Suppose I cannot know the exact length of the arrays. The docs of MATLAB v6.5 recommend using repmat() for my expanding array. The original words in the doc are below:
"In cases where you cannot preallocate, see if you can increase the
size of your array using the repmat function. repmat tries to get you
a contiguous block of memory for your expanding array".
I really don't know how to use repmat for expanding.
Does it mean estimating a rough length for repmat() to preallocate, and then removing the empty elements?
If so, how is that different from preallocating using zeros() or cell()?
The documentation also says:
When you preallocate a block of memory to hold a matrix of some type
other than double, it is more memory efficient and sometimes faster to
use the repmat function for this.
The statement below uses zeros to preallocate a 100-by-100 matrix of
int8. It does this by first creating a full matrix of doubles, and
then converting the matrix to int8. This costs time and uses memory
unnecessarily.
A = int8(zeros(100));
Using repmat, you create only one double, thus reducing your memory
needs.
A = repmat(int8(0), 100, 100);
Therefore, the advantage is if you want a datatype other than doubles, you can use repmat to replicate a non-double datatype.
Also see: http://undocumentedmatlab.com/blog/preallocation-performance, which suggests:
data1(1000,3000) = 0
instead of:
data1 = zeros(1000,3000)
to avoid initialisation of other elements.
As for dynamic resizing, repmat can be used to concisely double the size of your array (a common method which results in amortized O(1) appends per element):
data = [0];
i = 1;
while another element
    ...
    if i > numel(data)
        data = repmat(data, 1, 2); % doubles the size of data
    end
    data(i) = element;
    i = i + 1;
end
And yes, after you have gathered all your elements, you can resize the array to remove empty elements at the end.
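The doubling trick is language-agnostic; here is a rough Python sketch of the same amortized-O(1) growth (the buffer and helper names are made up for illustration, with a list standing in for the preallocated MATLAB array):

```python
def append_with_doubling(buf, count, element):
    """Append element, doubling buf's capacity when full (like repmat(data,1,2)).

    buf is a list standing in for a preallocated array; count is how many
    slots are in use. Returns the (possibly regrown) buffer and the new count.
    """
    if count >= len(buf):
        buf = buf + [0] * len(buf)  # double the capacity
    buf[count] = element
    return buf, count + 1

buf, n = [0], 0
for x in range(10):
    buf, n = append_with_doubling(buf, n, x)

result = buf[:n]   # trim the unused tail, as the answer suggests
print(result)      # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(len(buf))    # 16 -> capacity grew 1 -> 2 -> 4 -> 8 -> 16
```

Each element is copied O(log n) times in total across all doublings, which averages out to constant work per append.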

How to "invert" an array in linear time functionally rather than procedurally?

Say I have an array of integers A such that A[i] = j, and I want to "invert it"; that is, to create another array of integers B such that B[j] = i.
This is trivial to do procedurally in linear time in any language; here's a Python example:
def invert_procedurally(A):
    B = [None] * (max(A) + 1)
    for i, j in enumerate(A):
        B[j] = i
    return B
However, is there any way to do this functionally (as in functional programming, using map, reduce, or functions like those) in linear time?
The code might look something like this:
def invert_functionally(A):
    # We can't modify variables in FP; we can only return a value
    return map(???, A)  # What goes here?
If this is not possible, what is the best (most efficient) alternative when doing functional programming?
In this context are arrays mutable or immutable? Generally I'd expect the mutable case to be about as straightforward as your Python implementation, perhaps aside from a few wrinkles with types. I'll assume you're more interested in the immutable scenario.
This operation inverts the indices and elements, so it's also important to know something about what constitutes valid array indices and impose those same constraints on the elements. Haskell has a class for index constraints called Ix. Any Ix type is ordered and has a range implementation to make an ordered list of indices ranging from one specified index to another. I think this Haskell implementation does what you want.
import Data.Array.IArray

invertArray :: (Ix x) => Array x x -> Array x x
invertArray arr = listArray (low, high) newElems
  where oldElems = elems arr
        newElems = indices arr
        low      = minimum oldElems
        high     = maximum oldElems
Under the hood listArray uses zipWith and range to associate indices in the specified range to the listed elements. That part ought to be linear time, and so is the one-time operation of extracting elements and indices from an array.
Whenever the sets of the input array's indices and elements differ, some elements will be undefined, which for better or worse blows up faster than Python's None. I believe you could overcome the undefined issue by implementing a new Ix a instance over the Maybe monad, for instance.
Quick side-note: check out the invPerm example in the Haskell 98 Library Report. It does something similar to invertArray, but assumes up front that input array's elements are a permutation of its indices.
A solution needing map and three operations:
toTuples views the array as a list of tuples (i, e), where i is the index and e the element at that index in the array.
fromTuples creates and loads an array from a list of tuples.
swap takes a tuple (a, b) and returns (b, a).
Hence the solution would be (in Haskellish notation):
invert = fromTuples . map swap . toTuples
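The same fromTuples . map swap . toTuples pipeline can be imitated in Python with a dict standing in for fromTuples (the helper name is hypothetical; this assumes A's elements are exactly the indices 0..len(A)-1, so the dict build and read-out are both linear):

```python
def invert_functionally(A):
    # toTuples   -> enumerate(A) yields (i, j)
    # swap       -> the lambda turns each (i, j) into (j, i)
    # fromTuples -> dict() builds the association; read it out in index order
    d = dict(map(lambda p: (p[1], p[0]), enumerate(A)))
    return [d[j] for j in range(len(A))]

print(invert_functionally([2, 0, 1]))  # [1, 2, 0]
```

No variable is mutated after creation, and inverting twice gives back the original permutation.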
