I am trying to efficiently index a 2D array in Python and have the problem that it is really slow.
This is what I tried (simplified example):
xSize = veryBigNumber
ySize = veryBigNumber
a = np.ones((xSize,ySize))
N = veryBigNumber
const = 1
for t in range(N):
    for i in range(xSize):
        for j in range(ySize):
            a[i,j] *= f(i,j)*const  # f(i,j) is an arbitrary function of i and j.
Now I would like to substitute the nested loop by something more efficient. How do I do this?
Your 2D array could be produced using the following addition:
np.arange(200)[:,np.newaxis] + np.arange(200)
This type of vectorised operation is likely to be very fast:
>>> %timeit np.arange(200)[:,np.newaxis] + np.arange(200)
1000 loops, best of 3: 178 µs per loop
This method is not limited to addition. We can use the two arrays in the above operation as the arguments of any universal function (commonly abbreviated to ufunc).
For example:
>>> np.multiply(np.arange(5)[:,np.newaxis], np.arange(5))
array([[ 0,  0,  0,  0,  0],
       [ 0,  1,  2,  3,  4],
       [ 0,  2,  4,  6,  8],
       [ 0,  3,  6,  9, 12],
       [ 0,  4,  8, 12, 16]])
NumPy has built-in ufuncs for all the basic arithmetic operations and some more interesting ones too. If you need a more exotic function, NumPy allows you to make your own ufunc.
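For instance, here is a minimal sketch of wrapping an arbitrary scalar function as a ufunc (the function f below is just a made-up placeholder; a frompyfunc ufunc still calls Python once per element, so it mainly buys you broadcasting, not raw speed):
import numpy as np

def f(i, j):                       # placeholder scalar function of the two indices
    return i**2 + j

f_ufunc = np.frompyfunc(f, 2, 1)   # 2 inputs, 1 output
result = f_ufunc(np.arange(5)[:, np.newaxis], np.arange(5)).astype(float)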
Edit: To quickly explain the broadcasting happening in this method, you can think of it like this...
np.arange(5) produces a 1D array which looks like this:
array([0, 1, 2, 3, 4])
The code np.arange(5)[:,np.newaxis] adds a second dimension (columns) to the range, producing this 2D array:
array([[0],
       [1],
       [2],
       [3],
       [4]])
To create the final 5x5 array using np.multiply (although we could use any ufunc or binary arithmetic operation), NumPy takes the 0 in the second array and multiplies it with each element in the first array, making a row like this:
[ 0, 0, 0, 0, 0]
It then takes the second element in the second array, 1, and multiplies it with the first array, producing this row:
[ 0, 1, 2, 3, 4]
This continues until we have the final 5x5 matrix.
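Applied to the loop in the question, a minimal sketch could look like this (assuming f can be evaluated element-wise on whole index arrays, e.g. because it is built from ufuncs; const and N are as in the question):
i = np.arange(xSize)[:, np.newaxis]   # column vector of row indices
j = np.arange(ySize)                  # row vector of column indices
a = (f(i, j) * const) ** N            # multiplying N times is the same as raising to the power N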
You could use the indices routine:
b=np.indices(a.shape)
a=b[0]+b[1]
Timings:
%%timeit
...: b=np.indices(a.shape)
...: c=b[0]+b[1]
1000 loops, best of 3: 370 µs per loop
%%timeit
for i in range(200):
    for j in range(200):
        a[i,j] = i + j
100 loops, best of 3: 10.4 ms per loop
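Coming back to the original problem, the same index arrays from np.indices can feed the update in the question directly, e.g. (again assuming f accepts arrays):
i_idx, j_idx = np.indices(a.shape)
a *= (f(i_idx, j_idx) * const) ** N   # replaces all three nested loops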
Since your output matrix a is the element-wise N-th power of a matrix F with elements F_ij = f(i,j) * const, your code simplifies to
F = np.empty((xSize, ySize))
for i in range(xSize):
    for j in range(ySize):
        F[i,j] = f(i,j) * const
a = F ** N
For even more speed you can replace the creation of the F matrix with something more efficient, provided the function f(i,j) is vectorized:
xmap, ymap = np.meshgrid(np.arange(xSize), np.arange(ySize), indexing='ij')  # indexing='ij' so that xmap[i, j] == i and ymap[i, j] == j
F = f(xmap, ymap) * const
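For example, a small self-contained sketch with a made-up vectorised f, just to show that the shapes line up:
import numpy as np

xSize, ySize, const = 4, 3, 2

def f(i, j):                          # example element-wise function
    return i + 2*j

xmap, ymap = np.meshgrid(np.arange(xSize), np.arange(ySize), indexing='ij')
F = f(xmap, ymap) * const             # shape (xSize, ySize), with F[i, j] == f(i, j) * const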
For a numpy 1d array such as:
In [1]: A = np.array([2,5,1,3,9,0,7,4,1,2,0,11])
In [2]: A
Out[2]: array([2,5,1,3,9,0,7,4,1,2,0,11])
I need to split the array by using the values as a sub-array length.
For the example array:
The value at index 0 is 2, so I need the first split to occur at index 0 + 2, which would result in ([2,5,1]).
Skip to index 3 (since indices 0-2 were gobbled up in step 1).
The value at index 3 = 3, so the second split would occur at index 3 + 3, and result in ([3,9,0,7]).
Skip to index 7
The value at index 7 = 4, so the third and final split would occur at index 7 + 4, and result in ([4,1,2,0,11])
I'm using this simple array as an example, because I think it will help in my actual use case, which is reading data from binary files (either as bytes or unsigned shorts). I'm guessing that numpy will be the fastest way to do it, but I could also use struct/bytearray/lists or whatever would be best.
I hope this makes sense. I had a hard time trying to figure out how best to word the question.
Here is an approach using standard python lists and a while loop:
def custom_partition(arr):
    partitions = []
    i = 0
    while i < len(arr):
        partition_size = arr[i]
        next_i = i + partition_size + 1
        partitions.append(arr[i:next_i])
        i = next_i
    return partitions
a = [2, 5, 1, 3, 9, 0, 7, 4, 1, 2, 0, 11]
b = custom_partition(a)
print(b)
Output:
[[2, 5, 1], [3, 9, 0, 7], [4, 1, 2, 0, 11]]
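The same function also works unchanged on a NumPy array; in that case each partition is a slice (view) of the original array rather than a copy:
import numpy as np

a = np.array([2, 5, 1, 3, 9, 0, 7, 4, 1, 2, 0, 11])
print(custom_partition(a))
# [array([2, 5, 1]), array([3, 9, 0, 7]), array([ 4,  1,  2,  0, 11])]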
I have the task of selecting p% of elements within a given numpy array. For example,
# Initialize 5 x 3 array-
x = np.random.randint(low = -10, high = 10, size = (5, 3))
x
'''
array([[-4, -8,  3],
       [-9, -1,  5],
       [ 9,  1,  1],
       [-1, -1, -5],
       [-1, -4, -1]])
'''
Now, I want to select, say, p = 30% of the numbers in x; 30% of the 15 numbers in x is 5 (rounded up).
Is there a way to select these 30% of numbers in x, where p can change and the numpy array x may be 3-D or of even higher dimensionality?
I am using Python 3.7 and numpy 1.18.1
Thanks
You can use np.random.choice to sample without replacement from a 1d numpy array:
p = 0.3
np.random.choice(x.flatten(), int(x.size * p), replace=False)
For large arrays, the performance of sampling without replacement can be pretty bad, but there are some workarounds.
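One possible workaround (a sketch; how much it helps depends on your NumPy version and array sizes) is to sample flat indices rather than the values themselves, using the newer Generator API (NumPy >= 1.17); this avoids materialising a flattened copy of the data just for sampling:
import numpy as np

x = np.random.randint(low=-10, high=10, size=(5, 3))   # stand-in for the array from the question
rng = np.random.default_rng(0)
p = 0.3
k = int(x.size * p)                          # number of elements to draw
flat_idx = rng.choice(x.size, size=k, replace=False)    # sample indices, not values
sample = x.ravel()[flat_idx]                 # the sampled values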
You can randomly choose 0/1 with np.random.choice, then use np.nonzero and boolean indexing (note that each element is kept independently with probability p, so this selects only approximately p% of the elements):
np.random.seed(1)
x[np.nonzero(np.random.choice([1, 0], size=x.shape, p=[0.3,0.7]))]
Output:
array([ 3, -1, 5, 9, -1, -1])
I found a way of selecting p% of numpy elements:
p = 20
# To select p% of elements-
x_abs[x_abs < np.percentile(x_abs, p)]
# To select p% of elements and set them to a value (in this case, zero)-
x_abs[x_abs < np.percentile(x_abs, p)] = 0
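For reference, a self-contained version of this idea (x_abs is not defined in the question, so I'm assuming x_abs = np.abs(x)). Note that thresholding on a percentile selects roughly, not exactly, p% of the entries, but it works for arrays of any dimensionality:
import numpy as np

x = np.random.randint(low=-10, high=10, size=(5, 3))
x_abs = np.abs(x)              # assumed definition of x_abs
p = 20
# zero out (roughly) the p% of entries with the smallest absolute value
x_abs[x_abs < np.percentile(x_abs, p)] = 0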
I have a 1D array of sorted non-unique numbers. The number of times they repeat is random.
It is associated with an array of weights of the same size. For a given run of identical elements, the associated run of weights may or may not have repeated elements as well, and in the whole array of weights there may or may not be repeated elements. E.g.:
pos = np.array([3, 3, 7, 7, 9, 9, 9, 10, 10])
weights = np.array([2, 10, 20, 8, 5, 7, 15, 7, 2])
I need to extract an array of unique elements of pos, but where the unique element is the one with the greatest weight.
The working solution I came up with involves looping:
pos = np.array([3, 3, 7, 7, 9, 9, 9, 10, 10])
weights = np.array([2, 10, 20, 8, 5, 7, 15, 7, 2])
# Get the number of occurrences of the elements in pos, but throw away the unique array; it's not the one I want.
_, ucounts = np.unique(pos, return_counts=True)
# Initialize the output array.
unique_pos_idx = np.zeros([ucounts.size], dtype=np.uint32)
last = 0
for i in range(ucounts.size):
    maxpos = np.argmax( weights[last:last+ucounts[i]] )
    unique_pos_idx[i] = last + maxpos
    last += ucounts[i]
# Result is:
# unique_pos_idx = [1 2 6 7]
but I’m not using much of the Python language or Numpy (apart from the use of numpy arrays), so I wonder if there is a more Pythonesque and/or more efficient solution than even a Cython version of the above?
Thanks
Here's one vectorized approach -
sidx = np.lexsort([weights,pos])
out = sidx[np.r_[np.flatnonzero(pos[1:] != pos[:-1]), -1]]
Possible improvement(s) on performance -
1] A faster way to get the sorted indices sidx with scaling (a quick sanity check of this trick follows after 2]) -
sidx = (pos*(weights.max()+1) + weights).argsort()
2] The indexing at the end could be made faster with boolean-indexing, especially when dealing with many such intervals/groupings -
out = sidx[np.concatenate((pos[1:] != pos[:-1], [True]))]
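As promised above, a quick check of the scaling trick in 1]: it packs the two sort keys (pos as the primary key, weights as the secondary key) into a single integer key, which assumes the weights are non-negative. Using pos and weights from the question:
key = pos*(weights.max() + 1) + weights
np.array_equal(np.lexsort([weights, pos]), key.argsort())   # True here (all packed keys happen to be distinct)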
Runtime test
All approaches :
def org_app(pos, weights):
    _, ucounts = np.unique(pos, return_counts=True)
    unique_pos_idx = np.zeros([ucounts.size], dtype=np.uint32)
    last = 0
    for i in range(ucounts.size):
        maxpos = np.argmax( weights[last:last+ucounts[i]] )
        unique_pos_idx[i] = last + maxpos
        last += ucounts[i]
    return unique_pos_idx

def vec_app(pos, weights):
    sidx = np.lexsort([weights,pos])
    return sidx[np.r_[np.flatnonzero(pos[1:] != pos[:-1]), -1]]

def vec_app_v2(pos, weights):
    sidx = (pos*(weights.max()+1) + weights).argsort()
    return sidx[np.concatenate((pos[1:] != pos[:-1], [True]))]
Timings and verification -
For the setup, let's use the sample and tile it 10000 times with scaling, so that we get 10000 times as many intervals. Also, let's use unique numbers in weights, so that the argmax indices aren't confused by identical numbers:
In [155]: # Setup input
...: pos = np.array([3, 3, 7, 7, 9, 9, 9, 10, 10,])
...: pos = (pos + 10*np.arange(10000)[:,None]).ravel()
...: weights = np.random.choice(10*len(pos), size=len(pos), replace=0)
...:
...: print np.allclose(org_app(pos, weights), vec_app(pos, weights))
...: print np.allclose(org_app(pos, weights), vec_app_v2(pos, weights))
...:
True
True
In [156]: %timeit org_app(pos, weights)
...: %timeit vec_app(pos, weights)
...: %timeit vec_app_v2(pos, weights)
...:
10 loops, best of 3: 56.4 ms per loop
100 loops, best of 3: 14.8 ms per loop
1000 loops, best of 3: 1.77 ms per loop
In [157]: 56.4/1.77 # Speedup with vectorized one over loopy
Out[157]: 31.864406779661017
Assume that we are given two vectors:
A = (a₁, a₂, ..., aₘ) and B = (b₁, b₂, ..., bₘ)
and we need to do something for all the vectors between these two ones.
For example, for A = (1,1,0) and B = (1,2,2), all the vectors between A and B are: {(1,1,1), (1,1,2), (1,2,0), (1,2,1)}.
An obvious way to generate such vectors is to use m nested for loops, but that is probably not the best one. I would like to know if someone has a better idea.
Here's a method (in MATLAB) that returns a matrix where each row is one of the vectors of the result.
% Data
A = [0, 0, 1, 3, 5, 2]
B = [4, 8, 5, 7, 9, 6]
% Preallocate
b = cell(1,numel(A));
vec = cell(1,numel(A));
% Make a vector of values of each element of the result
for i = 1:numel(A)
vec{i} = A(i):B(i);
end
% Get all combinations using ndgrid
[b{:}] = ndgrid(vec{:});
b=cat(ndims(b{1})+1,b{:});
% Reshape the numel(A)+1 dimensional array into a 2D array
res = reshape(b,numel(b)/length(A),length(A));
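For a Python/NumPy equivalent, here is a rough sketch of the same idea (one row per vector; itertools.product plays the role of ndgrid):
import numpy as np
from itertools import product

A = [0, 0, 1, 3, 5, 2]
B = [4, 8, 5, 7, 9, 6]

# Cartesian product of the per-coordinate ranges A[i]..B[i], one row per vector
res = np.array(list(product(*(range(a, b + 1) for a, b in zip(A, B)))))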
I'm building a decision tree algorithm. The sorting is very expensive in this algorithm because for every split I need to sort each column. So at the beginning - even before tree construction I'm presorting variables - I'm creating a matrix so for each column in the matrix I save its ranking. Then when I want to sort the variable in some split I don't actually sort it but use the presorted ranking array. The problem is that I don't know how to do it in a space efficient manner.
A naive solution of this is below. This is only for 1 variable (v) and 1 split (split_ind).
import numpy as np
v = np.array([60,70,50,10,20,0,90,80,30,40])
sortperm = v.argsort() #1 sortperm = array([5, 3, 4, 8, 9, 2, 0, 1, 7, 6])
rankperm = sortperm.argsort() #2 rankperm = array([6, 7, 5, 1, 2, 0, 9, 8, 3, 4])
split_ind = np.array([3,6,4,8,9]) # this is my split (random)
# split v and sortperm
v_split = v[split_ind] # v_split = array([10, 90, 20, 30, 40])
rankperm_split = rankperm[split_ind] # rankperm_split = array([1, 9, 2, 3, 4])
vsorted_dummy = np.ones(10)*-1 #3 allocate "empty" array[N]
vsorted_dummy[rankperm_split] = v_split
vsorted = vsorted_dummy[vsorted_dummy!=-1] # vsorted = array([ 10., 20., 30., 40., 90.])
Basically I have 2 questions:
Is double sorting necessary to create the ranking array? (#1 and #2)
In line #3 I'm allocating array[N]. This is very inefficient in terms of space, because even if the split size n << N I have to allocate the whole array. The problem here is how to calculate rankperm_split. In the example the original rankperm_split = [1,9,2,3,4], while it should really be [1,5,2,3,4]. This problem can be reformulated so that I want to create a "dense" integer array that has a maximum gap of 1 and keeps the ranking of the array intact.
UPDATE
I think that second point is the key here. This problem can be redefined as
A[N] - array of size N
B[N] - array of size N
I want to transform array A to array B so that:
Ranking of the elements stays the same (for each pair i,j: if A[i] < A[j] then B[i] < B[j])
Array B has only elements from 1 to N where each element is unique.
A few examples of this transformation:
[3,4,5] => [1,2,3]
[30,40,50] => [1,2,3]
[30,50,40] => [1,3,2]
[3,4,50] => [1,2,3]
A naive implementation (with sorting) can be defined like this (in Python)
def remap(a):
    a_ = sorted(a)
    b = [a_.index(e) + 1 for e in a]
    return b
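For what it's worth, the double argsort from #1 and #2 also answers the remap question directly; a sketch, assuming all elements are distinct as in the examples above:
import numpy as np

def remap_vectorized(a):
    a = np.asarray(a)
    # argsort of argsort gives each element's rank; +1 makes it 1-based
    return a.argsort().argsort() + 1

print(remap_vectorized([30, 50, 40]))   # -> [1 3 2]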