I have come across a line of code that works for the task I am doing, but I do not understand it. I would like someone to please explain what it means.
import numpy as np

b = (3, 1, 2, 1)
a = 2
q = np.zeros(b + (a,))
I would like to know why the length of q is always the first entry of b.
For example, len(q) = 3 here.
If b = (1, 2, 4, 3), then len(q) = 1.
This is really confusing, as I thought that the function len returns the number of columns of a given array. Also, how do I get the number of rows of q? So far the only things I have found are len(q), q.size (which gives the total number of elements in q) and q.shape (whose output I also do not quite get, because in the latter case q.shape = b + (a,) = (1, 2, 4, 3, 2)).
Is there a function that could return the size of the array in terms of the number of rows and columns, for example 24x2?
Thank you in advance.
In Python, a plain array (i.e. a list) has only one dimension, which is why len(array) returns a single number.
Assuming that you have a 'matrix' in the form of a list of lists, like this:
1 2 3
4 5 6
7 8 9
declared like
mat = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
you can determine the number of rows and columns with the following commands:
rows = len(mat)
columns = len(mat[0])
Note that this only works if the number of elements in each row is constant.
If you are using numpy to make the arrays, another way to get the number of rows and columns is to use the tuple returned by the np.shape() function. Here is a complete example:
import numpy as np
mat = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
rownum = np.shape(mat)[0]
colnum = np.shape(mat)[1]
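To tie this back to the question: for a NumPy array, len(q) is simply the size of the first axis, i.e. q.shape[0], which is why it always equals the first entry of b. A short sketch illustrating this (and, assuming a 24x2 layout is what you were after, one way to view the data as rows by columns):

import numpy as np

b = (1, 2, 4, 3)
a = 2
q = np.zeros(b + (a,))         # concatenating the tuples gives a 5-dimensional array

print(q.shape)                 # (1, 2, 4, 3, 2)
print(len(q))                  # 1  -> the size of the first axis, same as q.shape[0]
print(q.size)                  # 48 -> the total number of elements
print(q.reshape(-1, a).shape)  # (24, 2) -> the data viewed as 24 rows of 2 columns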
import numpy as np

mean = [0, 0]
cov = [[1, 0], [0, 100]]
gg = np.random.multivariate_normal(mean, cov, size=[5, 12])
I get an array which has 2 columns and 12 rows; I want to take the first column, which includes all 12 rows, and convert it into columns. What is the appropriate method for slicing, and how can one convert the result into columns? To be precise, looking at the second screenshot, one should take all of the column-0 values and convert them in the normal way, from left to right.
The results should look like the first screenshot.
The problem is that your array gg is not two- but three-dimensional. So, what you need is in fact the first column of each stacked 2D array. Here is an example:
import numpy as np
x = np.random.randint(0, 10, (3, 4, 5))
x[:, :, 0].flatten()
The colon in slicing means "all values in this dimension". So x[:, :, 0] means "all values in the first dimension, all values in the second dimension, and the third dimension fixed at index 0". This results in a two-dimensional array, which you then have to flatten.
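For the gg array from the question (which has shape (5, 12, 2), since size=[5, 12] stacks 5 blocks of 12 two-dimensional samples), the same slicing pattern extracts every value from column 0. A small sketch:

import numpy as np

mean = [0, 0]
cov = [[1, 0], [0, 100]]
gg = np.random.multivariate_normal(mean, cov, size=[5, 12])

print(gg.shape)                    # (5, 12, 2)
first_col = gg[:, :, 0].flatten()  # all column-0 values from every stacked block
print(first_col.shape)             # (60,)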
I have the following 2 arrays:
import numpy as np

arr = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [7, 5, 6, 3],
                [2, 4, 8, 9]])
ids = np.array([6, 5])
Each row in the array arr describes a 4-digit id; there are no redundant ids, neither in their values nor in their combination. So if [1, 2, 3, 4] exists, no other combination of these 4 digits can exist. This will be important in a second.
The array ids contains a 2-digit id, but the order is random. Now I need to go through each row of arr and check whether this 2-digit partial id is part of any 4-digit id. In this example, ids fits the 2nd and 3rd rows from the top of arr.
My current solution with np.isin only works if the array ids also has a 4-digit row.
arr[np.isin(arr, ids).all(1)]
Changing all(1) to any(1) doesn't do the trick either, because then it would be enough if just one digit of ids were in a row of arr; however, I need both values.
Does anyone have a compact solution?
You just need the boolean index to accept only the rows where the count is 2. When doing non-boolean operations like sum with boolean arrays, True and False values are interpreted as 1 and 0:
arr[np.isin(arr, ids).sum(1) == 2]
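For illustration, this is roughly what the intermediate steps look like with the arr and ids from the question (a sketch that just prints the per-row counts and the final selection):

import numpy as np

arr = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [7, 5, 6, 3],
                [2, 4, 8, 9]])
ids = np.array([6, 5])

mask = np.isin(arr, ids)        # True wherever an element of arr appears in ids
print(mask.sum(1))              # [0 2 2 0] -> number of matching digits per row
print(arr[mask.sum(1) == 2])    # [[5 6 7 8]
                                #  [7 5 6 3]]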
I need to develop an algorithm which accepts two numbers m and n (the dimensions of a 2D array) as input and generates a 2D array filled with the numbers [1..m*n] under the following condition:
No element adjacent to a given element can be equal to currentElement + 1
Adjacent elements are located on the two/three/four sides (depending on position) of a given element:
0 1 0
1 2 1
0 1 0
(E.g. the four 1s are adjacent to the 2)
Example:
Input: m = 3, n = 3 (it does not necessarily have to be a square matrix)
(Sample) output:
[
[7, 2, 5],
[1, 6, 9],
[3, 8, 4]
]
Note that there may apparently exist more than one possible output. In that case, the numbers in the array have to be generated randomly (though still meeting the conditions), not following any preset sequence (e.g. not [[1, 3, 5], [4, 6, 2], [7, 9, 8]], because it clearly uses a non-randomly generated sequence of numbers: odds first, then evens, etc.).
Basically, for the same input, on two different occasions, two different arrays should be generated.
P.S.: this was a coding interview question and I wonder how I could solve it, so any help is highly appreciated.
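Not knowing what the interviewer had in mind, one simple way to get started is rejection sampling: shuffle the numbers 1..m*n, lay them out in the grid, and retry until the adjacency condition holds. A minimal sketch (fine for small grids; a constructive or backtracking approach would scale better):

import random

def valid(grid, m, n):
    # No orthogonal neighbour of a cell may hold that cell's value + 1.
    for i in range(m):
        for j in range(n):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < m and 0 <= nj < n and grid[ni][nj] == grid[i][j] + 1:
                    return False
    return True

def generate(m, n):
    nums = list(range(1, m * n + 1))
    while True:
        random.shuffle(nums)                                # random, so repeated calls differ
        grid = [nums[r * n:(r + 1) * n] for r in range(m)]
        if valid(grid, m, n):
            return grid

print(generate(3, 3))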
I'm not looking for any code or for anything to be done for me. I need some help getting started in the right direction but do not know how to go about it. If someone could provide some resources on how to go about solving these problems, I would very much appreciate it. I've sat with my notebook and am having trouble designing an algorithm that can do what I'm trying to do.
I can probably do:
foreach i in array1
    foreach j in array2
        check if array1[i] == array2[j] + x
I believe this would work for both forward and backward sequences, and for the multiples I would just check array1[i] % array2[j] == 0. I have a list which contains int arrays and I am taking list[index] (for array1) and list[index+1] (for array2), but this solution can get complex and lengthy fast, especially with large arrays and a large list of those arrays. Thus, I'm searching for a better solution.
I'm trying to come up with an algorithm for finding sequential numbers in different arrays.
For example:
[1, 5, 7] and [9, 2, 11] would find that 1 and 2 are sequential.
This should also work for multiple sequences in multiple arrays. So if there is a third array of [24, 3, 15], it will also include 3 in that sequence, and continue on to the next array until there isn't a number that matches the last sequential element + 1.
It also should be able to find more than one sequence between arrays.
For example:
[1, 5, 7] and [6, 3, 8] would find that 5 and 6 are sequential and also 7 and 8 are sequential.
I'm also interested in finding reverse sequences.
For example:
[1, 5, 7] and [9, 4, 11] would return that 5 and 4 are reverse sequential.
Example with all:
[1, 5, 8, 11] and [2, 6, 7, 10] would return 1 and 2 are sequential, 5 and 6 are sequential, 8 and 7 are reverse sequential, 11 and 10 are reverse sequential.
It can also overlap:
[1, 5, 7, 9] and [2, 6, 11, 13] would return 1 and 2 sequential, 5 and 6 sequential and also 7 and 6 reverse sequential.
I also want to expand this to check numbers with a difference of x (above examples check with a difference of 1).
In addition to all of that (although this might be a different question), I also want to check for multiples,
Example:
[5, 7, 9] and [10, 27, 8] would return 5 and 10 as multiples, 9 and 27 as multiples.
and numbers with the same ones place.
Example:
[3, 5, 7] and [13, 23, 25] would return that 3, 13 and 23 have the same ones digit.
Use a dictionary (set or hashmap)
dictionary1 = {}
Go through each item in the first array and add it to the dictionary.
[1, 5, 7]
Now dictionary1 = {1:true, 5:true, 7:true}
dictionary2 = {}
Now go through each item in [6, 3, 8] and look up whether it's part of a sequence.
6 is part of a sequence because dictionary1[6+1] == true
so dictionary2[6] = true
We get dictionary2 = {6:true, 8:true}
Now set dictionary1 = dictionary2 and dictionary2 = {}, and go to the third array.. and so on.
We only keep track of sequences.
Since each lookup is O(1) and we do 2 lookups per number (e.g. 6-1 and 6+1), the total is N*O(1), which is O(N) (where N is the number of numbers across all the arrays).
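A minimal Python sketch of this pass-by-pass idea (using plain sets rather than dicts, doing both the +x and -x lookups so forward and reverse pairs are reported, and carrying only the values that continued a sequence into the next pass):

def find_sequences(arrays, x=1):
    # Returns (previous value, current value, kind) tuples for consecutive arrays.
    results = []
    prev = set(arrays[0])
    for arr in arrays[1:]:
        current = set()
        for value in arr:
            if value - x in prev:                   # e.g. 5 in [1, 5, 7], then 6 here
                results.append((value - x, value, 'forward'))
                current.add(value)
            if value + x in prev:                   # e.g. 7 in [1, 5, 7], then 6 here
                results.append((value + x, value, 'reverse'))
                current.add(value)
        prev = current                              # only keep track of sequences
    return results

print(find_sequences([[1, 5, 7], [6, 3, 8]]))
# [(5, 6, 'forward'), (7, 6, 'reverse'), (7, 8, 'forward')]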
The brute force approach outlined in your pseudocode will be O(c^n) (exponential), where c is the average number of elements per array and n is the total number of arrays.
If the input space is sparse (meaning there will be more missing numbers on average than present numbers), then one way to speed up this process is to first create a single sorted set of all the unique numbers from all your different arrays. This "master" set will then allow you to exit early (i.e. break statements in your loops) on any sequences which are not viable.
For example, if we have input arrays [1, 5, 7] and [6, 3, 8] and [9, 11, 2], the master ordered set would be {1, 2, 3, 5, 6, 7, 8, 9, 11}. If we are looking for n+1 type sequences, we could skip ever continuing any sequence that contains a 3, 9 or 11 (because the n+1 value is not present at the next index in the sorted set). While the speedups are not drastic in this particular example, if you have hundreds of input arrays and a very large range of values for n (sparsity), then the speedups should be exponential because you will be able to exit early on many permutations. If the input space is not sparse (such as in this example, where we didn't have many holes), the speedups will be less than exponential.
A further improvement would be to store a "master" set of key-value pairs, where the key is the n value as shown in the example above, and the value portion of the pair is a list of the indices of any arrays that contain that value. The master set of the previous example would then be: {[1, 0], [2, 2], [3, 1], [5, 0], [6, 1], [7, 0], [8, 1], [9, 2], [11, 2]}. With this architecture, scan time could potentially be as low as O(c*n), because you could just traverse this single sorted master set looking for valid sequences instead of looping over all the sub-arrays. By also requiring the array indexes to increment, you can clearly see that the 1->2 sequence can be skipped because the arrays are not in the correct order, and the same with the 2->3 sequence, etc. Note this toy example is somewhat oversimplified because in practice you would need a list of indices for the value portions of the key-value pairs. This would be necessary if the same value of n ever appeared in multiple arrays (duplicate values).
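A rough sketch of that second idea, assuming a sequence must continue in the immediately following array (one reading of "requiring the array indexes to increment") and storing a list of array indices per value so duplicates are handled:

def find_sequences_master(arrays, x=1):
    # Build the master map: value -> list of indices of the arrays containing it.
    master = {}
    for idx, arr in enumerate(arrays):
        for value in arr:
            master.setdefault(value, []).append(idx)

    results = []
    for value in sorted(master):
        if value + x not in master:
            continue                                # early exit: no possible continuation
        for i in master[value]:
            if i + 1 in master[value + x]:          # continuation sits in the next array
                results.append((value, value + x))
    return results

print(find_sequences_master([[1, 5, 7], [6, 3, 8], [9, 11, 2]]))
# [(5, 6), (7, 8), (8, 9)]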
I have a numpy 2D array and I want it to return the entries in column c for the rows r where (r, c-1) (row r, column c-1) equals a certain value (int n).
I don't want to iterate over the rows, writing something like
result = []
for r in range(array.shape[0]):
    if array[r, c-1] == 1:
        result.append(array[r, c])
because there are 4000 of them and this 2D array is just one of 20 I have to look through.
I found "filter" but don't know how to use it (I found no docs).
Is there a function that provides such a search?
I hope I understood your question correctly. Let's say you have an array a
import numpy as np

a = np.array(list(range(7)) * 3).reshape(7, 3)
print(a)
array([[0, 1, 2],
[3, 4, 5],
[6, 0, 1],
[2, 3, 4],
[5, 6, 0],
[1, 2, 3],
[4, 5, 6]])
and you want to extract all rows where the first entry is 2. This can be done like this:
print(a[a[:, 0] == 2])
array([[2, 3, 4]])
a[:,0] denotes the first column of the array, == 2 returns a Boolean array marking the entries that match, and then we use advanced indexing to extract the respective rows.
Of course, NumPy needs to iterate over all entries, but this will be much faster than doing it in Python.
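Applied to the original question (return the column-c entries of the rows where column c-1 equals n), the same boolean indexing gives a one-liner; a sketch with made-up values for the array, c and n:

import numpy as np

array = np.array([[0, 1, 2],
                  [3, 1, 5],
                  [6, 0, 1]])
c, n = 2, 1                                 # hypothetical column index and target value
result = array[array[:, c - 1] == n, c]     # column-c values where column c-1 equals n
print(result)                               # [2 5]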
Numpy arrays are not indexed. If you need to perform this specific operation more efficiently than linear in the array size, then you need to use something other than numpy.