A question that involves permutations of pairs of row elements - arrays

Consider two numpy arrays of integers. U has 2 columns and lists all pairs (p, q) with p < q. For this question, I'll restrict myself to 0 <= p, q <= 5. The cardinality of U is C(6,2) = 15.
U = [[0,1],
[0,2],
[0,3],
[0,4],
[0,5],
[1,2],
[1,3],
[1,4],
[1,5],
[2,3],
[2,4],
[2,5],
[3,4],
[3,5],
[4,5]]
The 2nd array, V, has 6 columns. I formed it by finding the cartesian product UxUxU. So, the first row of V is [0,1,0,1,0,1], and the last row is [4,5,4,5,4,5]. The cardinality of V is C(6,2)^3 = 3375.
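For reference, here is one way such arrays can be built (my own construction; the question only describes U and V):

import numpy as np
from itertools import combinations, product

# all pairs (p, q) with 0 <= p < q <= 5: C(6, 2) = 15 rows
U = np.array(list(combinations(range(6), 2)))

# cartesian product U x U x U, flattened to 6 columns: 15**3 = 3375 rows
V = np.array([a + b + c for a, b, c in product(map(tuple, U), repeat=3)])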
A SMALL SAMPLE of V, used in my question, is shown below. The elements of each row should be thought of as 3 pairs. The rationale follows.
V = [[0,1, 2,5, 2,4],
[0,1, 2,5, 2,5],
[0,1, 2,5, 3,4],
[0,1, 2,5, 3,5],
[0,1, 2,5, 4,0],
[0,1, 2,5, 4,1]]
Here's why the row elements should be thought of as a set of 3 pairs: Later in my code, I will loop through each row of V, using the pair values to 'swap' columns of a matrix M. (M is not shown because it isn't needed for this question) When we get to row [0,1, 2,5, 2,4], for example, we will swap the columns of M having indices 0 & 1, THEN swap the columns having indices 2 & 5, and finally, swap the columns having indices 2 & 4.
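Since M isn't shown, here is only a hypothetical sketch of the swap loop just described, assuming M is any array with at least 6 columns:

for p1, q1, p2, q2, p3, q3 in V:
    Mi = M.copy()
    for p, q in ((p1, q1), (p2, q2), (p3, q3)):
        Mi[:, [p, q]] = Mi[:, [q, p]]  # swap columns p and q
    ...  # use Mi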
I'm currently wasting a lot of time because many of the rows of V could be eliminated.
The easiest case to understand involves V rows like [0,1, 2,5, 3,4] where all values are unique. This row has 6 pair permutations, but they all have the same net effect on M. Their values are unique, so none of the swaps will encounter 'interference' from another swap.
Question 1: How can I efficiently eliminate the rows that have all-unique elements but are merely unneeded permutations of another row's pairs?
I would keep, say, [0,1, 2,5, 3,4], but drop:
[0,1, 3,4, 2,5],
[2,5, 0,1, 3,4],
[2,5, 3,4, 0,1],
[3,4, 0,1, 2,5],
[3,4, 2,5, 0,1]
I'm guessing a solution would involve np.sort and np.unique, but I'm struggling with getting a good result.
Question 2: (I don't think it's reasonable to expect a full answer to this question, but I'd certainly appreciate any pointers or tips on resources that I could study.) This question involves rows of V having one or more common elements, like [0,1, 2,5, 2,4] or [0,5, 2,5, 2,4] or [0,5, 2,5, 3,5]. All of these have 6 pair permutations, but they don't all have the same effect on M. The row [0,1, 2,5, 2,4], for example, has 3 permutations that produce one M outcome, and 3 permutations that produce another. Ideally, I would like to keep two of the rows but eliminate the other four. The two other rows I showed are even more 'pathological'.
Does anyone see a path forward here that would allow more eliminations of V rows? If not, I'll continue what I'm currently doing even though it's really inefficient - screening the code's final outputs for duplicates.

To get rows of an array, without repetitions (in your sense), you can run:
import numpy as np

sorted_data = np.sort(V, axis=1)  # each row in canonical (sorted) form
order = np.lexsort(np.hstack([sorted_data, V])[:, ::-1].T)
VbyRows, sorted_data = V[order], sorted_data[order]
result = VbyRows[np.append([True], np.any(np.diff(sorted_data, axis=0), axis=1))]
Details:
sorted_data = np.sort(V, axis=1) - sort each row (and save it as a
separate array). Two rows contain the same numbers exactly when their
sorted versions are equal.
order = np.lexsort(np.hstack([sorted_data, V])[:, ::-1].T) - order the
rows primarily by their sorted version, so that rows containing the same
numbers become adjacent (lexsorting by the raw columns alone would not
guarantee that), breaking ties by the raw columns, first column first.
I used ::-1 as the column index because lexsort treats its last key as
the primary one.
VbyRows, sorted_data = V[order], sorted_data[order] - apply that order
to both arrays, keeping them aligned row for row.
np.diff(sorted_data, axis=0) - compute "vertical" differences between
the previous and current row (in sorted_data).
np.any(..., axis=1) - a bool vector, one entry for each row of
sorted_data but the first: does it differ from the previous row at any
position?
np.append([True], ...) - prepend the above result with True (an
indicator that the first row should be included in the result).
The result is also a bool vector, this time for all rows. Each element
answers the question: should the respective row from VbyRows be
included in the result?
result = VbyRows[np.append([True], np.any(np.diff(sorted_data, axis=0), axis=1))] -
the final result.
To test the above code I prepared V as follows:
array([[ 0, 1, 2, 5, 3, 4],
[ 0, 1, 3, 4, 2, 5],
[ 2, 5, 0, 1, 3, 4],
[ 2, 5, 3, 4, 0, 1],
[ 3, 4, 0, 1, 2, 5],
[13, 14, 12, 15, 10, 11],
[ 3, 4, 2, 5, 0, 1]])
(the next-to-last row is the "other" one; all remaining rows contain the
same numbers in various orders).
The result is:
array([[ 0, 1, 2, 5, 3, 4],
[13, 14, 12, 15, 10, 11]])
Note that using the raw columns as the tie-breaker in the lexsort
ensures that, among rows containing the same set of numbers, the row
returned is the first in column-by-column (lexicographic) order.
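As an aside (my addition, assuming NumPy 1.13+ where np.unique accepts axis): np.unique with return_index can replace the diff bookkeeping entirely, since it reports the first occurrence of each unique sorted row even when equal rows aren't adjacent:

VbyRows = V[np.lexsort(V[:, ::-1].T)]   # rows in lexicographic order
canon = np.sort(VbyRows, axis=1)        # canonical (sorted) form of each row
_, first = np.unique(canon, axis=0, return_index=True)
result = VbyRows[np.sort(first)]        # first row per canonical form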

Related

Slicing numpy arrays

mean = [0, 0]
cov = [[1, 0], [0, 100]]
gg = np.random.multivariate_normal(mean, cov, size = [5, 12])
I get an array which has 2 columns and 12 rows; I want to take the first column, which will include all 12 rows, and convert it to columns. What is the appropriate method for slicing, and how can one convert the result to columns? To be precise, I want to take every 0th column and lay the values out in the normal way, from left to right (the question originally illustrated the input and the desired result with screenshots).
The problem is that your array gg is not two- but three-dimensional. So, what you need is in fact the first column of each stacked 2D array. Here is an example:
import numpy as np
x = np.random.randint(0, 10, (3, 4, 5))
x[:, :, 0].flatten()
The colon in slicing means "all values in this dimension". So x[:, :, 0] means "all values in the first dimension and all values in the second dimension, with the third dimension fixed at index 0". This results in a two-dimensional array, which you then have to flatten.
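Applied to the gg array from the question (whose shape is (5, 12, 2)), the same pattern gives, for example (first_components and as_column are my own names):

first_components = gg[:, :, 0].flatten()     # all "column 0" values as one 1-D array
as_column = first_components.reshape(-1, 1)  # or as a single column, if preferred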

Searching 2D numpy array of ids for partial id

I have the following 2 arrays:
arr = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [7, 5, 6, 3],
                [2, 4, 8, 9]])
ids = np.array([6, 5])
Each row in the array arr describes a 4-digit id, there are no redundant ids - neither in their values nor their combination. So if [1, 2, 3, 4] exists, no other combination of these 4 digits can exist. This will be important in a sec.
The array ids contains a 2-digit id, but its order is random. Now I need to go through each row of arr and check whether this 2-digit partial id is part of any 4-digit id. In this example, ids fits the 2nd and 3rd rows from the top of arr.
My current solution with np.isin only works if the array ids has also a 4-digit row.
arr[np.isin(arr, ids).all(1)]
Changing all(1) to any(1) doesn't do the trick either, because then it would be enough for just one digit of ids to appear in a row of arr; however, I need both values.
Does anyone have a compact solution?
You just need the boolean index to accept only rows whose match count is 2. In non-boolean operations like sum, True and False are interpreted as 1 and 0:
arr[np.isin(arr, ids).sum(1) == 2]
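For completeness, a quick check of that one-liner against the arrays from the question (my own test):

import numpy as np

arr = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [7, 5, 6, 3],
                [2, 4, 8, 9]])
ids = np.array([6, 5])

print(arr[np.isin(arr, ids).sum(1) == 2])
# [[5 6 7 8]
#  [7 5 6 3]]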

How can I create a function that combines list/array rows/columns/elements in arbitrary sized array/list?

Afternoon. I'm currently trying to create a function that, when given an array or list and a specified selection of columns/rows/elements, removes the specified columns/rows/etc. and concatenates them into a new array/list, much in this fashion (but for arbitrarily sized objects that may or may not be pretty big):
a = [1 2 3        b = ['a','b','c'
     4 5 6             'd','e','f'
     7 8 9]            'g','h','i']
Now, let's say I want the first and third columns. Then this would look like
a' = [1 3         b' = ['a','c'
      4 6               'd','f'
      7 9]              'g','i']
I'm familiar with slicing indices and extracting them using numpy. Where I'm really hung up is creating some object (a list or array of arrays/lists?) that contains the chosen columns/whatever (above, I chose the first and third columns, as you can see) and then iterating over that object to build a concatenated/combined list of what I've specified (i.e., if I'm given an array with 127 variables, I want to extract an arbitrary set of columns at a given time).
Thanks for taking a look. Let me know how to update the op if anything is unclear.
How is this different from advanced indexing?
In [324]: A = np.arange(12).reshape(2,6)
In [325]: A
Out[325]:
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11]])
In [326]: A[:,[1,2,4]]
Out[326]:
array([[ 1, 2, 4],
[ 7, 8, 10]])
To select both rows and columns you have to pay attention to index broadcasting:
In [327]: A = np.arange(24).reshape(4,6)
In [328]: A[[[1],[3]], [1,2,4]] # row index (as a column) and column index
Out[328]:
array([[ 7, 8, 10],
[19, 20, 22]])
In [329]: A[np.ix_([1,3], [1,2,4])] # easier with ix_()
Out[329]:
array([[ 7, 8, 10],
[19, 20, 22]])
https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#purely-integer-array-indexing
The index arrays/lists can be assigned to variables - the input to the A indexing can be a tuple.
In [330]: idx = [[1,3],[1,2,4]]
In [331]: idx1 = np.ix_(*idx)
In [332]: idx1
Out[332]:
(array([[1],
[3]]), array([[1, 2, 4]]))
In [333]: A[idx1]
Out[333]:
array([[ 7, 8, 10],
[19, 20, 22]])
And to expand a set of slices and indices into a single array, np.r_ is handy (though not magical):
In [335]: np.r_[slice(0,5),7,6, 3:6]
Out[335]: array([0, 1, 2, 3, 4, 7, 6, 3, 4, 5])
There are other indexing tools, utilities in indexing_tricks, functions like np.delete and np.take.
Try np.source(np.delete) to see how that handles general purpose deletion.
You could use a double list comprehension
>>> def select(arr, rows, cols):
... return [[el for j, el in enumerate(row) if j in cols] for i, row in enumerate(arr) if i in rows]
...
>>> select([[1,2,3,4],[5,6,7,8],[9,10,11,12]],(0,2),(1,3))
[[2, 4], [10, 12]]
>>>
Please note that, independent of the order of indices in rows and cols, select doesn't reorder the rows and columns of the input. Note also that using the same index repeatedly in either rows or cols does not give you duplicated rows or columns, and that select works only for lists of lists. That said, I'd advise you to use numpy instead, which is hugely more flexible and far more efficient.
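For comparison (my addition), the same selection done with the np.ix_ approach from the earlier answer:

import numpy as np

arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(arr[np.ix_([0, 2], [1, 3])])  # rows 0 and 2, columns 1 and 3
# [[ 2  4]
#  [10 12]]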

Efficient way of finding sequential numbers across multiple arrays?

I'm not looking for any code or to have anything done for me. I need some help getting started in the right direction but don't know how to go about it. If someone could provide some resources on how to solve these problems, I would very much appreciate it. I've sat with my notebook and am having trouble designing an algorithm that does what I'm trying to do.
I can probably do:
for i in range(len(array1)):
    for j in range(len(array2)):
        if array1[i] == array2[j] + x:
            ...  # record the match
I believe this would work for both forward and backward sequences, and for the multiples just check array1[i] % array2[j] == 0. I have a list which contains int arrays and am getting list[index] (for array1) and list[index+1] for array2, but this solution can get complex and lengthy fast, especially with large arrays and a large list of those arrays. Thus, I'm searching for a better solution.
I'm trying to come up with an algorithm for finding sequential numbers in different arrays.
For example:
[1, 5, 7] and [9, 2, 11] would find that 1 and 2 are sequential.
This should also work for multiple sequences in multiple arrays. So if there is a third array of [24, 3, 15], it will also include 3 in that sequence, and continue on to the next array until there isn't a number that matches the last sequential element + 1.
It also should be able to find more than one sequence between arrays.
For example:
[1, 5, 7] and [6, 3, 8] would find that 5 and 6 are sequential and also 7 and 8 are sequential.
I'm also interested in finding reverse sequences.
For example:
[1, 5, 7] and [9, 4, 11] would return that 5 and 4 are reverse sequential.
Example with all:
[1, 5, 8, 11] and [2, 6, 7, 10] would return 1 and 2 are sequential, 5 and 6 are sequential, 8 and 7 are reverse sequential, 11 and 10 are reverse sequential.
It can also overlap:
[1, 5, 7, 9] and [2, 6, 11, 13] would return 1 and 2 sequential, 5 and 6 sequential and also 7 and 6 reverse sequential.
I also want to expand this to check numbers with a difference of x (above examples check with a difference of 1).
In addition to all of that (although this might be a different question), I also want to check for multiples,
Example:
[5, 7, 9] and [10, 27, 8] would return 5 and 10 as multiples, 9 and 27 as multiples.
and numbers with the same ones place.
Example:
[3, 5, 7] and [13, 23, 25] would return 3 and 13 and 23 have the same ones digit.
Use a dictionary (set or hashmap):
dictionary1 = {}
Go through each item in the first array and add it to the dictionary.
[1, 5, 7]
Now dictionary1 = {1: True, 5: True, 7: True}
dictionary2 = {}
Now go through each item in [6, 3, 8] and look up whether it's part of a sequence.
6 is part of a sequence because dictionary1[6 + 1] == True,
so dictionary2[6] = True.
We get dictionary2 = {6: True, 8: True}.
Now set dictionary1 = dictionary2 and dictionary2 = {}, and go to the third array, and so on.
We only keep track of sequences.
Since each lookup is O(1) and we do 2 lookups per number (e.g. 6 - 1 and 6 + 1), the total is N * O(1), which is O(N) (N is the total count of numbers across all the arrays).
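A minimal Python sketch of that scan (my own, assuming a difference of 1 and reporting both forward and reverse links; sequential_links is my name):

def sequential_links(arrays, step=1):
    current = set(arrays[0])        # values that could start or extend a chain
    links = []
    for arr in arrays[1:]:
        nxt = set()
        for v in arr:
            if v - step in current:          # forward: previous array had v - step
                links.append((v - step, v))
                nxt.add(v)
            if v + step in current:          # reverse: previous array had v + step
                links.append((v + step, v))
                nxt.add(v)
        current = nxt                        # only sequence members carry forward
    return links

print(sequential_links([[1, 5, 7], [6, 3, 8]]))
# [(5, 6), (7, 6), (7, 8)]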
The brute-force approach outlined in your pseudocode will be O(c^n) (exponential), where c is the average number of elements per array and n is the total number of arrays.
If the input space is sparse (meaning there will be more missing numbers on average than present numbers), then one way to speed up this process is to first create a single sorted set of all the unique numbers from all your different arrays. This "master" set will then allow you to exit early (i.e. break out of your loops) on any sequence that is not viable.
For example, if we have input arrays [1, 5, 7] and [6, 3, 8] and [9, 11, 2], the master ordered set would be {1, 2, 3, 5, 6, 7, 8, 9, 11}. If we are looking for n+1 type sequences, we could skip checking any sequence that contains 3, 9, or 11 (because the n+1 value is not present at the next index in the sorted set). While the speedups are not drastic in this particular example, if you have hundreds of input arrays and a very large range of values for n (sparsity), the speedups should be exponential because you will be able to exit early on many permutations. If the input space is not sparse (such as in this example, where we didn't have many holes), the speedups will be less than exponential.
A further improvement would be to store a "master" set of key-value pairs, where the key is the n value as shown in the example above, and the value portion of the pair is a list of the indices of any arrays that contain that value. The master set of the previous example would then be {1: [0], 2: [2], 3: [1], 5: [0], 6: [1], 7: [0], 8: [1], 9: [2], 11: [2]}. With this architecture, scan time could potentially be as low as O(c*n), because you could just traverse this single sorted master set looking for valid sequences instead of looping over all the sub-arrays. By also requiring the array indices to increment, you can clearly see that the 1->2 sequence can be skipped because the arrays are not in the correct order, and the same with the 2->3 sequence, etc. Note that the value portions really do need to be lists of indices, because the same value of n can appear in multiple arrays (duplicate values).
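A hedged sketch of that master structure (build_master is my name; the answer only describes the idea):

from collections import defaultdict

def build_master(arrays):
    master = defaultdict(list)           # value -> indices of arrays containing it
    for idx, arr in enumerate(arrays):
        for value in arr:
            master[value].append(idx)
    return dict(sorted(master.items()))  # ordered by value, like a sorted set

print(build_master([[1, 5, 7], [6, 3, 8], [9, 11, 2]]))
# {1: [0], 2: [2], 3: [1], 5: [0], 6: [1], 7: [0], 8: [1], 9: [2], 11: [2]}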

Calling Groups of Elements of Matlab Arrays

I'm dealing with long daily time series in Matlab, running over periods of 30-100+ years. I've been meaning to start looking at the data by season, roughly approximating seasons by taking 91-day segments of each year over the whole period (with some TBD method of correcting for the odd number of days in a year).
Basically, what I want is an array indexing method that allows me to make a new array that takes 91 elements every 365 elements, starting at element 1. I've been looking for some normal array methods (some (:) or other), but I haven't been able to find one. I guess an alternative would be to kind of iterate over 365-day segments 91 times, but that seems needlessly complicated.
Is there a simpler way that I've missed?
Thanks in advance for the help!
So if I understand correctly, you want to extract elements 1-91, 366-456, 731-821, and so on? I'm not sure there is a way to do this with basic matrix indexing, but you can do the following:
days = 1:365; %create array ranging from 1 - 365
difference = length(data) - 365; %how much bigger is the time series data?
padded = padarray(days, [0, difference], 'circular', 'post'); %extend (at the end only) to fit the time series
extracted = data(padded <= 91); %get every element in the range 1 - 91
Basically what I am doing is creating an array that is the same size as your time series data and that repeats 1-365 over and over (the 'post' option makes padarray pad only at the end; by default it pads both sides). I then perform logical indexing on data, keeping the elements where the padded array is less than or equal to 91.
As a more approachable example, consider:
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
days = 1:5;
difference = length(x) - 5;
padded = padarray(days, [0, difference], 'circular', 'post');
extracted = x(padded <= 2);
padded is then equal to [1, 2, 3, 4, 5, 1, 2, 3, 4, 5] and extracted will be [1, 2, 6, 7].
