We have to find all possible permutations of an array WITHOUT making a copy of the array.
I know how to do it recursively using lists, and I'm assuming we need to do it recursively for arrays as well. My idea was to swap the elements in the array.
So something like this:
for i in range(len(A) - 1):
    A[i], A[i+1] = A[i+1], A[i]
But I do not know how to write a recursive function using arrays, especially since the length of the array is unknown and a swap only involves two elements.
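For reference, a minimal recursive sketch of the swap-based idea in Python (this is essentially Heap's algorithm, not something from the original post; it mutates A in place and never copies it):

def permutations(A, k=None):
    # Yield every permutation of A by swapping in place (Heap's algorithm).
    if k is None:
        k = len(A)
    if k == 1:
        yield A  # A itself is yielded; copy it if you need to keep it
        return
    for i in range(k - 1):
        yield from permutations(A, k - 1)
        if k % 2 == 0:
            A[i], A[k-1] = A[k-1], A[i]  # even k: swap element i with the last
        else:
            A[0], A[k-1] = A[k-1], A[0]  # odd k: swap the first with the last
    yield from permutations(A, k - 1)

For example, for p in permutations([1, 2, 3]): print(p) prints all six orderings; the only copying happens if the caller chooses to store the results.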
I have 2 JSON files. I am loading them into 2 multidimensional arrays in Python, and I need to compare them and return an array of elements deleted, added, and updated.
PS: I want to do this without assuming that I know the depth of the arrays.
Since the nested input for my array is variable, its depth is variable as well.
I know I can iterate through nested arrays in Ruby this way:
s.each do |sub_array|
  sub_array.each do |item|
    puts item
  end
end
But without knowing its depth beforehand, I won't have any success this way.
Is writing a recursive function the only possible way?
No, recursion is not the only way. It's possible to do this without recursion.
Iterating through a nesting of arrays of unknown depth is exactly the same as doing a DFS walk through a tree of unknown depth.
Although DFS is often presented recursively, you should be able to find a non-recursive example pretty easily.
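For illustration, a minimal non-recursive sketch (in Python for brevity, assuming the nesting is built from plain lists; an explicit stack replaces the call stack):

def walk(nested):
    # Depth-first walk over an arbitrarily nested list, without recursion.
    stack = [nested]
    while stack:
        item = stack.pop()
        if isinstance(item, list):
            stack.extend(reversed(item))  # reversed() preserves left-to-right order
        else:
            print(item)

For example, walk([1, [2, [3, 4]], 5]) prints the leaves 1 through 5 in order.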
I am running the following code:
import numpy

for i in range(1000):
    My_Array = numpy.concatenate((My_Array, New_Rows[i]), axis=0)
The above code is slow. Is there any faster approach?
This is basically what happens in all array-based algorithms.
Each time you change the size of the array, it needs to be resized and every element needs to be copied. That is happening here too. (Some implementations reserve empty slots, e.g. doubling the internal memory with each resize.)
If you have all your data at np.array creation time, just add it all at once (memory will be allocated only once then!).
If not, collect it with something like a linked list (allowing O(1) append operations), then convert it to an np.array at once (again, only one memory allocation).
This is not much of a numpy-specific topic, but much more about data structures.
Edit: as this quite vague answer got some upvotes, I feel the need to make clear that my linked-list approach is one possible example. As indicated in the comments, Python's lists are more array-like (and definitely not linked lists). But the core fact is: list.append() in Python is fast (amortized O(1)), while that's not true for numpy arrays! There is also a small part about the internals in the docs:
How are lists implemented?
Python’s lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array’s length in a list head structure.
This makes indexing a list a[i] an operation whose cost is independent of the size of the list or the value of the index.
When items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don’t require an actual resize.
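To make the collect-then-convert pattern concrete, here is a minimal sketch (the np.ones(3) rows are just placeholder data):

import numpy as np

rows = []                        # plain Python list; append is amortized O(1)
for i in range(1000):
    rows.append(np.ones(3))      # collect the pieces first
My_Array = np.concatenate(rows)  # one allocation and copy at the very end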
Maybe create an empty array with the correct size and then populate it?
If you have a list l of numpy arrays that all share the same shape, you could:
import numpy as np

arr = np.zeros((len(l),) + l[0].shape)
for i, v in enumerate(l):
    arr[i] = v
This works much faster for me; it only requires one memory allocation.
It depends on what New_Rows[i] is and what kind of array you want. If you start with lists (or 1d arrays) that you want to join end to end (to make a long 1d array), just concatenate them all at once. Concatenate takes a list of any length, not just 2 items.
np.concatenate(New_Rows, axis=0)
or maybe use an intermediate list comprehension (for more flexibility):
np.concatenate([row for row in New_Rows])
or, closer to your example:
np.concatenate([New_Rows[i] for i in range(1000)])
But if the New_Rows elements are all the same length, and you want a 2d array with one New_Rows value per row, np.array does a nice job:
np.array(New_Rows)
np.array([i for i in New_Rows])
np.array([New_Rows[i] for i in range(1000)])
np.array is designed primarily to build an array from a list of lists.
np.concatenate can also build in 2d, but the inputs need to be 2d to start with. vstack and stack can take care of that. But all those stack functions use some sort of list comprehension followed by concatenate.
In general it is better/faster to iterate or append with lists, and apply np.array (or concatenate) just once. Appending to a list is fast; much faster than making a new array.
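If you want to see the difference yourself, here is a rough benchmark sketch (the shapes and repeat counts are arbitrary choices of mine; absolute numbers will vary by machine):

import timeit

setup = "import numpy as np"
repeated_concat = """
a = np.empty((0,))
for i in range(1000):
    a = np.concatenate((a, np.ones(3)))
"""
append_then_convert = """
rows = []
for i in range(1000):
    rows.append(np.ones(3))
a = np.concatenate(rows)
"""
print("repeated concatenate:", timeit.timeit(repeated_concat, setup, number=20))
print("append then convert:", timeit.timeit(append_then_convert, setup, number=20))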
I think #thebeancounter's solution is the way to go.
If you do not know the exact size of your numpy array ahead of time, you can also take an approach similar to how the vector class is implemented in C++.
To be more specific, you can wrap the numpy ndarray in a new class which has a default size larger than your current needs. When the numpy array is almost fully populated, copy it into a larger one.
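A minimal sketch of that idea (the class name, default capacity, and doubling factor are my own illustrative choices, not from the answer):

import numpy as np

class GrowableArray:
    # Vector-like wrapper: over-allocate, then double the capacity when full.
    def __init__(self, row_shape, capacity=16):
        self._data = np.empty((capacity,) + row_shape)
        self._size = 0

    def append(self, row):
        if self._size == self._data.shape[0]:  # full: grow once, not on every append
            bigger = np.empty((2 * self._data.shape[0],) + self._data.shape[1:])
            bigger[:self._size] = self._data
            self._data = bigger
        self._data[self._size] = row
        self._size += 1

    def view(self):
        return self._data[:self._size]  # only the populated rows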
Assume you have a large list of 2D numpy arrays, all with the same number of columns but different numbers of rows, like this:
x = [numpy_array1(r_1, c),......,numpy_arrayN(r_n, c)]
Concatenate like this:
while len(x) != 1:
    if len(x) == 2:
        x = np.concatenate((x[0], x[1]))
        break
    # Merge adjacent pairs; a leftover odd element is simply carried
    # into the next round, so every chunk is merged exactly once.
    for i in range(0, len(x) - 1, 2):
        x[i] = np.concatenate((x[i], x[i+1]))
    x = x[::2]
I came across a question while preparing for my interview.
Given an array of integers as input.
We have to find a possible subset such that the elements in the subset have a common difference.
For example,
Consider the input array to be {1,3,7,10,11}
Then the output should be {3,7,11}.
The elements in the array are always in increasing order.
I thought of generating all the subsets and looking for a solution among them.
But that would make my program run slowly if the input array is too large.
Can anyone help me crack this?
From what I understand, you want to extract from an array the possible subsets in which every two consecutive numbers have the same difference.
Here is my algorithm:
Remove duplicates.
Sort in ascending order.
Keep a hashtable with the difference values as keys and lists of lists as values.
Loop through the array, updating/adding a key in the hashtable equal to the difference between the two consecutive numbers at hand, and appending a list containing the two numbers to the value (the list of lists).
After the loop, create an array. Loop through the hashtable, each time adding to the array an element which is itself an array: the merge of all nested lists in the value at hand. This is the array containing all possible subsets.
Here's a possible implementation in Python:
from itertools import chain
from collections import defaultdict

def find_subsets(array):
    table = defaultdict(list)                # difference -> list of pairs
    nums = sorted(set(array), reverse=True)  # dedupe, walk from largest down
    last = nums[0]
    for num in nums[1:]:
        diff = last - num
        table[diff].append([num, last])      # file the pair under its difference
        last = num
    # Merge all pairs sharing the same difference into one subset each.
    return [sorted(set(chain.from_iterable(v))) for v in table.values()]
Please try this code and correct it if wrong. I wrote this in a hurry.
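For example, assuming the corrected code above, find_subsets([1, 3, 7, 10, 11]) groups consecutive pairs by their difference and returns [[10, 11], [7, 10], [3, 7], [1, 3]]. Note that only consecutive pairs are grouped, so chaining pairs with equal differences across non-consecutive positions (to reach {3, 7, 11}) would still need extra work.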
I have multiple [1x3] arrays, which I named array1, array2, array3, and so on. I want to build one array from all of them, such that array(1:3) = array1, array(4:6) = array2, and so on. How can I do this with a loop? Any suggestions regarding my approach are welcome. I actually want to access multiple arrays dynamically, which is why I'm going with this approach; I'm also worried about slow computation and processing speed as my array size increases.
My Code:
for i = 1:10
    array(i) = array(:, i:i+3);
end
The easiest way is to use the cat function:
array = cat(2,array_1,array_2,array_3);
If you want to access array_i (i = 1, 2, 3, ...):
array_i = array((i-1)*3+1:i*3);
The jth element (j = 1, 2, 3) of array_i (i = 1, 2, 3, 4, ...) can be accessed as:
jth_index_of_array_i = array((i-1)*3+j)
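For example, with i = 2 and j = 3 this picks out array((2-1)*3 + 3) = array(6), i.e. the last entry of array_2.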