I have multiple [1x3] arrays, named array1, array2, array3, and so on. What I want is to create one array from all of them, such that array(1:3) = array1, array(4:6) = array2, and so on. How can I do this with a loop? Any suggestions about my approach are welcome. I actually want to access the multiple arrays dynamically, which is why I'm taking this approach, but I'm worried that computation and processing will slow down as my array size increases.
My Code:
for i = 1:10
    % hypothetical fix: place each 1x3 array into its 3-element slot,
    % accessing the numbered variables dynamically via eval
    array((i-1)*3+1 : i*3) = eval(sprintf('array%d', i));
end
The easiest way is to use the cat function:
array = cat(2,array_1,array_2,array_3);
If you want to access array_i (i=1,2,3,...)
array_i = array((i-1)*3+1:i*3);
The jth element (j=1,2,3) of array_i (i=1,2,3,...) can be accessed with:
jth_index_of_array_i = array((i-1)*3+j)
I have a 2D array, and have computed necessary updates along a given dimension of it using a 1D array (said updates can't be computed in place as earlier calculations would override values needed in later calculations). I thus want to copy the updates into my 2D array. The most obvious way to do this would, at first glance, appear to be to use Array slicing and Array.blit.
I have tried the approach of extracting the relevant dimension using array slicing, and then blitting across to that, but that doesn't update the values inside the 2D array. I think what is happening is that a new, separate, 1D array is being created when I make the slice, and the values are being blitted into that new array, which of course is dropped a moment later when it goes back out of scope.
I suppose you could say that I was expecting the slicing to return a view into the 2D array which would work for the blit function call, but instead the slicing actually returns a new array with the values copied into it (which, thinking about it, is what slicing does otherwise, I believe).
Currently I am using a workaround whereby I create a 2D array, where one of the dimensions is only 1 element wide (thus effectively re-creating a 1D array), and then using Array2D.blit. I would prefer to do it directly though, both because I find this ugly, and moreover because it would be quite useful elsewhere in my program where I can't just declare a 1D array as 2D.
My first approach:
let srcArray = Array.zeroCreate srcArrayLength
... // do relevant computation
srcArray.[index] <- result
... // finish computation
Array.blit srcArray 0 destArray.[index, *] 0 srcArrayLength
My current approach:
let srcArray = Array2D.zeroCreate 1 srcArrayLength
... // do relevant computation
srcArray.[0,index] <- result
... // finish computation
Array2D.blit srcArray 0 0 destArray index 0 1 srcArrayLength
The former approach has no effect on my destination 2D array. The latter approach works where I use it, but as I said above it isn't nice, and cannot be used in another situation, where I have a jagged 2D array (i.e. 'a[][]) that I would like to blit across from.
How might I go about achieving my aim? I thought of Span/Memory, but it wasn't clear to me if and how they could be used here. Alternatively, if you can spot a better way to do this that doesn't involve blit, I'm all-virtual-ears.
I figured out a fairly good solution to this, with the help of someone over in the F# Foundation Slack. Since nobody else has posted an answer, I'll put this one up.
Both Array.Copy (note that that is the .NET Array.Copy method, not the F#-specific Array.copy) and Buffer.BlockCopy were suggested to me. Array.Copy still complains about mismatching array types, but Buffer.BlockCopy ignores the dimensionality of the supplied array, and merely copies the specified number of bytes from one location to another. Using this and relying on the fact that 2D arrays are really stored as 1D arrays in row-major order (the same as C, I believe), it is quite possible to overwrite the last dimension of a multi-dimensional array reasonably cleanly.
I updated the code from the 'current approach' in my question to the below:
let srcArray = Array.zeroCreate srcArrayLength
... //do relevant computation
srcArray.[index] <- result
... //finish computation
Buffer.BlockCopy(srcArray, 0, destArray, firstDimIndex * lengthOfSecondDim * sizeof<'a>, lengthOfSecondDim * sizeof<'a>)
Not only does it do the job in a way that I personally find a bit tidier, it also has a side benefit: it is noticeably faster than the second approach described in the question, though I haven't yet run a benchmark to quantify the difference.
I am running the following code:
for i in range(1000):
    My_Array = numpy.concatenate((My_Array, New_Rows[i]), axis=0)
The above code is slow. Is there any faster approach?
This is basically what happens in all algorithms based on arrays: each time you change the size of the array, it needs to be resized and every element copied. That is happening here too. (Some implementations reserve empty slots; e.g., doubling the internal memory with each growth.)
If you have all your data at np.array creation time, just add it all at once (memory will then be allocated only once!).
If not, collect the pieces in something like a linked list (which allows O(1) append operations), then read them into your np.array in one go (again, only one memory allocation; a short sketch follows below).
This is not much of a numpy-specific topic, but much more about data structures.
Edit: since this quite vague answer got some upvotes, I feel the need to make clear that my linked-list approach is one possible example. As indicated in the comments, Python's lists are more array-like (and definitely not linked lists). But the core fact is: list.append() in Python is fast (amortized O(1)), while that's not true for NumPy arrays! There is also a small part about the internals in the docs:
How are lists implemented?
Python’s lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array’s length in a list head structure.
This makes indexing a list a[i] an operation whose cost is independent of the size of the list or the value of the index.
When items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don’t require an actual resize.
(bold annotations by me)
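To make that advice concrete, here is a minimal sketch of the collect-then-convert pattern, assuming (hypothetically) that each New_Rows[i] is a small 1D array as in the question:

import numpy as np

# Hypothetical stand-in for the question's data: 1000 rows of length 3.
New_Rows = [np.arange(3) for _ in range(1000)]

rows = []                                  # plain Python list: amortized O(1) appends
for i in range(1000):
    rows.append(New_Rows[i])

My_Array = np.concatenate(rows, axis=0)    # a single allocation and copy at the end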
Maybe create an empty array of the correct size and then populate it?
If you have a list l of arrays with the same dimensions, you could:
import numpy as np
arr = np.zeros((len(l),) + l[0].shape)   # allocate once
for i, v in enumerate(l):
    arr[i] = v
This works much faster for me; it only requires one memory allocation.
It depends on what New_Rows[i] is, and what kind of array you want. If you start with lists (or 1D arrays) that you want to join end to end (to make a long 1D array), just concatenate them all at once. concatenate takes a list of any length, not just 2 items.
np.concatenate(New_Rows, axis=0)
or maybe use an intermediate list comprehension (for more flexibility)
np.concatenate([row for row in New_Rows])
or, closer to your example:
np.concatenate([New_Rows[i] for i in range(1000)])
But if the New_Rows elements are all the same length, and you want a 2D array with one New_Rows value per row, np.array does a nice job:
np.array(New_Rows)
np.array([i for i in New_Rows])
np.array([New_Rows[i] for i in range(1000)])
np.array is designed primarily to build an array from a list of lists.
np.concatenate can also build in 2D, but the inputs need to be 2D to start with; vstack and stack can take care of that. But all those stack functions use some sort of list comprehension followed by concatenate.
In general it is better/faster to iterate or append with lists, and apply np.array (or concatenate) just once. Appending to a list is fast; much faster than making a new array each time.
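As a rough illustration of that last point, a micro-benchmark along these lines (the function names and sizes are invented for the example) typically shows the list-based pattern winning by a wide margin:

import numpy as np
from timeit import timeit

def grow_by_concatenate(n=1000):
    out = np.empty(0, dtype=int)
    for _ in range(n):
        out = np.concatenate((out, np.arange(3)))   # reallocates and copies every pass
    return out

def grow_by_list(n=1000):
    rows = []
    for _ in range(n):
        rows.append(np.arange(3))                   # cheap amortized O(1) append
    return np.concatenate(rows)                     # one allocation at the end

print(timeit(grow_by_concatenate, number=10))
print(timeit(grow_by_list, number=10))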
I think #thebeancounter's solution is the way to go.
If you do not know the exact size of your numpy array ahead of time, you can also take an approach similar to how the vector class is implemented in C++.
To be more specific, you can wrap the numpy ndarray in a new class with a default size larger than your current needs. When the numpy array is almost fully populated, copy it into a larger one.
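Here is a minimal sketch of that idea (the class name, growth factor, and API are invented for illustration, not a standard library facility):

import numpy as np

class GrowableRows:
    # Doubles capacity when full, like C++'s std::vector.
    def __init__(self, row_shape, capacity=16, dtype=float):
        self._data = np.empty((capacity,) + tuple(row_shape), dtype=dtype)
        self._size = 0

    def append(self, row):
        if self._size == len(self._data):
            # Full: copy the current array into one twice as large.
            bigger = np.empty((2 * len(self._data),) + self._data.shape[1:],
                              dtype=self._data.dtype)
            bigger[:self._size] = self._data
            self._data = bigger
        self._data[self._size] = row
        self._size += 1

    def finished(self):
        return self._data[:self._size]   # a view of the populated part, no copy

For example:

buf = GrowableRows((3,))
for i in range(1000):
    buf.append([i, i + 1, i + 2])
result = buf.finished()   # shape (1000, 3)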
Assume you have a large list of 2D numpy arrays, with the same number of columns and different numbers of rows, like this:
x = [numpy_array1(r_1, c), ..., numpy_arrayN(r_n, c)]
Concatenate them pairwise like this:
while len(x) != 1:
    if len(x) == 2:
        x = np.concatenate((x[0], x[1]))
        break
    if len(x) % 2:                 # odd count: fold the last array into the first
        x[0] = np.concatenate((x[0], x[-1]))
        x = x[:-1]
    for i in range(0, len(x), 2):  # merge adjacent pairs
        x[i] = np.concatenate((x[i], x[i + 1]))
    x = x[::2]                     # keep only the merged pairs
if isinstance(x, list):            # the loop can exit with a one-element list
    x = x[0]
We have to find all possible permutations of an array WITHOUT making a copy of the array.
I know how to do it recursively using lists, and I'm assuming we need to do it recursively for arrays as well. My idea was to swap the elements in the array. So, something like this:
for i in range(0, len(A) - 1):
    A[i], A[i+1] = A[i+1], A[i]   # swap adjacent elements
But I do not know how to make a recursive function using arrays, especially since the length of the array is unknown and a swap involves only two elements.
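For what it's worth, here is a minimal sketch of the swap-based recursion (the standard backtracking pattern, not necessarily what your course expects): fix position k by swapping each remaining element into it, recurse on the suffix, then undo the swap, so the array is never copied.

def permute(A, k=0):
    # Yields A itself, rearranged in place into each permutation.
    if k == len(A) - 1:
        yield A
        return
    for i in range(k, len(A)):
        A[k], A[i] = A[i], A[k]        # put element i in position k
        yield from permute(A, k + 1)   # recurse on the remaining suffix
        A[k], A[i] = A[i], A[k]        # undo the swap (backtrack)

A = [1, 2, 3]
for p in permute(A):
    print(p)   # all 6 orderings, without ever copying A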
I'm working to convert some of my ObjC code that uses primitive c arrays to Swift arrays. However, using a playground, I've found some strange behaviors.
For instance, the following is perfectly valid in Swift
var largearray : [[Float]] = []
largearray.append([0,1,2]) //3 elements
largearray.append([3,4,5]) //3 elements
largearray.append([6,7,8,9]) //-4- elements
largearray.append([10,11,12]) //3 elements
//pull those back out
largearray[1][0] //gives 3
largearray[1][2] //gives 5
//largearray[1][3] //error
largearray[2][0] //gives 6
largearray[2][2] //gives 8
largearray[2][3] //gives 9
largearray[3][0] //gives 10
I don't understand how it's possible to have mixed row lengths in Swift. Can someone explain what's going on here? The documentation doesn't go into that kind of detail. I'm curious whether it is even storing a contiguous Float array behind the scenes or not.
Then another question I have is about accessing rows or columns. In Swift I see that I can access an entire row using largearray[0], which gives [0,1,2], just as largearray[2] gives [6,7,8,9]. That isn't how C arrays are indexed. (If I specified just one index for a 2D C array, it would act as a sequential index, row by column.) So, is there some way to access an entire column in Swift? In C, and in Swift, largearray[][2] is invalid. But I'm curious whether there is some technique not mentioned in the docs, since it seems obvious that Swift is keeping track of extra information.
I should add that I will be making use of the Accelerate framework. So if any of the above "strange" ways of using a Swift array will cause performance issues on massive arrays, let me know.
How are Swift arrays different than C arrays?
In C, an array is always a contiguous list of elements. In Swift, an array is a much more abstract data structure. You can make assumptions about how data is organized in memory with a C array, and you can even calculate the addresses of an element given the base address, element size, and an index. In Swift, not so much. Think of Swift's Array type the same way you think of NSArray in Objective-C. It's an ordered sequence of elements that provides array-like operations, but you shouldn't worry about how it stores the actual data.
I don't understand how it's possible to have mixed row lengths in Swift.
Well, for one thing, you're really looking at an array of arrays. If an array is an object, then an array of arrays is probably implemented as a list of object pointers rather than a contiguous series of same-sized lists. You can do the same thing with NSArray, for example, because each item in an NSArray can be an object of any type.
So, is there some way to access an entire column in Swift?
You'd need to iterate over the items in the array, which are themselves arrays, and examine the element at the "column" position you're interested in. I don't think there's a faster way to do it than that.
I am looking for a way to store a large variable number of matrixes in an array in MATLAB.
Are there any ways to achieve this?
Example:
for i = 1:unknown
    myArray(i) = zeros(500,800);
end
where unknown is the varying length of the array. I can revise with additional info if needed.
Update:
Performance is the main reason I am trying to accomplish this. I had it before where it would grab the data as a single matrix, show it in real time and then proceed to process the next set of data.
I attempted it using multidimensional arrays as suggested below by Rocco; however, my data is so large that I ran out of memory. I might have to look into another alternative for my case. I will update as I attempt other suggestions.
Update 2:
Thank you all for the suggestions; however, I should have specified beforehand that precision AND speed are both integral factors here. I may have to go back to my original method from before trying 3D arrays and re-evaluate the method for importing the data.
Use cell arrays. This has an advantage over 3D arrays in that it does not require a contiguous memory space to store all the matrices. In fact, each matrix can be stored in a different space in memory, which will save you from Out-of-Memory errors if your free memory is fragmented. Here is a sample function to create your matrices in a cell array:
function result = createArrays(nArrays, arraySize)
    result = cell(1, nArrays);
    for i = 1 : nArrays
        result{i} = zeros(arraySize);
    end
end
To use it:
myArray = createArrays(requiredNumberOfArrays, [500 800]);
And to access your elements:
myArray{1}(2,3) = 10;
If you can't know the number of matrices in advance, you could simply use MATLAB's dynamic indexing to make the array as large as you need. The performance overhead will be proportional to the size of the cell array, and is not affected by the size of the matrices themselves. For example:
myArray{1} = zeros(500, 800);
if twoRequired, myArray{2} = zeros(500, 800); end
If all of the matrices are going to be the same size (i.e. 500x800), then you can just make a 3D array:
nUnknown; % The number of unknown arrays
myArray = zeros(500,800,nUnknown);
To access one array, you would use the following syntax:
subMatrix = myArray(:,:,3); % Gets the third matrix
You can add more matrices to myArray in a couple of ways:
myArray = cat(3,myArray,zeros(500,800));
% OR
myArray(:,:,nUnknown+1) = zeros(500,800);
If each matrix is not going to be the same size, you would need to use cell arrays like Hosam suggested.
EDIT: I missed the part about running out of memory. I'm guessing your nUnknown is fairly large. You may have to switch the data type of the matrices (single or even a uintXX type if you are using integers). You can do this in the call to zeros:
myArray = zeros(500,800,nUnknown,'single');
myArrayOfMatrices = zeros(unknown,500,800);
If you're running out of memory, put more RAM in your system, and make sure you're running a 64-bit OS. Also try reducing your precision (do you really need doubles, or can you get by with singles?):
myArrayOfMatrices = zeros(unknown,500,800,'single');
To append to that array try:
myArrayOfMatrices(unknown+1,:,:) = zeros(500,800);
I was doing some volume rendering in Octave (a MATLAB clone) and building my 3D arrays (i.e. an array of 2D slices) using:
buffer=zeros(1,512*512*512,"uint16");
vol=reshape(buffer,512,512,512);
Memory consumption seemed to be efficient. (can't say the same for the subsequent speed of computations :^)
If you know what unknown is, you can do something like:
myArray = zeros(x, y, unknown);    % preallocate once
for i = 1:unknown
    myArray(:,:,i) = zeros(x, y);  % slice i holds one x-by-y matrix
end
However, it has been a while since I last used MATLAB, so this page might shed some light on the matter: http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk/help/techdoc/matlab_prog/f1-86528.html
Just do it like this:
x = zeros(100,200);
for i = 1:100
    for j = 1:200
        x(i,j) = input('enter the number');
    end
end