I have created a numpy array with shape (16, 19). I would like to fill it with 1s in the first row, 2s in the second row, 3s in the third row, and so on until I reach the last row.
I am very new to Python, so I probably don't understand very well how it works yet. This is what I have tried so far:
arboles = np.zeros((16, 19), dtype=np.int16)
for i in arboles:
    count = 0
    arboli = arboles[1, :] == 1
    arboles = count + 1
I am probably missing some command in the middle where I ask NumPy to write the numbers into the empty array. Any help please?
If you would like each row to be filled with the row's id in the array, you can do:
arboles = np.zeros((16, 19), dtype=np.int16)
for row in range(arboles.shape[0]):
    arboles[row, :] = row
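For what it's worth, the same thing can be done without an explicit loop by broadcasting a column of row indices across the columns; this is just a sketch, and adding 1 gives the 1, 2, 3, ... pattern asked about in the question:
import numpy as np

# Broadcasting sketch: a (16, 1) column of row indices added to a (16, 19) array
# of zeros repeats each index across its whole row.
arboles = np.zeros((16, 19), dtype=np.int16) + np.arange(16, dtype=np.int16)[:, None]

# If the rows should start at 1 rather than 0, simply add 1:
arboles_1based = arboles + 1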
I have a 99x1 cell array and I would like to convert it into a 33x3 cell array for example.
I would like the first 3 rows of the 99x1 cell array to make up the first row in the 33x3 cell array, then the 4th through 6th rows of the 99x1 cell array to make up the second row in the 33x3 cell array, and so on.
I also need the data, when being reshaped, to go across column by column before it moves down to the next row. For example, I would need:
1
2
3
4
to become
1, 2; 3, 4
not
1, 3; 2, 4
Help with this would be greatly appreciated
You can simply use the reshape function. Since reshape(yourcell,[],3) would fill the first column first, then the second, and so on, rather than working row-wise, you need to combine it with the transpose operator .':
newcell=reshape(yourcell,3,[]).'
This way, you first create a 3x33 cell array using reshape and then transpose it into the desired 33x3 cell array. The [] tells reshape to create as many columns as needed.
I'm trying to find a succinct way of going over all rows and all columns of a numpy array, and deleting a row or column if all of its values are equal to, for example, inf.
Let's say I have the following array:
import numpy as np
m = np.array([[1, 2, 3, 4],
              [np.inf, np.inf, np.inf, np.inf],
              [9, 10, 11, 12]])
Then if I use,
row = 0
while row < m.shape[0]:
    if np.all(np.isinf(m[row, :])):
        m = np.delete(m, row, axis=0)
        row -= 1
    row += 1
print(m)
I get the output of,
[[1,2,3,4],
[9,10,11,12]]
I can use a similar method to delete a column of all infs. However, this method is quite cumbersome, so I tried using the following:
m = m[np.all(~np.isinf(m),axis=1)]
This works great when finding and deleting a row of all infs, but when I try the following to find and delete all columns that contain all infs, the method runs into problems,
m = np.array([[1, 2, np.inf, 4],
              [5, 6, np.inf, 8],
              [9, 10, np.inf, 12]])
m = m[np.all(~np.isinf(m),axis=0)]
giving the following error
IndexError: boolean index did not match indexed array along dimension 0; dimension is 3 but corresponding boolean dimension is 4
I had thought that using axis=1 would search along each row, and axis=0 would search along each column, but it seems that I might not understand how the use of axis works. Any help would be much appreciated.
Just replace the line that throws an error with this one:
m = m[:, np.all(~np.isinf(m),axis=0)]
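To see why the extra : is needed: np.all(..., axis=0) collapses the rows, so the resulting boolean mask has one entry per column and must index the second axis, while the axis=1 mask has one entry per row and indexes the first axis. A small sketch using the array from the question:
import numpy as np

m = np.array([[1, 2, np.inf, 4],
              [5, 6, np.inf, 8],
              [9, 10, np.inf, 12]])

keep_cols = np.all(~np.isinf(m), axis=0)  # shape (4,): one boolean per column
keep_rows = np.all(~np.isinf(m), axis=1)  # shape (3,): one boolean per row

m_cols = m[:, keep_cols]  # keeps only the columns that contain no inf
m_rows = m[keep_rows]     # keeps only the rows that contain no inf (empty here, since every row has an inf)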
I have a numpy array of size k, and a pandas dataframe with a column of size n>k that contains k missing values.
Is there an easy way to fill the k missing values from the numpy array correspondingly (that is, the first missing value in the column is filled with the first value in the array, the second with the second, and so on)?
Something like this might work. You may also want to consider what order (i.e. sorting) you want to fill these values in.
fill_values = list(range(k))  # or whatever your array is
indices_of_missing = df[df['myColumn'].isnull()].index  # indices of the missing values
for fill_index, dataframe_index in enumerate(indices_of_missing):
    df.loc[dataframe_index, 'myColumn'] = fill_values[fill_index]
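As an alternative sketch (assuming the column is called 'myColumn' and the array has exactly as many entries as there are missing values), pandas also lets you assign the array directly to the boolean-indexed selection, which fills the missing slots in order without an explicit loop:
import numpy as np
import pandas as pd

# Hypothetical example data: two missing values filled from a length-2 array.
df = pd.DataFrame({'myColumn': [1.0, np.nan, 3.0, np.nan, 5.0]})
fill_values = np.array([10.0, 20.0])

# The array is assigned positionally to the rows where the mask is True.
df.loc[df['myColumn'].isnull(), 'myColumn'] = fill_values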
I know mapping a 2D array into a 1D array has been asked many times, but I did not find a solution that fits a case where the column count varies.
So I want to get a 1-dimensional index from this 2-dimensional array:
Col> _0____1____2__
Row 0 |_0__|_1__|_2__|
V 1 |_3__|_4__|
2 |_5__|_6__|_7__|
3 |_8__|_9__|
4 |_10_|_11_|_12_|
5 |_13_|_14_|
The normal formula index = row * columns + column does not work, since after the 2nd row the index is out of place.
What is the correct formula here?
EDIT:
The specific issue is that I have a list of items laid out in the UI like the grid above, but only a one-dimensional array for the data. So while looping through the elements in the UI, I need to get the correct data, but I can only get the row and column of each element. I need to find a way to turn a row/column pair into an index into the data array.
A truly optimal answer (or even a provably correct one) will depend on the language you are using and how it lays out memory for such arrays.
However, taking your question simply at face value, you have to know what the actual length of each row is in order to calculate a 1D index.
So either the row length follows some pattern that can be inferred from the data, or you have (or can write) a rlen = rowLength( 2dTable, RowNumber) function.
Then, depending on how big the tables are and how fast the code needs to run, you can calculate a 1D index into the 2D table by summing the lengths of all the rows before the current one and then adding the column index.
Or build a 1D table of the row lengths (or cumulative row lengths) so you can scan it and only call your rowLength function once per row.
With a better description of your problem, you might get a better answer...
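A minimal Python sketch of the prefix-sum idea from this answer, assuming the row lengths are known (here the 3/2/3/2... pattern from the question's grid):
row_lengths = [3, 2, 3, 2, 3, 2]

def flat_index(row, column, lengths=row_lengths):
    # Sum the lengths of all rows before `row`, then add the column offset.
    return sum(lengths[:row]) + column

# e.g. flat_index(2, 0) == 5 and flat_index(4, 2) == 12, matching the grid above.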
For your example which alternates between 3 and 2 columns you can construct a formula:
index = (row / 2) * (3 + 2) + (row % 2 ? 3 : 0) + column
(C-like syntax, assuming integer division)
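A quick sanity check of that formula in Python (where // mirrors the C-like integer division), just as a sketch:
def index_of(row, column):
    return (row // 2) * (3 + 2) + (3 if row % 2 else 0) + column

assert index_of(1, 1) == 4   # row 1, column 1 holds value 4 in the grid
assert index_of(5, 1) == 14  # row 5, column 1 holds value 14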
In general though, the one and only way to implement what you're doing here, jagged arrays, is to make an array of arrays, a.k.a. an Iliffe vector. That means using the row number as an index into an array of pointers which point to the individual row arrays containing the actual data.
You can keep an additional 1D array, say length, holding the number of columns in each row. Then your formula is index = sum(length(i)) + column, where i runs from 0 to row - 1.
I have the following cell array:
<20x2>
<32x2>
<28x2>
<30x2>
What I am trying to do is read into row 1 of the cell array, which is <20x2>, and once I am in <20x2> I would like to apply the following operation to the first column only.
In the first one, I would like 0.1 to be subtracted from every element of column 1 in C{1,1}. In the second one, C{2,1} (<32x2>), I would like 0.2 subtracted from every element of column 1, and so on...
So to clarify, I am trying to subtract n*0.1 from the first column of each submatrix in the cell array, where n is the row number within the cell array. So if there were a submatrix in row 8 of the cell array, 8*0.1 = 0.8 would be subtracted from its column 1.
I hope the question is clear enough; I have tried to word it as clearly as I can.
Thanks in advance for any help/suggestions
Attempt
First = C{1,1}(:,1);
Subtraction = First - 0.1
This gives me my desired result, but only for row 1 of my cell array.
This question is distinct from "Applying function to vectors row by row" because it involves a cell array as opposed to a matrix. The aspect of reading into a cell array makes it a different variant of the problem, so somebody with a similar problem to this one (and, like me, little MATLAB knowledge) would not be helped by the suggested duplicate.
It is very easy to adapt your attempt to a loop:
for n = 1:size(C,1)
    C{n,1}(:,1) = C{n,1}(:,1) - n*0.1;
end