Inconsistent Results - Jupyter Numpy & Transpose - arrays

I am getting odd behavior with Jupyter/NumPy/transpose() and 1D arrays.
I found another post saying that transpose() will not transpose a 1D array, but in previous Jupyter notebooks it does.
I have an example where the behavior is inconsistent, and I do not understand why:
Please see the attached picture of my Jupyter notebook showing two more or less identical arrays with two different outputs.
It seems it both IS and IS NOT transposing the 1D array, which is inconsistent.
The outputs are (1000,) and (1, 1000); why does this occur?
# GENERATE WAVEFORM:
#---------------------------------------------------------------------------------------------------
N = 1000
fxc = []
fxn = []
for t in range(0, N):
    fxc.append(A1*m.sin(2.0*pi*50.0*dt*t) + A2*m.sin(2.0*pi*120.0*dt*t))
    fxn.append(A1*m.sin(2.0*pi*50.0*dt*t) + A2*m.sin(2.0*pi*120.0*dt*t) + 5*np.random.normal(u, std, size=1))
#---------------------------------------------------------------------------------------------------
# TAKE TRANSPOSE:
#---------------------------------
fc = np.transpose(np.array(fxc))
fn = np.transpose(np.array(fxn))
#---------------------------------
# PRINT DIMENSION:
#---------------------------------
print(fc.shape)
print(fn.shape)
#---------------------------------

Remove size=1 from your call to numpy.random.normal. Then it will return a scalar instead of a 1-d array of length 1.
For example,
In [2]: np.random.normal(0, 3, size=1)
Out[2]: array([0.47058288])
In [3]: np.random.normal(0, 3)
Out[3]: 4.350733438283539
Using size=1 in your code is a problem, because it results in fxn being a list of 1-d arrays (e.g. something like [[0.123], [-0.4123], [0.9455], ...]). When NumPy converts that to an array, it has shape (N, 1). Transposing such an array results in the shape (1, N).
fxc, on the other hand, is a list of scalars (e.g. something like [0.123, 0.456, ...]). When converted to a NumPy array, it will be a 1-d array with shape (N,). NumPy's transpose operation swaps dimensions, but it does not create new dimensions, so transposing a 1-d array does nothing.
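To make the difference concrete, here is a minimal sketch of the two cases (illustrative names, a shorter list, and arbitrary mean and standard deviation):
import numpy as np

scalars = [np.random.normal(0, 3) for _ in range(5)]           # list of floats
one_elem = [np.random.normal(0, 3, size=1) for _ in range(5)]  # list of shape-(1,) arrays

fc = np.transpose(np.array(scalars))   # np.array(scalars) has shape (5,)
fn = np.transpose(np.array(one_elem))  # np.array(one_elem) has shape (5, 1)

print(fc.shape)  # (5,)   -- transposing a 1-d array is a no-op
print(fn.shape)  # (1, 5) -- (5, 1) transposed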

Related

Julia iteratively make 3D array from 2D arrays

I am trying to make a 3D array from many 2D arrays.
Each image becomes a 2D array.
Image files: https://drive.google.com/drive/folders/1xBucvqhKFjAfbRIhq5wjr40kSjNor_0t?usp=sharing
using Images, Colors
paths = readdir(
    "/Users/me/Downloads/ct_scans",
    join = true
)
images_3d = []
for p in paths
    img = load(p)
    gray = Gray.(img)
    arr = convert(Array{Float64}, gray) # <----- 2D array
    append!(images_3d, arr)
end
>>> size(images_3d)
(1536000) # <--- 1D view?
>>> 1536000 == 80*160*120
true
>>> reshaped_3d = reshape(images_3d, (80,160,120))
>>> Gray.(reshaped_3d[1,:,:])
# 160x120 scrambled mess of pixels not rearranged as expected
append! flattens everything into one long 1D array (as size shows), which does not reshape back into images as expected.
Whereas push! creates an array of arrays that keep their shape. It's not technically 3D, just an 80-element vector of matrices.
When I tried to initialize an empty 3D array and then overwrite each 2D slice with my own 2D image, I got Matrix{Float64} to Float64 type conversion failures.
I can't iteratively vcat the 2D arrays either, because I cannot keep overwriting the variable.
Part of the reason for posting this is to see how Julia programmers approach multi-dimensional arrays.
There are multiple ways to do this; you'll have to try them and test which one is best in your case.
with append! and reshape
Arrays in Julia are column-major, so the first index varies fastest and the number of images should be the last index. If 80 is the number of images, the reshape should be
reshape(images_3d, (160,120,80))
(maybe exchange 120 and 160, not sure about this one).
And then to get the first image, it's reshaped_3d[:,:,1]
with push!
push!ing the matrices and then creating the 3D array with cat would work too:
julia> A = [rand(3,4) for i in 1:2];
julia> cat(A..., dims=3)
3×4×2 Array{Float64, 3}:
[:, :, 1] =
0.372747 0.17654 0.398272 0.231992
0.514789 0.342374 0.399816 0.277959
0.908909 0.864676 0.9788 0.585375
[:, :, 2] =
0.358169 0.816448 0.0558052 0.404178
0.747453 0.80815 0.384903 0.447053
0.314895 0.46264 0.947465 0.170982
initialize the 3D Array (probably the best one)
and fill it up progressively
julia> A = Array{Float64}(undef,3,4,2);
julia> for i in 1:2
           A[:,:,i] = rand(3,4)
       end
julia> A
3×4×2 Array{Float64, 3}:
[:, :, 1] =
0.478106 0.829818 0.526572 0.644238
0.714812 0.781246 0.93239 0.759864
0.523958 0.955136 0.70079 0.193489
[:, :, 2] =
0.481405 0.561407 0.184557 0.449584
0.547769 0.170311 0.371797 0.538843
0.0285712 0.731686 0.00126473 0.452273
Just to add to the accepted answer: looping over the first index will be even faster. Consider the following two functions; test1() is faster to run because the loop is over the first index.
aa_stack1 = zeros(3, 10000, 10000);
aa_stack3 = zeros(10000, 10000, 3);
function test1()
    for ii = 1:3
        aa_stack1[ii, :, :] = rand(10000, 10000)
    end
end
function test2()
    for ii = 1:3
        aa_stack3[:, :, ii] = rand(10000, 10000)
    end
end
@time test1()
@time test2()
The first way "maximizes memory locality and reduces cache misses" because "when you iterate over the first dimension, the values of the other two dimensions are kept in cache, which means that accessing them takes less time" (according to ChatGPT).

Create a 2-D Array From a Group of 1-D Arrays of Different Lengths in Python

I have 2 1-D arrays that I have combined into a single 1-D array and would like to combine them into a 2-D array with 3 columns consisting of the two arrays and the newly created combined array. Ultimately, the objective is to plot all three 1-D arrays on a single chart using Plotly. The values are datetime but I will use integers here for the sake of simplicity.
import numpy as np
a = np.array([1,3,4,5,7,9])
b = np.array([2,4,6,8])
c = np.array([1,2,3,4,5,6,7,8,9])
# The created array should be 9 rows and 3 columns that looks like:
abc = np.array([[1,0,1],[0,2,2],[3,0,3],[4,4,4],[5,0,5],[0,6,6],[7,0,7],[0,8,8],[9,0,9]])
Essentially, array abc is the c column repeated 3 times with zeros where there are missing values for a or b. I would prefer to do this in Numpy but am open to alternatives as well. In addition, the zeros don't have to be present and can be substituted with NaN, Null, etc. The questions I've reviewed seem to suggest that there is no way to combine arrays of different lengths but I'm certain there must be a way of combining the arrays by extending the shorter ones using indexing. I'm just having trouble getting from here to there. Any help would be greatly appreciated.
Pure numpy approach:
import numpy as np
a = np.array([1,3,4,5,7,9])
b = np.array([2,4,6,8])
c = np.array([1,2,3,4,5,6,7,8,9])
abc = np.zeros((10, 3))
# change to a loop, if you like
abc[a, 0] = a
abc[b, 1] = b
abc[c, 2] = c
print(abc[1:])
prints:
[[1. 0. 1.]
[0. 2. 2.]
[3. 0. 3.]
[4. 4. 4.]
[5. 0. 5.]
[0. 6. 6.]
[7. 0. 7.]
[0. 8. 8.]
[9. 0. 9.]]
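The indexing trick above works because the values in a, b, and c happen to be valid row indices. A sketch of a more general variant, assuming only that a and b are sorted subsets of c and using NaN instead of 0 for the gaps:
import numpy as np

a = np.array([1, 3, 4, 5, 7, 9])
b = np.array([2, 4, 6, 8])
c = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])

# Align by membership instead of using the values as indices: np.isin marks
# which entries of c appear in a (or b), and because a and b are sorted
# subsets of c the assignment order lines up element by element.
abc = np.full((len(c), 3), np.nan)
abc[np.isin(c, a), 0] = a
abc[np.isin(c, b), 1] = b
abc[:, 2] = c
print(abc)
For datetime data the same idea applies, but you would allocate a datetime64 array and use np.datetime64('NaT') in place of np.nan.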

3x3 array with random numbers

I need to take the numbers 0-8 and rearrange them randomly in a 3x3 array using a function. What is the simplest way possible?
I need to get [0,1,2,3,4,5,6,7,8] as a 3x3 array with the numbers in random order
One idea is to use a flat list/array of the sorted numbers, shuffle it (e.g. using random.shuffle), then reshape it into a 3x3 grid. Python doesn't support arrays natively, so you can use a list instead and reshape it into a list of lists, like:
import random
def arrange(x, rows, cols):
    random.shuffle(x)
    return [x[cols * i : cols * (i + 1)] for i in range(rows)]
print(arrange(list(range(9)), 3, 3))
numpy has an array object that you can also use, which supports reshaping etc.; see the documentation. For example:
import numpy as np
### Numpy's array solution
def arrange_np(x, rows, cols):
    np.random.shuffle(x)
    return x.reshape(rows, cols)
print(arrange_np(np.arange(9), 3, 3))
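If NumPy is already in use, a shorter sketch of the same idea: np.random.permutation returns a shuffled copy, so the whole thing collapses to one expression.
import numpy as np

# np.random.permutation(9) returns the integers 0-8 in random order;
# reshape then lays them out as a 3x3 array.
grid = np.random.permutation(9).reshape(3, 3)
print(grid)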

Accessing n-dimensional array in R using a function of vector of indexes

My program in R creates an n-dimensional array.
PVALUES = array(0, dim=dimensions)
where dimensions = c(x,y,z, ... )
The dimensions will depend on a particular input. So, I want to create a general-purpose code that will:
Store a particular element in the array
Read a particular element from the array
From reading this site I learned how to do #2 - read an element from the array
ll=list(x,y,z, ...)
element_xyz = do.call(`[`, c(list(PVALUES), ll))
Please help me with #1, that is, storing an element in the n-dimensional array.
Let me rephrase my question
Suppose I have a 4-dimensional array. I can store a value and read a value from this array:
PVALUES[1,1,1,1] = 43 #set a value
data = PVALUES[1,1,1,1] #use a value
How can I perform the same operations using a function of a vector of indexes:
indexes = c(1,1,1,1)
set(PVALUES, indexes) = 43
data = get(PVALUES, indexes) ?
Thank you
Thanks for the helpful response.
I will use the following solution:
dimensions = c(x,y,z,...,n)
PVALUES = array(0, dim=dimensions) #Create an n-dimensional array
Set a value to PVALUES[x,y,z,...,n]:
y=c(x,y,z,...,n)
PVALUES[t(y)]=26
Reading a value from PVALUES[x,y,z,...,n]:
y=c(x,y,z,...,n)
data=PVALUES[t(y)]
The indexing of arrays can be done with matrices having the same number of columns as there are dimensions:
# Assignment with "[<-"
newvals <- matrix( c( x,y,z,vals), ncol=4)
PVALUES[ newvals[ ,-4] ] <- vals
# Reading values with "["
PVALUES[ newvals[ ,-4] ]

Despite many examples online, I cannot get my MATLAB repmat equivalent working in python

I am trying to do some numpy matrix math because I need to replicate the repmat function from MATLAB. I know there are a thousand examples online, but I cannot seem to get any of them working.
The following is the code I am trying to run:
def getDMap(image, mapSize):
    newSize = (float(mapSize[0]) / float(image.shape[1]), float(mapSize[1]) / float(image.shape[0]))
    sm = cv.resize(image, (0,0), fx=newSize[0], fy=newSize[1])
    for j in range(0, sm.shape[1]):
        for i in range(0, sm.shape[0]):
            dmap = sm[:,:,:]-np.array([np.tile(sm[j,i,:], (len(sm[0]), len(sm[1]))) for k in xrange(len(sm[2]))])
    return dmap
The function getDMap(image, mapSize) expects an OpenCV2 HSV image as its image argument, which is a numpy array with 3 dimensions: [:,:,:]. It also expects a tuple with 2 elements as its mapSize argument, of course making sure the function passing the arguments takes into account that in numpy arrays the rows and columns are swapped (not: x, y, but: y, x).
newSize then contains a tuple of fractions that are used to resize the input image to a specific scale, and sm becomes a resized version of the input image. This all works fine.
This is my goal:
The following line:
np.array([np.tile(sm[i,j,:], (len(sm[0]), len(sm[1]))) for k in xrange(len(sm[2]))]),
should function equivalent to the MATLAB expression:
repmat(sm(j,i,:),[size(sm,1) size(sm,2)]),
This is my problem:
Testing this, an OpenCV2 image with dimensions 800x479x3 is passed as the image argument, and (64, 48) (a tuple) is passed as the mapSize argument.
However when testing this, I get the following ValueError:
dmap = sm[:,:,:]-np.array([np.tile(sm[i,j,:], (len(sm[0]),
len(sm[1]))) for k in xrange(len(sm[2]))])
ValueError: operands could not be broadcast together with
shapes (48,64,3) (64,64,192)
So it seems that the array dimensions do not match and numpy has a problem with that. But my question is: what is wrong, and how do I get this working?
These 2 calculations match:
octave:26> sm=reshape(1:12,2,2,3)
octave:27> x=repmat(sm(1,2,:),[size(sm,1) size(sm,2)])
octave:28> x(:,:,2)
7 7
7 7
In [45]: sm=np.arange(1,13).reshape(2,2,3,order='F')
In [46]: x=np.tile(sm[0,1,:],[sm.shape[0],sm.shape[1],1])
In [47]: x[:,:,1]
Out[47]:
array([[7, 7],
[7, 7]])
This runs:
sm[:,:,:]-np.array([np.tile(sm[0,1,:], (2,2,1)) for k in xrange(3)])
But it produces a (3,2,2,3) array, with replication on the 1st dimension. I don't think you want that k loop.
What's the intent of this loop?
for i in ...:
    for j in ...:
        data = ...
You'll only get results from the last iteration. Did you want data += ...? If so, this might work (for a (N,M,K) shaped sm)
np.sum(np.array([sm-np.tile(sm[i,j,:], (N,M,1)) for i in xrange(N) for j in xrange(M)]),axis=0)
z = np.array([np.tile(sm[i,j,:], (N,M,1)) for i in xrange(N) for j in xrange(M)])
np.sum(sm - z, axis=0) # let numpy broadcast sm
Actually I don't even need the tile. Let broadcasting do the work:
np.sum(np.array([sm-sm[i,j,:] for i in xrange(N) for j in xrange(M)]),axis=0)
I can get rid of the loops with repeat.
sm1 = sm.reshape(N*M,L) # combine 1st 2 dim to simplify repeat
z1 = np.repeat(sm1, N*M, axis=0).reshape(N*M,N*M,L)
x1 = np.sum(sm1 - z1, axis=0).reshape(N,M,L)
I can also apply broadcasting to the last case
x4 = np.sum(sm1-sm1[:,None,:], 0).reshape(N,M,L)
# = np.sum(sm1[None,:,:]-sm1[:,None,:], 0).reshape(N,M,L)
With sm I have to expand (and sum) 2 dimensions:
x5 = np.sum(np.sum(sm[None,:,None,:,:]-sm[:,None,:,None,:],0),1)
len(sm[0]) and len(sm[1]) are not the sizes of the first and second dimensions of sm. They are the lengths of the first and second rows of sm, and should both return the same value. You probably want to replace them with sm.shape[0] and sm.shape[1], which are equivalent to your MATLAB code, although I am not sure that it will work as you expect it to.
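Pulling the pieces together, a minimal runnable sketch (using an illustrative random sm with the resized shape from the question) of the single-pixel repmat equivalent and its broadcasting shortcut:
import numpy as np

# Illustrative stand-in for the resized HSV image from the question.
sm = np.random.rand(48, 64, 3)
N, M = sm.shape[0], sm.shape[1]
j, i = 10, 20  # example pixel indices

# Direct equivalent of repmat(sm(j,i,:), [size(sm,1) size(sm,2)]):
tiled = np.tile(sm[j, i, :], (N, M, 1))   # shape (48, 64, 3)
dmap_tile = sm - tiled

# Broadcasting gives the same result without materialising the tiles:
dmap_bcast = sm - sm[j, i, :]
assert np.allclose(dmap_tile, dmap_bcast)
print(dmap_tile.shape)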
