I'm starting out with numpy and was trying to figure out how its arrays work for column vectors. Defining the following:
x1 = np.array([3.0, 2.0, 1.0])
x2 = np.array([-2.0, 1.0, 0.0])
And calling
print("inner product x1/x2: ", np.inner(x1, x2))
Produces inner product x1/x2: -4.0 as expected - this made me think that numpy assumes an array of this form is a column vector and, as part of the inner function, transposes one of them to give a scalar. However, I wrote some code to test this idea and it gave some results that I don't understand.
After doing some googling about how to specify that an array is a column vector using .T I defined the following:
x = np.array([1, 0]).T
xT = np.array([1, 0])
Where I intended for x to be a column vector and xT to be a row vector. However, calling the following:
print(x)
print(x.shape)
print(xT)
print(xT.shape)
Produces this:
[1 0]
(2,)
[1 0]
(2,)
Which suggests the two arrays have the same dimensions, despite one being the transpose of the other. Furthermore, calling both np.inner(x,x) and np.inner(x,xT) produces the same result. Am I misunderstanding the .T function, or perhaps some fundamental feature of numpy/linear algebra? I don't feel like x & xT should be the same vector.
Finally, the reason I initially used .T was because trying to define a column vector as x = np.array([[1], [0]]) and calling print(np.inner(x, x)) produced the following as the inner product:
[[1 0]
[0 0]]
Which is the output you'd expect to see for the outer product. Am I misusing this way of defining a column vector?
Look at the inner docs:
Ordinary inner product of vectors for 1-D arrays
...
np.inner(a, b) = sum(a[:]*b[:])
With your sample arrays:
In [374]: x1 = np.array([3.0, 2.0, 1.0])
...: x2 = np.array([-2.0, 1.0, 0.0])
In [375]: x1*x2
Out[375]: array([-6., 2., 0.])
In [376]: np.sum(x1*x2)
Out[376]: -4.0
In [377]: np.inner(x1,x2)
Out[377]: -4.0
In [378]: np.dot(x1,x2)
Out[378]: -4.0
In [379]: x1@x2
Out[379]: -4.0
From the wiki for dot/scalar/inner product:
https://en.wikipedia.org/wiki/Dot_product
two equal-length sequences of numbers (usually coordinate vectors) and returns a single number
If vectors are identified with row matrices, the dot product can also
be written as a matrix product
Coming from a linear algebra world, it is easy to think of everything in terms of matrices (2d) and vectors, which are 1-row or 1-column matrices. MATLAB/Octave works in that framework. But numpy is more general, with arrays of 0 or more dimensions, not just 2.
np.transpose does not add dimensions, it just permutes the existing ones. Hence x1.T does not change anything.
A column vector can be made with np.array([[1], [0]]) or:
In [381]: x1
Out[381]: array([3., 2., 1.])
In [382]: x1[:,None]
Out[382]:
array([[3.],
[2.],
[1.]])
In [383]: x1.reshape(3,1)
Out[383]:
array([[3.],
[2.],
[1.]])
The np.inner docs describe what happens when the inputs are not 1d, such as your 2d (2,1) shape x. They say it uses np.tensordot, which is a generalization of np.dot, the matrix product.
In [386]: x = np.array([[1],[0]])
In [387]: x
Out[387]:
array([[1],
[0]])
In [388]: np.inner(x,x)
Out[388]:
array([[1, 0],
[0, 0]])
In [389]: np.dot(x,x.T)
Out[389]:
array([[1, 0],
[0, 0]])
In [390]: x*x.T
Out[390]:
array([[1, 0],
[0, 0]])
This is the elementwise product of (2,1) and (1,2) resulting in a (2,2), or outer product.
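If you do want the scalar inner product while keeping 2d column vectors, a minimal sketch is to transpose explicitly (or flatten back to 1d):

```python
import numpy as np

x = np.array([[1], [0]])        # (2, 1) column vector

print(np.dot(x.T, x))           # (1, 2) @ (2, 1) -> (1, 1) matrix: [[1]]
print(np.dot(x.T, x).item())    # extract the scalar: 1
print(x.ravel() @ x.ravel())    # or flatten to 1d and use the 1d rules: 1
```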
Dear stackoverflow community,
Currently, I am attempting to solve an optimisation problem whereby given an array of numbers (integer or float; non-negative) and a positive integer M, output M number of subsets (of any suitable length) such that the subset with the highest sum among the subsets is minimised. The elements in the subsets can be non-contiguous.
For example, given an array of [1, 4, 5, 3] and an integer M = 2, the
desired output is [1, 5] and [4, 3], whereby the highest subset sum
is 7 which is minimised.
Another example, given an array of [3, 10, 7, 2] and an integer M = 3,
the desired output is [3, 2], [7], and [10] or even [3, 7], [2], and
[10] where by the minimised highest subset sum is 10.
Is there anyone who has experienced such an optimisation before? I believe this is a variant of the Kadane algorithm.
Any links, pseudo code, pythonic code, and etc. are very much appreciated.
I have thought the following procedure to solve the problem:
Sort the array in ascending order
Initialise M number of empty subsets
In a while loop, add the smallest and largest available elements to each subset until no more elements are left to be selected from the parent array
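A related greedy heuristic (longest-processing-time first — not exactly the procedure above, and not guaranteed optimal) can be sketched as:

```python
import heapq

def greedy_partition(values, m):
    # Assign each value, largest first, to the subset with the
    # currently smallest sum (LPT heuristic; approximate only).
    heap = [(0, i) for i in range(m)]      # (current sum, subset index)
    heapq.heapify(heap)
    subsets = [[] for _ in range(m)]
    for v in sorted(values, reverse=True):
        s, i = heapq.heappop(heap)
        subsets[i].append(v)
        heapq.heappush(heap, (s + v, i))
    return subsets

print(greedy_partition([1, 4, 5, 3], 2))   # [[5, 1], [4, 3]]
```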
This can be viewed as a variant of an assignment problem:
Let v[i] be our data array and let j=1..M indicate the subsets. Introduce binary variables x[i,j] ∈ {0,1} with:
x[i,j] = 1 if value i is assigned to subset j
0 otherwise
Our optimization model can look like:
min z
z >= sum(i, v[i]*x[i,j]) ∀j "largest sum"
sum(j, x[i,j]) = 1 ∀i "assign each value to a subset"
x[i,j] ∈ {0,1}
This is a linear mixed-integer programming problem and can be solved with any MIP solver.
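The model can be sanity-checked on the question's small examples with a brute-force enumeration of all assignments (exponential, so only viable for tiny instances; use a MIP solver otherwise):

```python
from itertools import product

def min_max_subset_sum(values, m):
    # Enumerate every assignment x[i,j] (each value to exactly one
    # subset) and take the minimum over the largest subset sum z.
    best = None
    for assign in product(range(m), repeat=len(values)):
        sums = [0] * m
        for v, j in zip(values, assign):
            sums[j] += v
        z = max(sums)
        if best is None or z < best:
            best = z
    return best

print(min_max_subset_sum([1, 4, 5, 3], 2))   # 7
print(min_max_subset_sum([3, 10, 7, 2], 3))  # 10
```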
A result using random data:
---- 34 PARAMETER results
value sub1 sub2 sub3 sub4 sub5 sub6 sub7 sub8 sub9 sub10
i1 17.175 17.175
i2 84.327 84.327
i3 55.038 55.038
i4 30.114 30.114
i5 29.221 29.221
i6 22.405 22.405
i7 34.983 34.983
i8 85.627 85.627
i9 6.711 6.711
i10 50.021 50.021
i11 99.812 99.812
i12 57.873 57.873
i13 99.113 99.113
i14 76.225 76.225
i15 13.069 13.069
i16 63.972 63.972
i17 15.952 15.952
i18 25.008 25.008
i19 66.893 66.893
i20 43.536 43.536
i21 35.970 35.970
i22 35.144 35.144
i23 13.149 13.149
i24 15.010 15.010
i25 58.911 58.911
total 114.181 114.582 114.123 112.961 113.949 113.548 114.478 112.809 110.635 113.993
notes:
The solution is, in general, not unique.
It is easy to add a constraint that forbids empty subsets. (Can happen with more skewed data sets).
Dear friends at Stack Overflow,
I have trouble with a calculation using NumPy and SymPy. A is defined by
import numpy as np
import sympy as sym
sym.var('x y')
f = sym.Matrix([0,x,y])
func = sym.lambdify( (x,y), f, "numpy")
X=np.array([1,2,3])
Y=np.array([1,2,3])
A = func(X,Y)
Here, X and Y are just examples. In general, X and Y are one-dimensional numpy arrays of the same length. Then A's output is
array([[0],
[array([1, 2, 3])],
[array([1, 2, 3])]], dtype=object)
But, I'd like to get this as
np.array([[0,0,0],[1,2,3],[1,2,3]])
If we call this B, how do you convert A to B automatically? B's first row is filled with 0, and each row has the same length as X and Y.
Do you have any ideas?
First let's make sure we understand what is happening:
In [52]: x, y = symbols('x y')
In [54]: f = Matrix([0,x,y])
...: func = lambdify( (x,y), f, "numpy")
In [55]: f
Out[55]:
⎡0⎤
⎢ ⎥
⎢x⎥
⎢ ⎥
⎣y⎦
In [56]: print(func.__doc__)
Created with lambdify. Signature:
func(x, y)
Expression:
Matrix([[0], [x], [y]])
Source code:
def _lambdifygenerated(x, y):
    return (array([[0], [x], [y]]))
See how the numpy function looks just like the sympy one, replacing sym.Matrix with np.array. lambdify just does a lexical translation; it does not have a deep knowledge of the differences between the languages.
With scalars the func runs as expected:
In [57]: func(1,2)
Out[57]:
array([[0],
[1],
[2]])
With arrays the result is this ragged array (new-enough numpy adds this warning):
In [59]: func(np.array([1,2,3]),np.array([1,2,3]))
<lambdifygenerated-2>:2: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return (array([[0], [x], [y]]))
Out[59]:
array([[0],
[array([1, 2, 3])],
[array([1, 2, 3])]], dtype=object)
If you don't know numpy, sympy is not a shortcut to filling in your knowledge gaps.
The simplest fix is to replace the original 0 with another symbol.
Even in sympy, the 0 is not expanded:
In [65]: f.subs({x:Matrix([[1,2,3]]), y:Matrix([[4,5,6]])})
Out[65]:
⎡ 0 ⎤
⎢ ⎥
⎢[1 2 3]⎥
⎢ ⎥
⎣[4 5 6]⎦
In [74]: Matrix([[0,0,0],[1,2,3],[4,5,6]])
Out[74]:
⎡0 0 0⎤
⎢ ⎥
⎢1 2 3⎥
⎢ ⎥
⎣4 5 6⎦
In [75]: Matrix([[0],[1,2,3],[4,5,6]])
...
ValueError: mismatched dimensions
To make the desired array in numpy we have to do something like:
In [71]: arr = np.zeros((3,3), int)
In [72]: arr[1:,:] = [[1,2,3],[4,5,6]]
In [73]: arr
Out[73]:
array([[0, 0, 0],
[1, 2, 3],
[4, 5, 6]])
That is, initialize the array and fill selected rows. There isn't a simple expression that will do the desired 'automatically fill the first row with 0', much less something that can be naively translated from sympy.
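On older numpy versions that still produce the ragged object array A, one way to expand it into the desired B is to broadcast each cell to the common input length (a sketch assuming A has the (3, 1) object shape shown above):

```python
import numpy as np

# Rebuild the object array A from the question by hand
# (on older numpy, func(X, Y) returns this shape directly).
A = np.empty((3, 1), dtype=object)
A[0, 0] = 0
A[1, 0] = np.array([1, 2, 3])
A[2, 0] = np.array([1, 2, 3])

n = 3  # common length of X and Y
B = np.vstack([np.broadcast_to(cell, (n,)) for cell in A[:, 0]])
print(B)
# [[0 0 0]
#  [1 2 3]
#  [1 2 3]]
```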
mean = [0, 0]
cov = [[1, 0], [0, 100]]
gg = np.random.multivariate_normal(mean, cov, size = [5, 12])
I get an array which has 2 columns and 12 rows; I want to take the first column, which will include all 12 rows, and convert it to a column. What is the appropriate method for slicing, and how can one convert the result to columns? To be precise, looking at the screenshot (the second one), one should take all of column 0 and convert it in the normal way, from left to right.
The results should be like this (the first screenshot).
The problem is that your array gg is not two- but three-dimensional. So, what you need is in fact the first column of each stacked 2D array. Here is an example:
import numpy as np
x = np.random.randint(0, 10, (3, 4, 5))
x[:, :, 0].flatten()
The colon in slicing means "all values in this dimension". So, x[:, :, 0] means "all values in the first dimension and all values in the second dimension, with the third dimension fixed at index 0". This results in a two-dimensional array, which you have to flatten additionally.
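Applied to gg from the question (a sketch; I'm assuming the goal is all first-coordinate draws stacked into a single column):

```python
import numpy as np

mean = [0, 0]
cov = [[1, 0], [0, 100]]
gg = np.random.multivariate_normal(mean, cov, size=[5, 12])
print(gg.shape)                  # (5, 12, 2): 5 x 12 draws of 2-d points

first = gg[:, :, 0]              # first coordinate of every draw
col = first.flatten()[:, None]   # reshape the 60 values into one column
print(col.shape)                 # (60, 1)
```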
Let's say I want to iterate over a numpy array and print each item. I'm going to use this later on to manipulate the (i,j) entry in my array depending on some rules.
I've read the numpy docs and it seems like you can access individual elements in an array easily enough using similar indexing (or slicing) to lists. But it seems that I am unable to do anything with each (i,j) entry when I try to access it in a loop.
import numpy as np

row = 3
column = 2
space = np.random.randint(2, size=(row, column))
print(space, "\n")
print(space[0,1])
print(space[1,0])  # test if I can access individual elements
output:
[[1 1]
 [1 1]
 [0 0]]

1
1
for example, using the above I want to iterate over every row and column and print each entry. I would think to use something like the following:
for i in space[0:row,:]:
    for j in space[:,0:column]:
        print(space[i,j])
the output I get is
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
Obviously this does not work. I believe the problem is that I'm accessing entire rows and columns instead of elements within any given row and column. I've been going over the numpy docs for a couple of hours and I am still unsure of how to go about this.
My main concern is I want to change each (i,j) entry by using a loop and some conditionals, for example (using the above loop):
for i in space[0:row,:]:
    for j in space[:,0:column]:
        if [i+1,j] + [i,j+1] == 2:
            [i,j] = 1
Start with:
for i in range(row):
    for j in range(column):
        print(space[i,j])
You are generating indices in your loops, which then index individual elements!
The relevant numpy docs on indexing are here.
But it looks like you should also read up on basic Python loops.
Start simple and read some docs and tutorials. After I saw Praveen's comment, I felt a bit bad about this simple answer, which does not offer much more than his comment, but maybe the links above are just what you need.
A general remark on learning numpy by trying:
regularly use arr.shape to check the dimensions
regularly use arr.dtype to check the data-type
So in your case, the following should have given you a warning (not a Python one; one in your head), as you probably expected i to iterate over values of one dimension:
print((space[0:row,:]).shape)
# output: (3, 2)
There are many ways of iterating over a 2d array:
In [802]: x=np.array([[1,1],[1,0],[0,1]])
In [803]: print(x) # non-iteration
[[1 1]
[1 0]
[0 1]]
by rows:
In [805]: for row in x:
     ...:     print(row)
[1 1]
[1 0]
[0 1]
add enumerate to get an index as well
In [806]: for i, row in enumerate(x):
     ...:     row += i
In [807]: x
Out[807]:
array([[1, 1],
[2, 1],
[2, 3]])
double level iteration:
In [808]: for i, row in enumerate(x):
     ...:     for j, v in enumerate(row):
     ...:         print(i,j,v)
0 0 1
0 1 1
1 0 2
1 1 1
2 0 2
2 1 3
of course you could iterate on ranges:
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        x[i,j]...
or with np.ndindex:
for i,j in np.ndindex(x.shape):
    print(i,j,x[i,j])
Which is best depends, in part, on whether you need to just use the values or need to modify them. If modifying, you need an understanding of whether the item is mutable or not.
But note that I can undo that row += i addition without explicit iteration, using broadcasting:
In [814]: x-np.arange(3)[:,None]
Out[814]:
array([[1, 1],
[1, 0],
[0, 1]])
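Back to the goal of changing entries with a rule: a sketch with range-based loops, assuming the question's condition means comparing the neighbours space[i+1,j] and space[i,j+1] (the bare bracket expressions in the question are not valid indexing):

```python
import numpy as np

space = np.array([[1, 1],
                  [1, 0],
                  [0, 1]])
out = space.copy()
rows, cols = space.shape

# Assumed reading of the rule: set (i, j) to 1 when the entry below
# and the entry to the right sum to 2; the ranges stop one short of
# the edges so the neighbour lookups stay in bounds.
for i in range(rows - 1):
    for j in range(cols - 1):
        if space[i + 1, j] + space[i, j + 1] == 2:
            out[i, j] = 1
print(out)
```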
What is the simplest and most efficient ways in numpy to generate two orthonormal vectors a and b such that the cross product of the two vectors equals another unit vector k, which is already known?
I know there are infinitely many such pairs, and it doesn't matter to me which pairs I get as long as the conditions axb=k and a.b=0 are satisfied.
This will do:
>>> k                      # an arbitrary unit vector (a numpy array, not a list)
array([ 0.59500984,  0.09655469, -0.79789754])
To obtain the 1st one:
>>> x = np.random.randn(3) # take a random vector
>>> x -= x.dot(k) * k # make it orthogonal to k
>>> x /= np.linalg.norm(x) # normalize it
To obtain the 2nd one:
>>> y = np.cross(k, x) # cross product with k
and to verify:
>>> np.linalg.norm(x), np.linalg.norm(y)
(1.0, 1.0)
>>> np.cross(x, y) # same as k
array([ 0.59500984, 0.09655469, -0.79789754])
>>> np.dot(x, y) # and they are orthogonal
-1.3877787807814457e-17
>>> np.dot(x, k)
-1.1102230246251565e-16
>>> np.dot(y, k)
0.0
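Putting the steps above into a reusable function (a minimal sketch; it assumes k is already a unit vector):

```python
import numpy as np

def orthonormal_pair(k):
    # Return unit vectors a, b with a . b = 0 and a x b = k.
    # Assumes k is a unit-norm numpy array of shape (3,).
    x = np.random.randn(3)       # take a random vector
    x -= x.dot(k) * k            # remove the component along k
    x /= np.linalg.norm(x)       # normalize
    y = np.cross(k, x)           # completes the right-handed triad
    return x, y

np.random.seed(42)               # reproducibility for this sketch
k = np.array([0.0, 0.0, 1.0])
a, b = orthonormal_pair(k)
print(np.allclose(np.cross(a, b), k))   # True
print(np.isclose(a.dot(b), 0.0))        # True
```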
Sorry, I can't put it as a comment because of a lack of reputation.
Regarding @behzad.nouri's answer, note that if k is not a unit vector the code will not give an orthogonal vector anymore!
The correct and general way to do so is to subtract the longitudinal part of the random vector. The general formula for this is
x ← x − (x·k / ‖k‖²) k
So you simply have to replace this in the original code:
>>> x -= x.dot(k) * k / np.linalg.norm(k)**2
Assume the vector that supports the orthogonal basis is u.
b1 = np.cross(u, [1, 0, 0]) # [1, 0, 0] can be replaced by any vector not parallel to u
b2 = np.cross(u, b1)
b1, b2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)
A shorter answer, if you like.
Get a transformation matrix:
B = np.array([b1, b2])
TransB = np.dot(B.T, B)
u2b = TransB.dot(u) # should be like [0, 0, 0]
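Put together as a runnable check (using the unit vector from the earlier answer; TransB projects onto the plane orthogonal to u, so TransB·u should vanish):

```python
import numpy as np

u = np.array([0.59500984, 0.09655469, -0.79789754])  # unit vector

b1 = np.cross(u, [1, 0, 0])      # any vector not parallel to u works here
b2 = np.cross(u, b1)
b1, b2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)

B = np.array([b1, b2])           # rows span the plane orthogonal to u
TransB = np.dot(B.T, B)          # projector onto span{b1, b2}
print(np.allclose(TransB.dot(u), 0))   # True: u has no component in the plane
```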