Doing algebra with an MxNx3 array using vectorization in Python?

Suppose I have an MxNx3 array A, where the first two indices refer to the coordinates of a point, and the last index (the '3') refers to the three components of a vector. E.g. A[4,7,:] = [1,2,3] means that the vector at point (7,4) is (1,2,3).
Now I need to implement the following operations:
Lx = D*ux - (x-xo)
Ly = D*uy + (y-yo)
Lz = D
where D, ux, uy, xo, yo are all constants that are already known. Lx, Ly and Lz are the three components of the vector at each point (x,y) (note: x is the column index and y is the row index). The biggest problem is the x-xo and y-yo terms, as x and y differ from point to point. So how can I carry out these operations on an MxNx3 array efficiently, using vectorized code or some other fast method?
thanks

You could use the meshgrid function from numpy:
import numpy as np
M = 10            # number of rows (y direction)
N = 10            # number of columns (x direction)
D = 1
ux = 0.5
uy = 0.5
xo = 1
yo = 1
A = np.empty((M, N, 3))
y = np.arange(M)  # row index corresponds to y
x = np.arange(N)  # column index corresponds to x
yv, xv = np.meshgrid(y, x, sparse=False, indexing='ij')  # both have shape (M, N)
A[:, :, 0] = D*ux - (xv - xo)
A[:, :, 1] = D*uy + (yv - yo)
A[:, :, 2] = D

If you want to operate on the X and Y values, you should include them in the matrix (or in another matrix) instead of relying on their indices.
For that, you could use one of the range creation routines from NumPy, especially numpy.mgrid.
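For example, a minimal sketch with numpy.mgrid (constants and sizes are placeholders, and the same row = y, column = x convention as above is assumed):
import numpy as np
M, N = 10, 10
D, ux, uy, xo, yo = 1.0, 0.5, 0.5, 1.0, 1.0
yv, xv = np.mgrid[0:M, 0:N]  # yv is the row (y) index, xv the column (x) index, both (M, N)
A = np.empty((M, N, 3))
A[:, :, 0] = D*ux - (xv - xo)
A[:, :, 1] = D*uy + (yv - yo)
A[:, :, 2] = D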

Related

Tensorflow: analogue for numpy.take?

Is there an analogue for numpy.take?
I want to form an (N+1)-dimensional array from an N-dimensional array; more precisely, from an array with shape (B, H, W, C) I want to make a (B, H, W, X, C) array.
I suppose that for my case there is a solution even without such a general operation. But I'm really unsure that, if I write code with multiple intermediate operations and tensors (shifting, repeating and so on), TF will be able to optimize it and remove the unnecessary operations. Moreover, I suspect that such code would be unclean and just awful.
I want to add a dimension with shifted values, i.e. for the (H,W) -> (H,W,3) case the indices must be
[
[[0,0], #[0,-1], may be padding with zeros but for now pad with edge value
[0,0],
[0,1]],
[[0,0],
[0,1],
[0,2]],
...
[[1,0],
[1,0],
[1,1]],
[[1,0],
[1,1],
[1,2]],
...
]
I thought about tf.scatter_nd (https://www.tensorflow.org/api_docs/python/tf/scatter_nd) but for now I don't understand how to use it. If I understand correctly, I can't use indices whose shape is larger than the shape of the updates array (i.e. I can't use indices with shape (3,4,5,3) with updates of shape (3,4,3), or even (3,4,1,3)). If that's so, this operation seems useless unless I first build an intermediate array with the shape I want to form in the result.
UPD: maybe I'm wrong and tensor operations (shifting, tiling and so on) are the more appropriate and efficient solution.
But in any case I think an analogue of np.take would be useful.
The closest functions in TensorFlow to np.take are tf.gather and tf.gather_nd.
tf.gather_nd is more general than tf.gather (and np.take) as it can slice through several dimensions at once.
A noticeable restriction of tf.gather[_nd] compared to np.take is that they slice through the first dimensions of the tensor only -- you can't slice through inner dimensions. When you want to slice through an arbitrary dimension (as in your case), you need to transpose the array to put the slice dimensions first, gather, then transpose back.
Example code for tf.gather replacing np.take:
import numpy as np
a = np.array([5, 7, 42])
b = np.random.randint(0, 3, (2, 3, 4))
c = a[b]
result_numpy = np.take(a, b)
print(a, b, c, result_numpy)
import tensorflow as tf
a = tf.convert_to_tensor(a)
b = tf.convert_to_tensor(b)
# c = a[b] # does not work
result_tf = tf.gather(a, b)
print(a, b, result_tf)
assert(np.array_equal(result_numpy, result_tf.numpy()))
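As a minimal sketch of the transpose/gather/transpose-back pattern mentioned above for slicing through an inner dimension (TF2 eager mode; the array and indices are just examples):
import numpy as np
import tensorflow as tf
a = tf.constant(np.arange(24).reshape(2, 3, 4))  # we want to slice along axis 1
idx = tf.constant([2, 0, 1])                     # indices into that axis
a_t = tf.transpose(a, perm=[1, 0, 2])            # move the target axis to the front
g = tf.gather(a_t, idx)                          # gather along the first axis
result = tf.transpose(g, perm=[1, 0, 2])         # move the axis back to position 1
assert np.array_equal(result.numpy(), np.take(a.numpy(), [2, 0, 1], axis=1))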

Vectorization in Matlab, incomprehensible syntax difference causing failure

I fail to understand why, in the example below, only x1 turns into a 1000-column array while y is a single number.
x = [0:1:999];
y = (7.5*(x))/(18000+(x));
x1 = exp(-((x)*8)/333);
Any clarification would be highly appreciated!
Why is x1 1x1000?
As given in the documentation,
exp(X) returns the exponential eˣ for each element in array X.
Since x is 1x1000, -(x*8)/333 is also 1x1000, and when exp() is applied to it the exponentials of all 1000 elements are computed; hence x1 is also 1x1000. As an example, exp([1 2 3]) is the same as [exp(1) exp(2) exp(3)].
Why is y a single number?
As given in the documentation,
If A is a rectangular m-by-n matrix with m~= n, and B is a matrix
with n columns, then x = B/A returns a least-squares solution of the
system of equations x*A = B.
In your case,
A is 18000+x and size(18000+x) is 1x1000 i.e. m=1 and n=1000, and m~=n
and B is 7.5*x which has n=1000 columns.
⇒ (7.5*x)/(18000+x) returns the least-squares solution of the system of equations x*(18000+x) = 7.5*x.
Final Remarks:
1. x = [0:1:999];
The brackets are unnecessary here, and it is better written as: x = 0:1:999;
2. It seems that you want to do element-wise division when computing y, for which you should use the ./ operator like this:
y = (7.5*x)./(18000+x); %Also removed unnecessary brackets
Also note that addition is always element-wise. .+ is not valid MATLAB syntax (it works in Octave though). See the valid arithmetic array and matrix operators in MATLAB here.
3. x1 also has some unnecessary brackets.
The question has already been answered by other people. I just want to point out a small thing. You do not need to write x = 0:1:999. It is better written as x = 0:999 as the default increment value used by MATLAB or Octave is 1.
Try explicitly specifying that you want to do element-wise operations rather than matrix operations:
y = (7.5.*(x))./(18000+(x));
In general, .* does elementwise multiplication, ./ does element-wise division, etc. So [1 2] .* [3 4] yields [3 8]. Omitting the dots will cause Matlab to use matrix operations whenever it can find a reasonable interpretation of your inputs as matrices.

Evaluate Matrix on parametrized indices

I have a matrix, say F = magic(8), whose elements are indexed by x and y in 1:N in both dimensions.
I have a (1-D) parameter that specifies a subset of the possible coordinates (x,y), i.e. x(b(k)) and y(b(k)) with size(b)=[M,1] give me
M coordinates (x(b(k)),y(b(k))) where I want to evaluate F.
Is it possible to access F(x(b(k)), y(b(k))) for k=1:M without writing a for loop?
I am looking for a quicker solution than running the loop
F= magic(8)
for k=1:M
do_something_on(F(x(b(k)), y(b(k))))
end
Note that if I write
F(x(b(1:M)), y(b(1:M)))
I get an M x M matrix, where the diagonal elements are the ones I am looking for, but I would rather not build the whole M x M matrix just to extract its diagonal.
Instead of F(x(b), y(b)), which gives you a matrix, you can use:
arrayfun(@(bk) F(x(bk), y(bk)), b)
or:
F(sub2ind(size(F), x(b), y(b)))
I have probably found the solution: I can use linear indexing into F, i.e. treat F as the vector F(:) and evaluate it at (y-1)*size(F,1)+x, that is
F((y(b)-1)*size(F,1)+x(b))

Generating random vector that's linearly independent of a set of vectors

I'm trying to come up with an algorithm that will allow me to generate a random N-dimensional real-valued vector that's linearly independent with respect to a set of already-generated vectors. I don't want to force them to be orthogonal, only linearly independent. I know Gram-Schmidt exists for the orthogonalization problem, but is there a weaker form that only gives you linearly independent vectors?
Step 1. Generate a random vector vr.
Step 2. Copy vr to vo and update it as follows: for every already-generated vector v in v1, v2, ... vn, subtract the projection of vo onto v.
The result is a random vector orthogonal to the subspace spanned by v1, v2, ... vn. If those vectors span the whole space, then it is the zero vector, of course :)
Whether the initial vector was linearly independent can be decided by comparing the norm of vo to the norm of vr. A linearly dependent vr will give a vo whose norm is zero or nearly zero (numerical precision issues may make it a small nonzero number on the order of a few times epsilon; this can be tuned in an application-dependent way).
Pseudocode:
vr = random_vector()
vo = vr
for v in (v1, v2, ... vn):
    vo = vo - (dot(vo, v) / dot(v, v)) * v
if norm(vo) < k1 * norm(vr):
    # this vector was mostly contained in the spanned subspace
else:
    # linearly independent, go ahead and use
Here k1 is a very small number, 1e-8 to 1e-10 perhaps?
You can also go by the angle between vr and the subspace: in that case, calculate it as theta = arcsin(norm(vo) / norm(vr)). Angles substantially different from zero correspond to linearly independent vectors.
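A minimal NumPy sketch of the test above (assuming the already-generated vectors are the rows of V); here the projection onto span(V) is done with a least-squares solve, which also works when the existing vectors are not mutually orthogonal:
import numpy as np
def is_independent(vr, V, k1=1e-10):
    # Project vr onto the span of V's rows and keep the orthogonal remainder vo.
    coeffs, *_ = np.linalg.lstsq(V.T, vr, rcond=None)
    vo = vr - V.T @ coeffs
    return vo, np.linalg.norm(vo) >= k1 * np.linalg.norm(vr)
# Example: two vectors spanning the xy-plane in R^3
V = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])
vr = np.random.randn(3)
vo, ok = is_independent(vr, V)
print(ok)  # True unless vr happened to fall (numerically) in the xy-plane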
A somewhat OTT scheme is to generate an NxN non-singular matrix and use its columns (or rows) as the N linearly independent vectors.
To generate a non-singular matrix one could generate its SVD and multiply up. In more detail:
a/ generate a 'random' NxN orthogonal matrix U
b/ generate a 'random' NxN diagonal matrix S with positive numbers in the diagonal
c/ generate a 'random' NxN orthogonal matrix V
d/ compute
M = U*S*V'
To generate a 'random' orthogonal matrix U, one can use the fact that every orthogonal matrix can be written as a product of Householder reflectors, that is, of matrices of the form
H(v) = I - 2*v*v'/(v'*v)
where v is a non-zero random vector.
So one could
initialise U to I
for( i=1..N)
generate a non-zero vector v
update: U := H(v)*U
Note that if all these matrix multiplications become burdensome, one could write a special routine to do the update of U. Applying H(v) to a vector u is O(N):
u -> u - 2*(v'*u)/(v'*v) * v
and so applying H(v) to U can be done in O(N^2) rather than O(N^3).
One advantage of this scheme is that one has some control over 'how linearly independent' the vectors are. The product of the diagonal elements is (up to sign) the determinant of M, so if this product is 'very small' the vectors are 'almost' linearly dependent.
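A rough NumPy sketch of this scheme (U and V built as products of Householder reflectors, S a random positive diagonal; the size and the singular-value range are arbitrary choices for illustration):
import numpy as np
def random_orthogonal(n, rng):
    # Build an orthogonal matrix as a product of n Householder reflectors.
    U = np.eye(n)
    for _ in range(n):
        v = rng.standard_normal(n)  # non-zero random vector
        U = (np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)) @ U
    return U
rng = np.random.default_rng(0)
n = 5
U = random_orthogonal(n, rng)
V = random_orthogonal(n, rng)
s = rng.uniform(0.5, 2.0, size=n)  # positive singular values
M = U @ np.diag(s) @ V.T           # non-singular by construction
# Columns of M are n linearly independent vectors; |det(M)| equals prod(s).
print(np.linalg.matrix_rank(M), np.prod(s), abs(np.linalg.det(M)))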

Vector norm of an array of vectors in MATLAB

When calling norm on a matrix in MATLAB, it returns what's known as a "matrix norm" (a scalar value), instead of an array of vector norms. Is there any way to obtain the norm of each vector in a matrix without looping and taking advantage of MATLAB's vectorization?
You can compute the norm of each column or row of a matrix yourself by using element-wise arithmetic operators and functions defined to operate over given matrix dimensions (like SUM and MAX). Here's how you could compute some column-wise norms for a matrix M:
twoNorm = sqrt(sum(abs(M).^2,1)); %# The two-norm of each column
pNorm = sum(abs(M).^p,1).^(1/p); %# The p-norm of each column (define p first)
infNorm = max(abs(M),[],1); %# The infinity norm (max absolute value) of each column
These norms can easily be made to operate on the rows instead of the columns by changing the dimension arguments from ...,1 to ...,2.
From version R2017b onwards, you can use vecnorm; for example, vecnorm(M,2,1) gives the two-norm of each column.
The existing implementation for the two-norm can be improved.
twoNorm = sqrt(sum(abs(M).^2,1)); %# The two-norm of each column
abs(M).^2 is going to be calculating a whole bunch of unnecessary square roots which just get squared straightaway.
Far better to do:
twoNorm = sqrt( sum( real(M .* conj(M)), 1 ) );
This efficiently handles real and complex M.
Using real() ensures that sum and sqrt act over real numbers (rather than complex numbers with 0 imaginary component).
Slight addition to P i's answer:
norm_2 = @(A,dim) sqrt( sum( real(A.*conj(A)), dim ) )
allows for
B = reshape(1:6, [2,3])
norm_2( B , 1)
norm_2( B , 2)
or like this if you want a norm_2.m file:
function norm_2__ = norm_2 (A_, dim_)
norm_2__ = sqrt( sum( real(A_.*conj(A_)), dim_ ) ) ;
end
