I want to define a two-dimensional array like the following using the Z3 solver's C API:
a[3][3] = { {0,0,0},{0,0,0},{0,0,0}}
How can I define this using the Z3 C API, where I need to add constraints such that the sum of each row equals 1 and the sum of each column is <= 100?
Z3 supports the theory of arrays, but it is usually used to encode unbounded arrays, or arrays that are very big. This issue has been discussed in other posts (see: Create an array with fixed size and initialize it). If we search for [z3] array, we will find many other posts.
For arrays of a predefined size, it is easier (and more efficient) to create "arrays of Z3 expressions". The Sudoku example in the Z3 tutorial shows how to do it.
Here is the Python code for the problem described in your post (also available online here).
from z3 import *

# 3x3 matrix of integer variables
A = [ [ Int("a_%s_%s" % (i+1, j+1)) for j in range(3) ]
      for i in range(3) ]
print(A)

# Row constraints: each row sums to 1
rows_c = [ Sum(r) == 1 for r in A ]
print(rows_c)

# Column constraints: each column sums to at most 100
A_transpose = [ [ A[i][j] for i in range(3) ] for j in range(3) ]
cols_c = [ Sum(c) <= 100 for c in A_transpose ]
print(cols_c)

s = Solver()
s.add(rows_c)
s.add(cols_c)

# solve constraints
print(s.check())

# print solution
m = s.model()
print(m)

# printing the solution in a nicer way
r = [ [ m.evaluate(A[i][j]) for j in range(3) ] for i in range(3) ]
print_matrix(r)
A \ B in matlab gives a special solution while numpy.linalg.lstsq doesn't.
A = [1 2 0; 0 4 3];
b = [8; 18];
c_mldivide = A \ b
c_mldivide =
0
4
0.66666666666667
c_lstsq = np.linalg.lstsq([[1, 2, 0], [0, 4, 3]], [[8], [18]])
print(c_lstsq)
c_lstsq = (array([[ 0.91803279],
[ 3.54098361],
[ 1.27868852]]), array([], dtype=float64), 2, array([ 5.27316304,1.48113184]))
How does mldivide (A \ b) in MATLAB give a special solution?
Is this solution useful in achieving computational accuracy?
Why is this solution special and how might you implement it in numpy?
For under-determined systems such as yours (rank is less than the number of variables), mldivide returns a solution with as many zero values as possible. Which of the variables will be set to zero is up to its arbitrary choice.
In contrast, the lstsq method returns the solution of minimal norm in such cases: that is, among the infinite family of exact solutions it will pick the one that has the smallest sum of squares of the variables.
So, the "special" solution of Matlab is somewhat arbitrary: one can set any of the three variables to zero in this problem. The solution given by NumPy is in fact more special: there is a unique minimal-norm solution
Which solution is better for your purpose depends on what your purpose is. The non-uniqueness of solution is usually a reason to rethink your approach to the equations. But since you asked, here is NumPy code that produces Matlab-type solutions.
import numpy as np
from itertools import combinations

A = np.matrix([[1, 2, 0], [0, 4, 3]])
b = np.matrix([[8], [18]])
num_vars = A.shape[1]
rank = np.linalg.matrix_rank(A)
if rank == num_vars:
    sol = np.linalg.lstsq(A, b)[0]    # not under-determined
else:
    for nz in combinations(range(num_vars), rank):    # the variables not set to zero
        try:
            sol = np.zeros((num_vars, 1))
            sol[nz, :] = np.asarray(np.linalg.solve(A[:, nz], b))
            print(sol)
        except np.linalg.LinAlgError:
            pass    # picked bad variables, can't solve
For your example it outputs three "special" solutions, the last of which is what Matlab chooses.
[[-1. ]
[ 4.5]
[ 0. ]]
[[ 8.]
[ 0.]
[ 6.]]
[[ 0. ]
[ 4. ]
[ 0.66666667]]
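As a side note, the minimum-norm solution that lstsq returns (shown in the question's output above) can also be computed directly with the pseudoinverse; here is a minimal sketch using only standard NumPy:

import numpy as np

A = np.array([[1, 2, 0], [0, 4, 3]], dtype=float)
b = np.array([[8], [18]], dtype=float)

# pinv(A) @ b is the minimum-norm least-squares solution, i.e. the same
# vector lstsq returned above (approximately [0.918, 3.541, 1.279]).
x_min_norm = np.linalg.pinv(A) @ b
print(x_min_norm)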
I am basically trying to switch around an array of arrays; my initial data are:
array = [
[0,0,0],
[1,1,1]
]
the output should be:
[
[0,1],
[0,1],
[0,1]
]
However, what I get is:
[]
I have tried doing the same thing without the loops, but when I introduce them it just won't append!
See the code here:
array = [
[0,0,0],
[1,1,1]
]
transformedArray = []
# add rows to transformed
for j in range(0, len(array) - 1):
    transformedArray.append([])

# for each row
for i in range(0, len(array[0]) - 1):
    # for each column
    for k in range(0, len(array) - 1):
        transformedArray[i].append(array[k][i])
Can you help? I have not found any similar issues online, so I am guessing I've missed something stupid!
Try nesting your loops:
array = [
[0,0,0],
[1,1,1]
]
transformedArray = [[0, 0], [0, 0], [0, 0]]

# iterate through rows
for i in range(len(array)):
    # iterate through columns
    for j in range(len(array[0])):
        transformedArray[j][i] = array[i][j]

for res in transformedArray:
    print(res)
returns:
[0, 1]
[0, 1]
[0, 1]
Edited to add an explanation:
First, lists are defined as in the code above, aList = [ ... ], whereas an array would be defined as anArray = numpy.array([...]); so, to the point of the comments above, this question is really about list processing, not true Python array processing. Next, elements are added to the list by index, so there has to be a place to put them. I handled that by creating a list with the three rows already in place. The original post's first loop runs over range(0, len(array) - 1), so it creates only one empty row instead of the three the transposed list needs, and transformedArray[i].append(...) then fails with an IndexError as soon as i points at a row that was never created. The nested for loops then iterate through the embedded lists.
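If you prefer to keep the original append-based structure, here is a minimal corrected sketch of that approach: drop the - 1 from the ranges and append into the row selected by the column index.

array = [
    [0, 0, 0],
    [1, 1, 1]
]

transformedArray = []
# create one empty row per column of the original array
for j in range(len(array[0])):
    transformedArray.append([])

# walk the original array and drop each value into the matching new row
for i in range(len(array)):
    for j in range(len(array[0])):
        transformedArray[j].append(array[i][j])

print(transformedArray)  # [[0, 1], [0, 1], [0, 1]]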
You could do it by mapping a sequence of index-access operations over all the arrays:
for i in range(len(array[0])):
    # list(...) is needed on Python 3, where map returns an iterator
    transformedArray.append(list(map(lambda x: x[i], array)))
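As a usage note, the same transpose can also be written with the built-in zip, assuming every row has the same length:

array = [
    [0, 0, 0],
    [1, 1, 1]
]

# zip(*array) pairs up the i-th elements of every row;
# wrapping each tuple in list() gives lists instead of tuples.
transformedArray = [list(row) for row in zip(*array)]
print(transformedArray)  # [[0, 1], [0, 1], [0, 1]]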
In this associative LSTM paper, http://arxiv.org/abs/1602.03032, they ask to permute a complex tensor.
They have provided their code here: https://github.com/mohammadpz/Associative_LSTM/blob/master/bricks.py#L79
I'm trying to replicate this in tensorflow. Here is what I have done:
# shape: C x F/2
# output = self.permutations: [num_copies x cell_size]
permutations = []
indices = numpy.arange(self._dim / 2)  # [1, 2, 3, ... 64]
for i in range(self._num_copies):
    numpy.random.shuffle(indices)  # [4, 48, 32, ... 64]
    # append a row made of two halves: the permutation for the real part,
    # and the same permutation shifted by dim/2 for the imaginary part
    permutations.append(numpy.concatenate(
        [indices,
         [ind + self._dim / 2 for ind in indices]]))

# C x F (numpy): a tensor holding the stored permutations
self.permutations = tf.constant(numpy.vstack(permutations), dtype=tf.int32)

# output = self.permutations: [num_copies x cell_size]
def permute(complex_tensor):  # complex_tensor is [batch_size x cell_size]
    gather_tensor = tf.gather_nd(complex_tensor, self.permutations)
    return gather_tensor
Basically, my question is: how efficiently can this be done in TensorFlow? Is there any way to keep the batch-size dimension of complex_tensor fixed?
Also, is gather_nd the best way to go about this? Or is it better to do a for loop and iterate over each row in self.permutations using tf.gather?
def permute(self, complex_tensor):
    inputs_permuted = []
    for i in range(self.permutations.get_shape()[0].value):
        inputs_permuted.append(
            tf.gather(complex_tensor, self.permutations[i]))
    return tf.concat(0, inputs_permuted)
I thought that gather_nd would be far more efficient.
Never mind, I figured it out: the trick is to permute the original input tensor using tf.transpose. That then lets you do a tf.gather on the entire matrix, and you can tf.concat the resulting matrices together. Sorry if this wasted anyone's time.
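For anyone reading along, here is a rough sketch of that transpose-then-gather idea written against the current TensorFlow API; the shapes and variable names are illustrative assumptions, not the original code:

import numpy as np
import tensorflow as tf

batch_size, cell_size, num_copies = 4, 8, 3  # illustrative sizes

# num_copies random permutations of the cell dimension
rng = np.random.default_rng(0)
perms = np.stack([rng.permutation(cell_size) for _ in range(num_copies)])
permutations = tf.constant(perms, dtype=tf.int32)           # [num_copies, cell_size]

complex_tensor = tf.random.normal([batch_size, cell_size])  # [batch_size, cell_size]

# Move the dimension being permuted to the front ...
transposed = tf.transpose(complex_tensor)                   # [cell_size, batch_size]
# ... gather all permutations in one call ...
gathered = tf.gather(transposed, permutations)              # [num_copies, cell_size, batch_size]
# ... and put the batch dimension back.
permuted = tf.transpose(gathered, perm=[0, 2, 1])           # [num_copies, batch_size, cell_size]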
I am currently looking for an efficient way to slice multidimensional matrices in MATLAB. As an example, say I have a multidimensional matrix such as
A = rand(10,10,10)
I would like to obtain a subset of this matrix (let's call it B) at certain indices along each dimension. To do this, I have access to the index vectors along each dimension:
ind_1 = [1,4,5]
ind_2 = [1,2]
ind_3 = [1,2]
Right now, I am doing this rather inefficiently as follows:
N1 = length(ind_1)
N2 = length(ind_2)
N3 = length(ind_3)
B = NaN(N1,N2,N3)
for i = 1:N1
    for j = 1:N2
        for k = 1:N3
            B(i,j,k) = A(ind_1(i),ind_2(j),ind_3(k))
        end
    end
end
I suspect there is a smarter way to do this. Ideally, I'm looking for a solution that does not use for loops and could be used for an arbitrary N dimensional matrix.
Actually it's very simple:
B = A(ind_1, ind_2, ind_3);
As you see, Matlab indices can be vectors, and then the result is the Cartesian product of those vector indices. More information about Matlab indexing can be found here.
If the number of dimensions is unknown at programming time, you can define the indices in a cell array and then expand them into a comma-separated list:
ind = {[1 4 5], [1 2], [1 2]};
B = A(ind{:});
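For comparison, if you need the same Cartesian-product slice in NumPy (where plain integer-array indexing does not behave this way), the usual tool is np.ix_; a small sketch:

import numpy as np

A = np.random.rand(10, 10, 10)
ind_1 = [0, 3, 4]   # NumPy is 0-based, so these mirror MATLAB's [1, 4, 5]
ind_2 = [0, 1]
ind_3 = [0, 1]

# np.ix_ builds open-mesh index arrays, so this selects the Cartesian
# product of the three index vectors, like A(ind_1, ind_2, ind_3) in MATLAB.
B = A[np.ix_(ind_1, ind_2, ind_3)]
print(B.shape)  # (3, 2, 2)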
You can reference data in matrices by simply specifying the indices, like in the following example:
B = A(start:stop, :, 2);
In the example:
start:stop gets a range of data between two points
: gets all entries
2 gets only one entry
In your case, since all your indices are 1D, you could simply use:
C = A(x_index, y_index, z_index);
Given a vector A defined in Matlab by:
A = [ 0
0
1
0
0 ];
we can extract its dimensions using:
size(A);
Apparently, we can achieve the same thing in Julia using:
size(A)
The difference is that in Matlab we are able to extract the dimensions into two variables by using:
[n, m] = size(A);
irrespective of whether A is one- or two-dimensional, while in Julia size(A) will return only one dimension if A has only one dimension.
How can I do the same thing in Julia as in Matlab, namely, extract the dimensions of A into a pair [n m] even if A is a vector? Please take into account that the dimensions of A might vary, i.e. it could sometimes have 1 and sometimes 2 dimensions.
A = zeros(3,5)
sz = size(A)
returns a tuple (3,5). You can refer to specific elements like sz[1]. Alternatively,
m,n = size(A,1), size(A,2)
This works even if A is a column vector (i.e., one-dimensional), returning a value of 1 for n.
This will achieve what you're expecting:
n, m = size(A); #or
(n, m) = size(A);
If size(A) is a one-element tuple (i.e. A is one-dimensional), the destructuring throws a BoundsError: n receives length(A), but there is no second element to assign to m. Just be sure to catch that error, otherwise your code may stop if running from a script.