How to decompose an N x N unitary matrix into 2- or 1-qubit operators?

Given an N x N unitary operator M, I would like to build a circuit that does the same operation as M by explicitly inputting the gates myself (let's say into the IBMQ composer). I heard that 2-qubit operators can be decomposed using a Qiskit built-in function; however, I was wondering whether such a thing exists for the general case.
More concretely, given an N x N unitary operator M, I would like to decompose it into something of the form
M_1 x M_2 x M_3 x ... x M_n
where "x" represents the tensor product and M_i is either a 2- or 1-qubit unitary operator.
Is there a way to do this programmatically, or can it be done by hand on paper in an algorithmic way?
Thank you in advance!

If you want to implement a custom unitary, there is a way to do it using the Operator class, like this (an example for a 4x4 unitary matrix):
from qiskit import QuantumRegister, QuantumCircuit
from qiskit.quantum_info.operators import Operator
q = QuantumRegister(2,"qreg")
qc = QuantumCircuit(q)
customUnitary = Operator([
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
    [0, 1, 0, 0]
])
qc.unitary(customUnitary, [q[0], q[1]], label='custom')
qc.draw(output='mpl')
But if your purpose is to decompose it into 1- or 2-qubit operators, the problem is more complex, since there can be multiple ways to decompose the same unitary.
I think the best you can do is to use the Qiskit transpiler and define the set of basis gates you want to use:
from qiskit.compiler import transpile
newCircuit = transpile(qc, basis_gates=['ry', 'rx', 'cx'], optimization_level=3)
newCircuit.draw(output='mpl')
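If you want to check the result, here is a minimal sketch (assuming customUnitary and newCircuit from above are still in scope) that rebuilds an Operator from the transpiled circuit and compares it with the original matrix:
from qiskit.quantum_info.operators import Operator
# Operator.equiv tests equality up to a global phase, which is all
# the transpiler guarantees to preserve.
print(Operator(newCircuit).equiv(customUnitary))  # expected: True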

Related

How to get a sub-shape of an array in Python?

Not sure the title is correct, but I have an array with shape (84, 84, 3) and I need to get a subset of this array with shape (84, 84), excluding the third dimension.
How can I accomplish this with Python?
your_array[:,:,0]
This is called slicing. This particular example gets the first 'layer' of the array. This assumes your subshape is a single layer.
If you are using numpy arrays, using slices would be a standard way of doing it:
import numpy as np
n = 3 # or any other positive integer
a = np.empty((84, 84, n))
i = 0 # any i with 0 <= i < n
b = a[:, :, i]
print(b.shape)
I recommend you have a look at this.
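As a side note, here is a small sketch of standard NumPy indexing (not something from the answers above): an Ellipsis makes the same slice independent of the number of leading dimensions:
import numpy as np
a = np.empty((84, 84, 3))
b = a[..., 0]  # same as a[:, :, 0], for any number of leading axes
print(b.shape)  # (84, 84)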

numpy argmax in array with multiple brackets

I have an issue applying argmax to an array which has multiple brackets.
In real life I am getting this as a result of a pytorch tensor.
Here I can put an example:
a = np.array([[1.0, 1.1],[2.1,2.0]])
np.argmax(a,axis=1)
array([1, 0])
It is correct. But:
a = np.array([[[1.0, 1.1]],[[2.1,2.0]]])
np.argmax(a,axis=1)
array([[0, 0],
       [0, 0]])
It does not give me what I expect.
Consider that in reality I have this level of inner brackets:
a = np.array([[[[1.0, 1.1]]],[[[2.1,2.0]]]])
Use .squeeze() and a negative index.
a = np.array([[[[1.0, 1.1]]], [[[2.1, 2.0]]]])
np.argmax(a, axis = -1).squeeze()
array([1, 0], dtype=int32)
A possible solution is to increment axis value:
a = np.array([[[[1.0, 1.1]]],[[[2.1,2.0]]]])
np.argmax(a,axis=3)
array([[[1]],
       [[0]]])
But I still have inner brackets.
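Combining the two answers, chaining .squeeze() onto the axis=-1 (or axis=3) result removes the leftover singleton brackets:
import numpy as np
a = np.array([[[[1.0, 1.1]]], [[[2.1, 2.0]]]])
# argmax over the innermost axis, then drop all size-1 dimensions.
print(np.argmax(a, axis=-1).squeeze())  # array([1, 0])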

Looping through slices of Theano tensor

I have two 2D Theano tensors, call them x_1 and x_2, and suppose for the sake of example, both x_1 and x_2 have shape (1, 50). Now, to compute their mean squared error, I simply run:
T.sqr(x_1 - x_2).mean(axis = -1).
However, what I wanted to do was construct a new tensor that consists of their mean squared error in chunks of 10. In other words, since I'm more familiar with NumPy, what I had in mind was to create the following tensor M in Theano:
M = [theano.tensor.sqr(x_1[:, i:i+10] - x_2[:, i:i+10]).mean(axis = -1) for i in xrange(0, 50, 10)]
Now, since Theano doesn't have for loops, but instead uses scan (which map is a special case of), I thought I would try the following:
sequence = T.arange(0, 50, 10)
M = theano.map(lambda i: theano.tensor.sqr(x_1[:, i:i+10] - x_2[:, i:i+10]).mean(axis = -1), sequence)
However, this does not seem to work, as I get the error:
only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices
Is there a way to loop through the slices using theano.scan (or map)? Thanks in advance, as I'm new to Theano!
Similar to what can be done in numpy, a solution would be to reshape your (1, 50) tensor to a (1, 5, 10) tensor (or even a (5, 10) tensor), and then to compute the mean along the last axis.
To illustrate this with numpy, suppose I want to compute means over slices of 2:
x = np.array([0, 2, 0, 4, 0, 6])
x = x.reshape([3, 2])
np.mean(x, axis=1)
outputs
array([ 1., 2., 3.])
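Translated back to the question, here is a minimal sketch (assuming x_1 and x_2 are the (1, 50) matrices from the question) showing that the same reshape trick avoids scan entirely:
import theano
import theano.tensor as T
x_1 = T.matrix('x_1')
x_2 = T.matrix('x_2')
# Reshape the 50 squared differences into 5 chunks of 10 and average each chunk.
M = T.sqr(x_1 - x_2).reshape((5, 10)).mean(axis=-1)
chunked_mse = theano.function([x_1, x_2], M)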

How to vectorize NumPy polyder function?

I would like to vectorize the NumPy function polyder, which computes derivatives of polynomials. Is there a simple way or a built-in function to do it?
By vectorize, I mean that if the input is an array of polynomials, the output would be the array with the derivatives of those polynomials.
An example:
p = np.array([[3,4,5], [1,2]])
the output should be something like
np.array([[6, 4], [1]])
Since your subarrays, both input and output, can have different lengths, you are better off treating both as lists.
In [97]: [np.polyder(d) for d in [[3,4,5],[1,2]]]
Out[97]: [array([6, 4]), array([1])]
Your p is just a list in an expensive (timewise) array wrapper.
In [100]: p=np.array([[3,4,5],[1,2]])
In [101]: p
Out[101]: array([[3, 4, 5], [1, 2]], dtype=object)
There is little that you can do with such an array that you can't do just as well with a list. Do some time tests. You probably will find that iterating over an array of objects is slower than iterating over the equivalent list, especially if you take into account the time it takes to convert a list to an array.
It can also be tricky to create such arrays. If all the sublists are the same length the result will be a 2d array; forcing them to be an object array takes special initialization.
A general rule of thumb is: if the individual steps work with arrays or lists of different lengths, you probably can't vectorize. That is, you can't form a rectangular 2d array and apply vector operations.
If the polynomial lists were all the same length, then p could be 2d, and the result could also be that:
In [107]: p=np.array([[3,4,5],[0,1,2]])
In [108]: p
Out[108]:
array([[3, 4, 5],
       [0, 1, 2]])
In [109]: np.array([np.polyder(i) for i in p])
Out[109]:
array([[6, 4],
       [0, 1]])
In effect it is iterating over the rows of p, and then reassembling the result into an array. There are some numpy functions that streamline iteration (but don't speed it up much), but I see little need for those here.
Looking at the code of this function, the core is:
p = NX.asarray(p)
n = len(p) - 1
y = p[:-1] * NX.arange(n, 0, -1)
which for this 2d array (rows of length 3) is:
In [117]: p[:,:-1]*np.arange(2,0,-1)
Out[117]:
array([[6, 4],
       [0, 1]])
So if the polynomials are all the same length, this simple multiplication gives the first-order derivative coefficients. And of course the rows can be padded so they are all the same length. So 'vectorization' is easier than I initially thought.
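To make that concrete, here is a minimal sketch (the zero-padding step is my addition, not part of np.polyder): pad the shorter coefficient lists with leading zeros so the rows form a rectangular array, and the multiplication above then differentiates all polynomials at once:
import numpy as np
polys = [[3, 4, 5], [1, 2]]
width = max(len(c) for c in polys)
# Leading zeros are just high-order terms with coefficient 0,
# so they do not change the derivatives.
p = np.array([[0] * (width - len(c)) + c for c in polys])
n = p.shape[1] - 1
derivs = p[:, :-1] * np.arange(n, 0, -1)
print(derivs)  # [[6 4]
               #  [0 1]]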
import numpy as np
p = np.array([[3,4,5], [1,2]])
np.array([np.polyder(coefficients) for coefficients in p]) # array([array([6, 4]), array([1])], dtype=object)
would fulfill your interface for your specific example. But as hpaulj mentions, there's little sense in working with NumPy arrays instead of normal python lists here, and no actual (hardware-level) vectorization will happen. (Though, as with list comprehensions in general, the interpreter would be free to employ other means of parallelism to compute them.)

Ordered cartesian product of arrays

In efficient sorted Cartesian product of 2 sorted array of integers, a lazy algorithm is suggested for generating the ordered Cartesian product of two sorted integer arrays.
I am curious to know whether there is a generalisation of this algorithm to more arrays.
For example say we have 5 sorted arrays of doubles
(0.7, 0.2, 0.1)
(0.6, 0.3, 0.1)
(0.5, 0.25, 0.25)
(0.4, 0.35, 0.25)
(0.35, 0.35, 0.3)
I am interested in generating the ordered cartesian product without having to calculate all possible combinations.
I would appreciate any ideas on how a lazy Cartesian product algorithm might extend beyond two dimensions.
This problem appears to be an enumeration instance of uniform-cost search (see e.g. https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm). Your state space is defined by the set of current indexes pointing into your sorted arrays. The successor function enumerates the possible index increments for every array. For your given example of 5 arrays, the initial state is (0, 0, 0, 0, 0).
There is no goal-state check, since we need to go through all possibilities. The result is guaranteed to be sorted if all the input arrays are sorted.
Assuming we have m arrays of length n each, the complexity of this method is O(n^m * log(n^(m-1))).
Here is a sample implementation in python:
from heapq import heappush, heappop

def cost(s, lists):
    # Product of the entries currently selected by the index tuple s.
    prod = 1
    for ith, x in zip(s, lists):
        prod *= x[ith]
    return prod

def successor(s, lists):
    # All index tuples reachable by advancing one array's index by one.
    successors = []
    for k, (i, x) in enumerate(zip(s, lists)):
        if i < len(x) - 1:
            t = list(s)
            t[k] += 1
            successors.append(tuple(t))
    return successors

def sorted_product(initial_state, lists):
    fringe = []
    explored = set()
    # Costs are negated so the largest product sits on top of the min-heap.
    heappush(fringe, (-cost(initial_state, lists), initial_state))
    while fringe:
        best = heappop(fringe)[1]
        yield best
        for s in successor(best, lists):
            if s not in explored:
                heappush(fringe, (-cost(s, lists), s))
                explored.add(s)

if __name__ == '__main__':
    lists = ((0.7, 0.2, 0.1),
             (0.6, 0.3, 0.1),
             (0.5, 0.25, 0.25),
             (0.4, 0.35, 0.25),
             (0.35, 0.35, 0.3))
    init_state = tuple([0] * len(lists))
    for s in sorted_product(init_state, lists):
        s_output = [x[i] for i, x in zip(s, lists)]
        v = cost(s, lists)
        print('%s %s \t%s' % (s, s_output, v))
So, suppose you have A = (A1, ..., An) and B = (B1, ..., Bn). Then A < B if and only if
A1 * ... * An < B1 * ... * Bn
I'm assuming that every value is positive, because if we allow negatives, then:
(-50, -100, 1) > (1, 2, 3)
as -50 * (-100) * 1 = 5000 > 6 = 1 * 2 * 3
Even without negative values, the problem is still rather complex. You need a solution built around a data structure with a depth of k. If (A1, ..., Ak) < (B1, ..., Bk), then we can assume that, on the other dimensions, a combination (A1, ..., Ak, ..., An) is probably smaller than a combination (B1, ..., Bk, ..., Bn). Wherever this is not true, the case beats the probability, so those are the exceptions to the rule. The data structure should hold:
k
the first k elements of A and B respectively
a description of the exceptions to the rule
For any such exception there might be a combination (C1, ..., Ck) which is bigger than (B1, ..., Bk), but that bigger combination (C1, ..., Ck) might still have extensions using values of further dimensions where exceptions to the rule (A1, ..., Ak) < (C1, ..., Ck) are still present.
So, if you already know that (A1, ..., Ak) < (B1, ..., Bk), then you first have to check for exceptions by finding the first l dimensions where choosing the biggest possible values for A and the smallest possible values for B reverses the comparison. If such an l exists, then you should find where the exception starts (which dimension, which index); this describes the exception. When you find an exception, you know that the combination (A1, ..., Ak, ..., Al) > (B1, ..., Bk, ..., Bl), so here the rule is that A is bigger than B, and an exception to that occurs when B becomes bigger than A.
To reflect this, the data-structure would look like:
class Rule {
    int k;
    int[] smallerCombinationIndexes;
    int[] biggerCombinationIndexes;
    List<Rule> exceptions;
}
Whenever you find an exception to a rule, the exception is generated based on prior knowledge. Needless to say, the complexity greatly increases: you have exceptions to the rules, exceptions to the exceptions, and so on. The current approach would tell you, for two random points A and B, whether A is smaller than B, and it would also tell you, for combinations (A1, ..., Ak) and (B1, ..., Bk), at which key indexes the result of the comparison would change. Depending on your exact needs, this idea might be enough or might need extensions. So the answer to your question is: yes, you can extend the lazy algorithm to handle further dimensions, but you need to handle the exceptions to the rules to achieve that.
