Given the 2x2 density matrix of a qubit, how do I compute the point on the Bloch sphere that represents the qubit?
For example, the state |0⟩-|1⟩ has a density matrix of [[0.5,-0.5],[-0.5,0.5]] and should end up along the X axis. But the density matrix [[0.5, 0], [0, 0.5]] isn't biased in any direction and should end up at the origin.
The conversion depends on a couple of arbitrary choices:
Do you want |0⟩ at the top or the bottom?
Do you want the coordinate system to be right-handed or left-handed?
Assuming you answer those with "at the bottom" and "right-handed", then this method will do it:
def toBloch(matrix):
    [[a, b], [c, d]] = matrix
    x = complex(c + b).real
    y = complex(c - b).imag
    z = complex(d - a).real
    return x, y, z
You switch to other choices by picking and choosing which outputs to negate.
Testing it out:
print(toBloch([[1, 0],
               [0, 0]]))  # Off, Z = -1
# (0.0, 0.0, -1.0)

print(toBloch([[0, 0],
               [0, 1]]))  # On, Z = +1
# (0.0, 0.0, 1.0)

print(toBloch([[0.5, 0.5],
               [0.5, 0.5]]))  # On+Off, X = +1
# (1.0, 0.0, 0.0)

print(toBloch([[0.5, 0.5j],
               [-0.5j, 0.5]]))  # On+iOff, Y = -1
# (0.0, -1.0, 0.0)

print(toBloch([[0.5, 0.0],
               [0.0, 0.5]]))  # maximally mixed state, X = Y = Z = 0
# (0.0, 0.0, 0.0)
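As a cross-check, the same coordinates can be read off as Pauli expectation values: with the "|0⟩ at the bottom, right-handed" choices above, (x, y, z) = (Tr(ρX), Tr(ρY), -Tr(ρZ)). A small NumPy sketch of that equivalence (my addition, assuming those same conventions):

```python
import numpy as np

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def to_bloch_pauli(rho):
    """Bloch vector via Pauli traces; the minus on Z puts |0> at the bottom."""
    rho = np.asarray(rho, dtype=complex)
    return (np.trace(rho @ X).real,
            np.trace(rho @ Y).real,
            -np.trace(rho @ Z).real)
```

This agrees with toBloch on all the test matrices above, and makes the two arbitrary choices explicit as signs on the Y and Z traces.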
Let's assume we have a list like the following
[2.3, 1.02, 1.99, 0.99, 0.089, 0, 1.1, -1.1, -2.1]
We want to arrange the elements of this list based on their distance from a target value of 1, in the following manner:
[0.99, 1.02, 1.1, 0.089, 1.99, 0, 2.3, -1.1, -2.1]
How can I do that in Python in one or two lines?
Python solution
Use sorted with the absolute distance to target as key:
L = [2.3, 1.02, 1.99, 0.99, 0.089, 0, 1.1, -1.1, -2.1]
target = 1
out = sorted(L, key=lambda x: abs(x-target))
output: [0.99, 1.02, 1.1, 0.089, 1.99, 0, 2.3, -1.1, -2.1]
NumPy solution
Compute the absolute distance and use numpy.argsort:
import numpy as np

L = [2.3, 1.02, 1.99, 0.99, 0.089, 0, 1.1, -1.1, -2.1]
target = 1
a = np.array(L)
out = a[np.argsort(abs(a - target))].tolist()
output: [0.99, 1.02, 1.1, 0.089, 1.99, 0.0, 2.3, -1.1, -2.1]
I have an array of sorted numbers:
arr = [-0.1, 0.0, 0.5, 0.8, 1.2]
I want the difference (dist below) between consecutive numbers for that array to be above a given threshold. For example, if threshold is 0.25:
dist = [0.1, 0.5, 0.3, 0.4] # must be >0.25 for all elements
arr[0] and arr[1] are too close to each other, so one of them must be modified. In this case the desired array would be:
good_array = [-0.25, 0.0, 0.5, 0.8, 1.2] # all elements distance > threshold
In order to obtain good_array, I want to modify the minimum number of elements in arr. So I subtract 0.15 from arr[0] rather than, say, subtracting 0.1 from arr[0] and adding 0.05 to arr[1]:
[-0.2, 0.05, 0.5, 0.8, 1.2]
The previous array is also valid, but we have modified 2 elements rather than one.
Also, in case it is possible to generate good_array by modifying different elements of arr, by default modify the element closer to the edge of the array. But keep in mind that the main goal is to generate good_array by modifying the minimum number of elements in arr.
[-0.1, 0.15, 0.5, 0.8, 1.2]
The previous array is also valid, but we have modified arr[1] rather than the element closer to the edge (arr[0]). In case 2 elements have equal distance from the edges, modify the one closer to the beginning of the array:
[-0.3, 0.15, 0.2, 0.7] # modify arr[1] rather than arr[2]
So far I have been doing this manually for small arrays, but I would like a general solution for larger arrays.
Here is a brute-force Python solution, where we try fixing the elements to the left or the elements to the right whenever there is a conflict, and keep the cheaper result:

def solve(arr, threshold):
    original = list(arr)

    def solve(idx):  # inner helper shadows the outer name; recursive calls hit this one
        if idx + 1 >= len(arr):
            # Base case: count how many elements were modified.
            return [sum(1 for x in range(len(arr)) if arr[x] != original[x]), list(arr)]
        if arr[idx + 1] - arr[idx] < threshold:
            copy = list(arr)
            # Option 1: push elements to the left of the conflict downwards.
            leftCost = 0
            while idx - leftCost >= 0 and arr[idx + 1] - arr[idx - leftCost] < threshold * (leftCost + 1):
                arr[idx - leftCost] = arr[idx - leftCost + 1] - threshold
                leftCost += 1
            left = solve(idx + 1)
            for cost in range(leftCost):
                arr[idx - cost] = copy[idx - cost]
            # Option 2: push elements to the right of the conflict upwards.
            rightCost = 0
            while idx + rightCost + 1 < len(arr) and arr[idx + rightCost + 1] - arr[idx] < threshold * (rightCost + 1):
                arr[idx + rightCost + 1] = arr[idx + rightCost] + threshold
                rightCost += 1
            right = solve(idx + 1)
            for cost in range(rightCost):
                arr[idx + cost + 1] = copy[idx + cost + 1]
            # Prefer the cheaper fix; on a tie, prefer the side closer to an edge.
            if right[0] < left[0]:
                return right
            elif left[0] < right[0]:
                return left
            else:
                return left if idx - left[0] <= len(arr) - idx - right[0] else right
        else:
            return solve(idx + 1)

    return solve(0)

print(solve([0, 0.26, 0.63, 0.7, 1.2], 0.25))
Edit: I just realized that my original solution was overcomplicated. Here is a simpler and better one.
First approach
If I understand your problem correctly, your input array can have some regions where your condition is not met. For instance:
array = [0.0, 0.0, 0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0] (first 4 elements)
or:
array = [0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.25, 1.5, 1.75] (elements arr[4], arr[5] and arr[6])
To fix that, you have to add (or subtract) some pattern like:
fixup = [0.0, 0.25, 0.0, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0] (for the first case)
or:
fixup = [0.0, 0.0, 0.0, 0.0, 0.25, 0.0, 0.25, 0.0, 0.0, 0.0, 0.0] (for the second example)
Second approach
But our current solution has a problem. Consider a bad area with an "elevation":
array = [0.0, 0.25, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.35, 1.6] (broken area is within values: 0.6-1.0)
In that case our correct "solution" will be:
fixup = [0.0, 0.0, 0.0, 0.25+0.1, 0.0, 0.25+0.1, 0.0, 0.25+0.1, 0.0, 0.0, 0.0]
which produces:
good_array = [0.0, 0.25, 0.5, 0.95, 0.7, 1.15, 0.9, 1.0, 1.1, 1.35, 1.6]
So to summarize, you have to apply the "patch":
fixup[i] = threshold+max(difference[i], difference[i-1]) (for i when i-start_index is even)
(please note that it will be -threshold+min(difference[i], difference[i-1]) for negative values)
and:
fixup[i] = 0 (for i when i-start_index is odd)
start_index is a beginning of the bad region.
Third approach
The previously mentioned formula doesn't work well in some cases (e.g. [0.1, 0.3, 0.4], where it would push 0.3 up to 0.75 when 0.65 is sufficient).
Let's try to improve on that:
good_array[i] = max(threshold+array[i-1], threshold+array[i+1]) (for abs(array[i-1]-array[i+1]) < threshold*2)
and:
good_array[i] = (array[i-1]+array[i+1])/2 otherwise.
(you can also choose formula: good_array[i] = min(-threshold+array[i-1], -threshold+array[i+1]) when it would produce a result closer to original array value, if minimizing difference is also your optimization goal)
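A direct transcription of the two formulas above for a single offending element array[i] (my own sketch; the helper name is hypothetical, and note the first branch lifts the element above both neighbours rather than keeping the array monotonic):

```python
def fix_single(array, i, threshold):
    """Third-approach fix for one offending element array[i]:
    lift it clear of both neighbours when they are close together,
    otherwise park it midway between them."""
    if abs(array[i - 1] - array[i + 1]) < threshold * 2:
        return max(threshold + array[i - 1], threshold + array[i + 1])
    return (array[i - 1] + array[i + 1]) / 2
```

On the example from the text, fix_single([0.1, 0.3, 0.4], 1, 0.25) gives 0.65, the value the improved formula is meant to produce.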
Fourth approach
Bad regions of even length are also a threat. I can think of a few ways to solve them:
Solution based on a pattern like [0.0, 0.25, 0.5, 0.0]
Or based on a pattern like [0.0, 0.25, -0.25, 0.0] (We are simply using "the second formula")
Or [0.0, 0.25, 0.0, 0.25] (just including an additional element to make the bad region length odd; I don't recommend this approach, as it would require handling a lot of corner cases)
Corner cases
Please consider also some corner cases (bad region starts or ends at an "edge" of the array):
good_array[0] = -threshold+array[1]
and:
good_array[array_size-1] = threshold+array[array_size-2]
Final hints
I would suggest implementing a lot of unit tests along the way, in order to easily verify the correctness of the derived formulas and to handle combinations of corner cases. Bad areas consisting of only one element are one such case.
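As a small aid for such unit tests, here is a validity check of my own (not part of the answer above) that asserts every consecutive gap meets the threshold; since the question's own good_array has a gap of exactly 0.25, it uses >= with a small tolerance rather than a strict >:

```python
def meets_threshold(arr, threshold, eps=1e-9):
    """True if every consecutive gap in the sorted array is at least `threshold`."""
    return all(b - a >= threshold - eps for a, b in zip(arr, arr[1:]))

# Examples from the question:
assert meets_threshold([-0.25, 0.0, 0.5, 0.8, 1.2], 0.25)     # good_array
assert not meets_threshold([-0.1, 0.0, 0.5, 0.8, 1.2], 0.25)  # original arr
```

Running a candidate solver's output through a check like this on many random sorted inputs is an easy way to catch broken corner cases.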
In F# there is a shorthand for creating an array of numbers. For example, the code
[1..10]
will create an array containing {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}.
Or
[-2..2]
will create {-2, -1, 0, 1, 2}.
Is there any related shorthand for creating an array in F# with a floating-point step? For example, an array like {-2.0, -1.5, -1.0, -0.5, 0, 0.5, 1.0, 1.5, 2} where the step is 0.5? Or is using a for or while loop the only way?
Yes, there is:
[-2.0 .. 0.5 .. 2.0]
This creates the list
[-2.0; -1.5; -1.0; -0.5; 0.0; 0.5; 1.0; 1.5; 2.0]
(use [|-2.0 .. 0.5 .. 2.0|] to get an array instead of a list).
Documentation: https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/loops-for-in-expression
Given an array of values:
v = np.random.randn(100)
What's the fastest way to compute the percentile of each element in the array? The following is slow:
%timeit map(lambda e: scipy.stats.percentileofscore(v, e), v)
100 loops, best of 3: 5.1 ms per loop
You could use scipy.stats.rankdata() to achieve the same result:
In [58]: v = np.random.randn(10)
In [59]: print(list(map(lambda e: scipy.stats.percentileofscore(v, e), v)))
[30.0, 40.0, 50.0, 90.0, 20.0, 60.0, 10.0, 70.0, 80.0, 100.0]
In [60]: from scipy.stats import rankdata
In [61]: rankdata(v)*100/len(v)
Out[61]: array([ 30., 40., 50., 90., 20., 60., 10., 70., 80., 100.])
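If you want to avoid scipy entirely, a double argsort also produces the ranks (a pure-NumPy sketch of my own; it matches rankdata only when v has no ties, since it assigns ordinal rather than averaged ranks):

```python
import numpy as np

v = np.random.randn(100)

# The second argsort turns sort positions into 0-based ranks;
# +1 makes them 1-based, matching rankdata for tie-free data.
perc = (np.argsort(np.argsort(v)) + 1) * 100.0 / len(v)
```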
I am working in Python. I have an angle quantity for which I want a varying step size instead of the uniform grid created by something like np.linspace(0, np.pi, 100). Specifically, I want more 'resolution' (a smaller step size) for values close to 0 and pi, with larger step sizes closer to pi/2 radians. Is there a simple way to implement this using a technique already provided in numpy or otherwise?
Here's how to use np.r_ to construct an array with closer spacing at the ends and wider spacing in the middle:
In [578]: x=np.r_[0:.09:10j, .1:.9:11j, .91:1:10j]
In [579]: x
Out[579]:
array([ 0. , 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08,
0.09, 0.1 , 0.18, 0.26, 0.34, 0.42, 0.5 , 0.58, 0.66,
0.74, 0.82, 0.9 , 0.91, 0.92, 0.93, 0.94, 0.95, 0.96,
0.97, 0.98, 0.99, 1. ])
then scale x with np.pi.
This is the kind of thing that np.r_ was created for. Not that it's doing anything special. It's doing the same as:
np.concatenate([np.linspace(0,.09,10),
np.linspace(.1,.9,11),
np.linspace(.91,1,10)])
For a smoother gradation in spacing, I'd try mapping a single linspace with a curve.
In [606]: x=np.arctan(np.linspace(-10,10,10))
In [607]: x -= x[0]
In [608]: x /= x[-1]
In [609]: x
Out[609]:
array([ 0. , 0.00958491, 0.02665448, 0.06518406, 0.21519086,
0.78480914, 0.93481594, 0.97334552, 0.99041509, 1. ])
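Applied to the original [0, pi] problem, the same arctan trick might look like this (my sketch; the endpoints -10 and 10 are an assumed tuning knob controlling how strongly points cluster at the ends):

```python
import numpy as np

# Squash a uniform grid through arctan: because arctan flattens out at the
# extremes, the mapped points bunch up near the ends of the interval and
# spread out in the middle.
t = np.arctan(np.linspace(-10, 10, 100))
t = (t - t[0]) / (t[-1] - t[0])   # normalize to [0, 1]
theta = t * np.pi                 # scale to [0, pi]
```

The step sizes are smallest near 0 and pi and largest near pi/2, which is exactly the requested behavior.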