Qubit state after this quantum circuit (Hadamard+Eigen Operator+Hadamard) - quantum-computing

Could anyone help me with this circuit? I'm trying to walk through this exercise but got stuck. The top register carries a single qubit and the bottom register carries an n-qubit state. |ψ_0〉 is an eigenstate of the operator U, i.e. U|ψ_0〉 = e^{2πiθ}|ψ_0〉, where 0 ≤ θ < 1.
What will the probability of measuring 0 on the top register be? And what will the output state |ψ_1〉 be?
I got P(0) on the first register = (2 + cos θ)/4, which seems really strange to me. Is it correct?
Cheers!
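For cross-checking the algebra, here is a minimal NumPy sketch, assuming the middle box is a controlled-U (the standard phase-kickback setup); since |ψ_0〉 is an eigenstate of U, the bottom register factors out and only the control qubit's amplitudes need tracking:

```python
import numpy as np

def p0_after_circuit(theta):
    # Control qubit starts in |0>; the bottom register stays in the
    # eigenstate, so controlled-U just multiplies the |1> amplitude
    # by exp(2*pi*i*theta) (phase kickback).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = H @ np.array([1.0, 0.0], dtype=complex)  # first Hadamard
    state[1] *= np.exp(2j * np.pi * theta)           # controlled-U kickback
    state = H @ state                                # second Hadamard
    return abs(state[0]) ** 2

# e.g. theta = 0 gives P(0) = 1, theta = 0.5 gives P(0) = 0
```

Comparing the output against a candidate formula for a few values of θ is a quick way to test a derivation.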

Related

Solving a second-order differential equation with both Dirichlet and Neumann boundary conditions

I want to solve Fourier's law / the heat equation
for an insulated, electrically heated rod:
T''(x) + Q/k = 0
with a Dirichlet boundary condition of T(L) = TL
and a Neumann boundary condition of T'(0) = -q/k,
where
x is the length coordinate
L is the length of the rod
k is the thermal conductivity of the material (assumed constant)
Q is the internal heat generation per unit length
q is the heat load at the left side
TL is the ambient temperature on the right side
To solve the differential equation I used:
eqn : 'diff(T, x, 2) + Q / k = 0;
sol : ode2(eqn, T, x);
giving the correct general form of T(x) = -Q*x^2/(2*k) + %k1*x + %k2,
however when applying the boundary conditions using:
bc2(sol, x=0, 'diff(T, x)=-q/k, x=L, T=TL);
I get the wrong answer of
while what I expected to see was
I would appreciate it if you could help me understand what the problem is and how I can resolve it.
In this specific case, because the Neumann boundary condition is at x = 0, I could use the
ic2(sol, x=L, T=TL, 'diff(T, x)=-q/k);
to get the correct result:
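As a cross-check on the algebra (in SymPy rather than Maxima, purely to verify the closed form implied by the boundary conditions stated in the code above):

```python
import sympy as sp

x, L, k, Q, q, TL = sp.symbols('x L k Q q T_L', positive=True)
T = sp.Function('T')

# T'' + Q/k = 0 with T(L) = TL (Dirichlet) and T'(0) = -q/k (Neumann)
ode = sp.Eq(T(x).diff(x, 2) + Q / k, 0)
sol = sp.dsolve(ode, T(x),
                ics={T(L): TL, T(x).diff(x).subs(x, 0): -q / k})

# should simplify to T_L + q*(L - x)/k + Q*(L**2 - x**2)/(2*k)
print(sp.simplify(sol.rhs))
```

That expression satisfies all three conditions by hand: T'' = -Q/k, T(L) = TL, and T'(0) = -q/k.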

Find leading 1s in the binary number

Hi everyone,
I am new to C and I am trying to understand how to work with bytes, binary numbers and other things important for a beginner.
I hope someone can push me in the right direction here.
For example, I have a 32-bit number 11000000 10101000 00000101 00000000 (3232236800).
I also assigned each byte of this number to a separate variable: a = 11000000 (192), b = 10101000 (168), c = 00000101 (5), d = 00000000 (0). I am not sure if I really need this.
Is there any way to find the position of the highest 1 in the number and use this location to count the number of leading 1s?
Thank you for help!
You can determine the bit position of the first leading 1 with this formula:
floor(ln(number)/ln(2))
where floor() means rounding down.
For counting the number of consecutive leading ones (if I get the second part of your question correctly) I can only imagine a loop.
Note 1:
ln(number)/ln(2) is the change-of-base formula for the logarithm of number to base 2.
The same works with log10(); you can use a logarithm to any base this way.
Note 2:
It is of course questionable whether this formula is more efficient than searching from the MSB downwards with a loop. It could be, with good FPU support; it probably is not for 8-bit values. Benchmark before using it to optimise for speed.
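Both ideas can be sketched in Python (the function names are made up for illustration); note that `int.bit_length()` gives the same answer as the floor-of-log formula without any floating-point concerns:

```python
def msb_position(n):
    # Bit position of the highest set 1 (0-based), for n > 0.
    # Equals math.floor(math.log2(n)) but stays exact for big ints.
    return n.bit_length() - 1

def leading_ones(n, width=32):
    # Count consecutive 1 bits starting from the most significant
    # bit of a fixed-width value.
    count = 0
    for i in range(width - 1, -1, -1):
        if (n >> i) & 1:
            count += 1
        else:
            break
    return count

# 3232236800 == 0b11000000_10101000_00000101_00000000
# msb_position -> 31, leading_ones -> 2
```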
Sorry, I am not a C expert, but here's the Python code I came up with:
num_ones = 0
while integer > 0:
    if integer % 2 == 1:
        num_ones += 1
    else:
        num_ones = 0
    integer = integer >> 1
Basically, I count runs of consecutive 1's by bit-shifting the given integer from the least significant end; a zero resets the counter, so the final value is the length of the run of 1's at the most significant end.

Point inside 2D axis aligned rectangle, no branches

I'm searching for the most optimized method to detect whether a point is inside an axis-aligned rectangle.
The easiest solution needs 4 branches (ifs), which is bad for performance.
Given a segment [x0, x1], a point x is inside the segment when (x0 - x) * (x1 - x) <= 0.
In the two-dimensional case you do this twice, so it requires two conditionals.
Consider bitwise-ANDing the values XMin-X, X-XMax, YMin-Y and Y-YMax and using the sign bit of the result.
This works with both ints and floats (for floats, AND the raw sign bits).
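A sketch of this trick using Python integers (which behave as arbitrary-width two's complement under `&`); note that a zero difference has a clear sign bit, so points exactly on the boundary report as outside:

```python
def inside_strict(x, y, xmin, xmax, ymin, ymax):
    # All four differences are negative exactly when the point is
    # strictly inside, so the sign bit of their bitwise AND is set
    # only in that case.
    return ((xmin - x) & (x - xmax) & (ymin - y) & (y - ymax)) < 0
```

In C the same idea would OR/AND the differences and test the sign bit directly, without a comparison per coordinate.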
I think you will need the four tests no matter what, but if you know whether the point is more likely to be inside or outside the rectangle, you can arrange for all four tests to run only in the worst case.
If the likelihood of the point being inside is higher, you can do
if ((x>Xmax) || (x<Xmin) || (y>Ymax) || (y<Ymin)) {
// point not in rectangle
}
Otherwise, do the opposite:
if ((x<=Xmax) && (x>=Xmin) && (y<=Ymax) && (y>=Ymin)) {
// point in rectangle
}
I am curious whether there really is anything better... (unless you can make some assumptions about the rectangle edges, like their being aligned to powers of 2 or something funky like that).
Many architectures support a branchless absolute value operation. If not, it can be simulated by multiplication, or by left-shifting a signed value and relying on particular "implementation dependent" behaviour.
Also it's quite possible that in Intel and ARM architectures the operation can be made branchless with
((x0<x) && (x<x1))&((y0<y) && (y<y1))
The reason is that the range check is often optimized to a sequence:
mov ebx, 1 // not needed on arm
sub eax, imm0
sub eax, imm1 // this will cause a carry only when both conditions are met
cmovc eax, ebx // movcs reg, #1 on ARM
The bitwise and between (x) and (y) expressions is also branchless.
EDIT Original idea was:
Given the test range a <= x <= b, first define the middle point mid = (a+b)/2. Then both sides can be tested with |x - mid| < A; multiply by a factor B chosen so that A*B is a power of two, 2^n...
(x-mid)*B < 2^n, and squaring:
((x-mid)*B)^2 < 2^(2n)
This value has bits set only in the least significant 2n bits (if the condition is satisfied). Do the same for the y range and OR them; the factor C must be chosen so that ((y-midy)*C)^2 scales to the same 2^(2n).
return ((((x-mid)*B)*((x-mid)*B)) | (((y-midy)*C)*((y-midy)*C))) >> (n*2);
The return value is 0 for x,y inside the AABB and non-zero for x,y outside.
(Here the operation is OR, as one is interested in the complement of (a&&b) && (c&&d), which is !(a&&b) || !(c&&d).)
You don't tell us what you know about the range of possible values or the resolution required, nor what criterion you want to optimize for.
One solution is to precompute a 2D array of booleans (if you can afford it) that you look up with your pair of coordinates. Costs 1 multiply (or shift) and 1 add (for the address computation), plus 1 memory read.
Or two 1D arrays of booleans: 2 adds, 2 memory reads and 1 AND, with much smaller tables.
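A sketch of the two-1D-table variant in Python, with a made-up 16x16 coordinate grid and rectangle [2,7] x [1,5]:

```python
def build_lut(lo, hi, size):
    # 1-D boolean table: lut[v] == 1 iff lo <= v <= hi.
    return bytearray(1 if lo <= v <= hi else 0 for v in range(size))

in_x = build_lut(2, 7, 16)   # x range of the rectangle
in_y = build_lut(1, 5, 16)   # y range of the rectangle

def inside(x, y):
    # Two memory reads and one AND; no comparisons at query time.
    return in_x[x] & in_y[y]
```

The tables only pay off when coordinates are small integers (or can be quantized) and the rectangle is fixed across many queries.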

What does 'zero of the function will be found within the precision limit ϵ = 10⁻³' mean in this?

Well, the question is: "Write C code that finds the zero of a function y = ax + b, without solving the equation. The zero will be found within the precision limit ϵ = 10⁻³. You'll start at x = 0, and move x in the proper direction until |y| < ϵ."
I'm a newbie to programming, and don't know anything about this ϵ thing either.
Help me out!!
It means you have to satisfy the inequality |ax + b| < 10⁻³ by trying different values of x.
Since this is a linear function, it's easy. Start with some value of x and then increase or decrease it depending on the result of ax + b. I.e. if you move in one direction and the results get farther from zero, follow the opposite direction.
You will have to develop an algorithm that decides the increments/decrements of x.
|y| < 10⁻³, or in other words, -0.001 < y < 0.001.
You must increase or decrease x (starting from 0, as you've said) in order to make y take a value between -0.001 and 0.001.
As for ϵ (a.k.a. epsilon): it denotes a very small value. For this problem, ϵ is a tolerance, since y is not required to take a strict value of 0.
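The exercise asks for C, but the stepping idea reads the same in a short Python sketch (the step size dx is a made-up choice, and a is assumed non-zero):

```python
def find_zero(a, b, eps=1e-3, dx=1e-4):
    # Start at x = 0 and step x in the direction that shrinks
    # |y| = |a*x + b| until it falls below the tolerance eps.
    # Assumes a != 0, and dx small enough that a step cannot
    # jump across the whole tolerance band.
    x = 0.0
    y = a * x + b
    step = -dx if (y > 0) == (a > 0) else dx
    while abs(y) >= eps:
        x += step
        y = a * x + b
    return x
```

The returned x satisfies |ax + b| < ϵ, which is all the precision limit demands; it need not be exactly -b/a.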

Optimal algorithm to calculate the result of a continued fraction

A continued fraction is a series of divisions of this kind:
depth 1: 1 + 1/s
depth 2: 1 + 1/(1 + 1/s)
depth 3: 1 + 1/(1 + 1/(1 + 1/s))
...
The depth is an integer, but s is a floating point number.
What would be an optimal algorithm (performance-wise) to calculate the result of such a fraction with large depth?
Hint: unroll each of these formulas using basic algebra. You will see a pattern emerge.
I'll show you the first steps so it becomes obvious:
f(2,s) = 1+1/s = (s+1)/s
f(3,s) = 1+1/f(2,s) = 1+(s/(s+1)) = (1*(s+1) + s)/(s+1) = (2*s + 1) / (s + 1)
/* You multiply the first "1" by denominator */
f(4,s) = 1+1/f(3,s) = 1+(s+1)/(2s+1) = (1*(2*s+1) + (s+1))/(2*s+1) = (3*s + 2) / (2*s + 1)
f(5,s) = 1+1/f(4,s) = 1+(2s+1)/(3s+2) = (1*(3*s+2) + (2s+1))/(3*s+2) = (5*s + 3) / (3*s + 2)
...
Hint 2: if you don't see the obvious pattern emerging from the above, the optimal algorithm involves calculating Fibonacci numbers (so you'd want to look up an optimal Fibonacci number generator).
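Working out DVK's pattern: f(d,s) = (F(d)*s + F(d-1)) / (F(d-1)*s + F(d-2)), where F is the Fibonacci sequence with F(0)=0, F(1)=1 (check: f(2,s) = (s+1)/s, f(3,s) = (2s+1)/(s+1)). With a fast-doubling Fibonacci generator, that evaluates the whole fraction in O(log d) instead of d divisions. A sketch:

```python
def fib_pair(n):
    # Fast doubling: returns (F(n), F(n+1)) in O(log n) steps.
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)
    c = a * (2 * b - a)      # F(2k)
    d = a * a + b * b        # F(2k+1)
    if n % 2:
        return (d, c + d)
    return (c, d)

def cf_closed(d, s):
    # f(d, s) = (F(d)*s + F(d-1)) / (F(d-1)*s + F(d-2))
    f_d, f_d1 = fib_pair(d)   # F(d), F(d+1)
    f_dm1 = f_d1 - f_d        # F(d-1)
    f_dm2 = f_d - f_dm1       # F(d-2)
    return (f_d * s + f_dm1) / (f_dm1 * s + f_dm2)

def cf_loop(d, s):
    # Direct evaluation for comparison: f(2, s) = 1 + 1/s, etc.
    v = s
    for _ in range(d - 1):
        v = 1 + 1 / v
    return v
```

For large d both converge to the golden ratio, as the next answer explains.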
I'd like to elaborate a bit on DVK's excellent answer. I'll stick with his notation f(d,s) to denote the sought value for depth d.
If you calculate the value f(d,s) for large d, you'll notice that the values converge as d increases.
Let φ=f(∞,s). That is, φ is the limit as d approaches infinity, and is the continued fraction fully expanded. Note that φ contains a copy of itself, so that we can write φ=1+1/φ. Multiplying both sides by φ and rearranging, we get the quadratic equation
φ² - φ - 1 = 0
which can be solved to get
φ = (1 + √5)/2.
This is the famous golden ratio.
You'll find that f(d,s) is very close to φ as d gets large.
But wait. There's more!
As DVK pointed out, the formula for f(d,s) involves terms from the Fibonacci sequence; in particular, ratios of successive terms of the sequence. There is a closed-form expression for the nth term, namely
(φⁿ - (1-φ)ⁿ)/√5.
Since |1-φ| is less than one, (1-φ)ⁿ gets small as n gets large, so a good approximation to the nth Fibonacci term is φⁿ/√5. And getting back to DVK's formula, the ratio of successive terms of the Fibonacci sequence tends to φⁿ⁺¹/φⁿ = φ.
So that's a second way of getting to the fact that the continued fraction in this question evaluates to φ.
Smells like tail recursion(recursion(recursion(...))).
(In other words - loop it!)
I would start by calculating 1/s, which we will call a.
Then use a for-loop, because if you use recursion in C you may experience a stack overflow.
Since this is homework I won't give much code, but if you start with a simple loop for depth 1, then keep increasing the depth until you get to 4, you can then generalize to n.
Since division is expensive, doing each division just one time will help with performance.
I expect that if you work it out that you can actually find a pattern that will help you to further optimize.
You may find an article such as this: http://www.b-list.org/weblog/2006/nov/05/programming-tips-learn-optimization-strategies/, to be helpful.
I am assuming by performance-wise you mean that you want it to be fast, regardless of memory used, btw.
You may find that if you cache the values that you calculated, at each step, that you can reuse them, rather than redoing an expensive calculation.
I personally would do 4-5 steps by hand, writing out the equations and results of each step, and see if any pattern emerges.
Update:
GCC has tail-call optimization, and I had never noticed, since out of habit I try to limit recursion heavily in C. But this answer has a nice quick explanation of the different optimizations gcc performs at each optimization level.
http://answers.yahoo.com/question/index?qid=20100511111152AAVHx6s
