Quantum Fourier Transformation product representation - quantum-computing

I have problems understanding the following step in Nielsen's and Chuang's Quantum Computation and Quantum Information (page 218, equations 5.9 and 5.10):
Could someone please help me understand this step? I tried it on an example with n = 3 and j = 5 but could not get it to work. Somehow e^(2 * pi * i * j_k / 2^k) has to be 1 if k < (n+1-l).
I tried to work with the definitions of binary fraction and the binary representation given on the same page.
Thanks a lot, please let me know if some information is missing.

I think I finally found out how it works. The exponent terms are indeed 1 for all bits of j up to position (n-l). Thus for l=1 only j_n stays, and in general for k = 1 to k = n-l it holds that e^(2 * pi * i * j_k * 2^(n-k-l)) = 1, since either j_k is 0, or, if j_k is 1, the exponent n-k-l is a non-negative integer.
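Spelled out in LaTeX, with j = j_1 j_2 ... j_n as defined in the book, the step is:

\frac{j}{2^{l}} = \sum_{k=1}^{n} j_k\, 2^{\,n-k-l}
\quad\Longrightarrow\quad
e^{2\pi i\, j/2^{l}}
= \prod_{k=1}^{n} e^{2\pi i\, j_k 2^{\,n-k-l}}
= \prod_{k=n-l+1}^{n} e^{2\pi i\, j_k 2^{\,n-k-l}}
= e^{2\pi i\, 0.j_{n-l+1}\ldots j_n}

since every factor with k <= n-l has a non-negative integer exponent and equals 1. Checking my example n = 3, j = 5 = 101_2 with l = 2: j/2^2 = 5/4 = 1 + 0.01_2, so e^(2 pi i * 5/4) = e^(2 pi i * 0.j_2 j_3) = e^(2 pi i / 4) = i.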

Related

How to efficiently evaluate or approximate a road Clothoid?

I'm facing the problem of computing values of a clothoid in C in real-time.
First I tried using the Matlab Coder to obtain auto-generated C code for the quadgk integrator for the Fresnel formulas. This essentially works great in my test scenarios. The only issue is that it runs incredibly slowly (in Matlab as well as in the auto-generated code).
Another option was interpolating a data-table of the unit clothoid connecting the sample points via straight lines (linear interpolation). I gave up after I found out that for only small changes in curvature (tiny steps along the clothoid) the results were obviously degrading to lines. What a surprise...
I know that circles may be plotted using a different formula but low changes in curvature are often encountered in real-world-scenarios and 30k sampling points in between the headings 0° and 360° didn't provide enough angular resolution for my problems.
Then I tried a Taylor approximation around the R = inf point, hoping that there would be significant curvatures everywhere I wanted them to be. I soon realized I couldn't use more than 4 terms (power of 15), as the polynomial otherwise quickly becomes unstable (probably due to numerical inaccuracies in double-precision floating-point computation). Thus accuracy obviously degrades quickly for large t values. And by "large t values" I'm talking about every point on the clothoid that represents a curve of more than 90° w.r.t. the zero-curvature point.
For instance when evaluating a road that goes from R=150m to R=125m while making a 90° turn I'm way outside the region of valid approximation. Instead I'm in the range of 204.5° - 294.5° whereas my Taylor limit would be at around 90° of the unit clothoid.
I'm kinda done randomly trying out things now. I mean I could just try to spend time on the dozens of papers one finds on that topic. Or I could try to improve or combine some of the methods described above. Maybe there even exists an integrate function in Matlab that is compatible with the Coder and fast enough.
This problem is so fundamental that it feels like I shouldn't have this much trouble solving it. Any suggestions?
About the 4 terms in the Taylor series: you should be able to use many more. A total theta of 2*pi is certainly doable with doubles.
You're probably calculating each term in isolation, according to the full formula, computing the full factorial and power values every time. That is the reason for losing precision extremely fast.
Instead, calculate the terms progressively, each one from the previous one: find the formula for the ratio of the next term over the previous one in the series, and use it.
For increased precision, do not calculate in theta but rather in the distance s (so as not to lose precision in the scaling).
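To make the progressive-term idea concrete, here is a minimal C sketch for the Fresnel cosine integral C(t) = INT[u=0..t] cos(u^2) du (then x = a * C(s/a), see below); the function name, term cap, and stopping tolerance are my own choices:

#include <math.h>

/* C(t) = integral of cos(u^2) du from 0 to t, via its Taylor series.
 * Each term is generated from the previous one using the term ratio
 * -t^4 * (4k+1) / ((2k+1)(2k+2)(4k+5)), so no explicit factorials
 * or large powers are ever formed. */
double fresnel_c(double t)
{
    double t4 = t * t * t * t;
    double term = t;              /* k = 0 term: t^1 / (0! * 1) */
    double sum = term;
    for (int k = 0; k < 60; k++) {
        term *= -t4 * (4*k + 1) / ((double)(2*k + 1) * (2*k + 2) * (4*k + 5));
        sum += term;
        if (fabs(term) < 1e-16 * fabs(sum))
            break;
    }
    return sum;
}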
Your example is an extremely flat clothoid. If I made no mistake, it goes from (25/22)*pi =~ 204.545° to (36/22)*pi =~ 294.545° (why not include these details in your question?). Nevertheless it should be OK. Even 2*pi = 360°, the full circle (and twice that), should pose no problem.
given: r = 150 -> 125, 90 degrees turn:
r*s = A^2 :  150*s = 125*(s+x)
=> 1 + (x/s) = 150/125 = 1 + 25/125  =>  x/s = 1/5
theta  = s^2/(2*A^2) = s^2/(300*s) = s/300        ( = (pi/2)*(25/11) = 204.545° )
theta2 = (s+x)^2/(300*s) = (6/5)^2 * s/300        ( = (pi/2)*(36/11) = 294.545° )
theta2 - theta = (36/25 - 1) * s/300 = pi/2
=> s = 300*(pi/2)*(25/11) = 1070.99749554 ,  x = s/5 = 214.1994991
A^2 = 150*s = 150 * 300*(pi/2)*(25/11)
a = sqrt(2*A^2) = 300*sqrt((pi/2)*(25/11)) = 566.83264608
The reference point is at r = Infinity, where theta = 0.
We have x = a * INT[u=0..(s/a)] cos(u^2) du, where a = sqrt(2 r s) and theta = (s/a)^2. Write out the Taylor series for cos, and integrate it term by term, to get your Taylor approximation for x as a function of the distance s along the curve from the 0-point. That's all.
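Written out, the term-by-term integration gives the standard series (the y series is the analogous one for sin, not discussed above but standard):

x(s) = a \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!\,(4k+1)} \left(\frac{s}{a}\right)^{4k+1},
\qquad
y(s) = a \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!\,(4k+3)} \left(\frac{s}{a}\right)^{4k+3}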
Next you have to decide with what density to calculate your points along the clothoid. You can find it from a desired tolerance value above the chord, for your minimal radius of 125 (see the sagitta relation below). These points will then define the approximation of the curve by line segments drawn between consecutive points.
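For the record, the chord-tolerance relation is the usual sagitta approximation (the 0.01 tolerance below is just an example value):

h \approx \frac{c^2}{8r} \quad\Longrightarrow\quad c \approx \sqrt{8\,r\,h}

For r = 125 and a tolerance h = 0.01 (same length units), c ≈ sqrt(8 * 125 * 0.01) = sqrt(10) ≈ 3.16, so one point roughly every 3 length units suffices in the tightest part.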
I am doing my thesis in the same area right now.
My approach is the following.
At each point on your clothoid, calculate (change in heading) / (distance traveled along the clothoid); with this simple formula you can calculate the curvature at each point (a C sketch of this computation follows below).
Then plot each curvature value: the x-axis is the distance along the clothoid, the y-axis is the curvature. By plotting this and applying a very simple linear-regression algorithm (search for a Peuker algorithm implementation in your language of choice),
you can easily identify which curve sections have a value of zero (a line has no curvature), a linearly increasing or decreasing value (Euler spiral, CCW/CW), or a constant value != 0 (an arc has constant curvature at all of its points).
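A rough C sketch of that curvature computation over a sampled path (function and array names are mine; the endpoints are skipped):

#include <math.h>

/* Discrete curvature profile along a sampled path (x[i], y[i]):
 * curvature ~ change in heading / distance traveled between samples.
 * Fills s[i] (arc length so far) and kappa[i] for interior points. */
void curvature_profile(const double *x, const double *y, int n,
                       double *s, double *kappa)
{
    double dist = 0.0;
    for (int i = 1; i < n - 1; i++) {
        double h0 = atan2(y[i]   - y[i-1], x[i]   - x[i-1]);
        double h1 = atan2(y[i+1] - y[i],   x[i+1] - x[i]);
        double dh = atan2(sin(h1 - h0), cos(h1 - h0)); /* wrap to (-pi, pi] */
        double ds = hypot(x[i+1] - x[i], y[i+1] - y[i]);
        dist += hypot(x[i] - x[i-1], y[i] - y[i-1]);
        s[i] = dist;
        kappa[i] = dh / ds;
    }
}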
I hope this will help you a little bit.
You can find my code on GitHub; I implemented some algorithms for such problems, like the Peuker algorithm.

What is the advantage of linspace over the colon ":" operator?

Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand the former produces a vector, as the latter does too.
Can anyone state me some situation where linspace is useful over t = 0:1:20?
It's not just the usability. Though the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
it is not quite the same. The main difference and advantage of linspace is that it generates a vector of integers of the desired length (or 100 by default) and scales it afterwards to the desired range. The colon operator : creates the vector directly, by increments.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to be in exactly the right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating-point issue, which can be avoided by linspace, as a single division of an integer is not as delicate as the cumulative sum of floating-point numbers. But as Mark Dickinson pointed out in the comments:
You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for.
In my opinion it's a matter of how likely you are to get floating-point issues, and how much you can reduce their probability or how small you can set the tolerances. Using linspace can reduce the probability of these issues occurring; it is not a guarantee.
That's the code of linspace:
n1 = n-1;
c = (d2 - d1).*(n1-1);               % opposite signs may cause overflow
if isinf(c)
    y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1);
else
    y = d1 + (0:n1).*(d2 - d1)/n1;
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry.
In your special case, as you create a vector of integers, there is no advantage to linspace (apart from usability), but when it comes to floating-point-delicate tasks, there may be.
The answer of Sam Roberts provides some additional information and clarifies further things, including some statements of MathWorks regarding the colon operator.
linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.
linspace is useful where you know the number of elements you want rather than the size of the "step" between them. So, as a contrived example, if I said "make a vector with 360 elements between 0 and 2*pi", it's either going to be
linspace(0, 2*pi, 360)
or if you just had the colon operator you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient.
For a simple real world application, see this answer where linspace is helpful in creating a custom colour map

Hello, I have a computational question regarding combinations/permutations

A brief intro: I am creating some medical software. I have forgotten some of the combination/permutation theorems from college. Let's say I have five nerves: median, ulnar, radial, tibial, peroneal. I can choose one, two, three, four, or all five of them, in any combination. What is the equation to find the maximum number of combinations I can make?
For example;
median
median + ulnar
median + ulnar + radial
etc etc
ulnar + median = median + ulnar, so those would be repetitive. Thank you for your help. I know this isn't directly programming related, but I thought you guys would be familiar with it.
The comment that says it is (2^n)-1 is correct. 2^n is the number of possible subsets you can form from a set of n objects (in this case you have 5 objects), and then in your case, you don't want to count the empty set, so you subtract out 1.
I'm sure you can do the math, but for the sake of completeness, for 5 nerves, there would be 2^5 - 1 = 32 - 1 = 31 possible combinations you could end up with.
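Equivalently, you can count the non-empty subsets by size and sum the binomial coefficients; in LaTeX:

\sum_{k=1}^{5} \binom{5}{k} = 5 + 10 + 10 + 5 + 1 = 31 = 2^5 - 1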

Implementing Geometric Median

When I google for geometric median, I get this link: Geometric median, but I have no clue how to implement it in C. I am not very good at understanding this mathematical explanation. Let's say I have 11 pairs of coordinates; how will I calculate the geometric median for them?
I am trying to solve this problem: Grid City. I was given a hint that the geometric median will help me achieve it. I am not looking for a final solution. If someone can guide me onto the right path, that would help.
Thanks in advance.
Below is the list of coordinates (a test case). Result: 3 4
1 2
1 7
2 2
2 3
2 5
3 4
4 2
4 5
4 6
5 3
6 5
I don't think this is solvable without an iterative algorithm.
Here is a pseudocode solution similar to the hill-climbing version, except that it works to arbitrary accuracy, and in higher dimensions.
CurrentPoint = Mean(Points)
While (CurrentPoint - PreviousPoint) Length > 0.01 Do
    Points2 = Empty List
    For Each Point in Points Do
        Vector = CurrentPoint - Point
        Vector Length = Vector Length - 1.0
        // Point2 ends up 1.0 unit short of CurrentPoint, on the Point side
        Point2 = Point + Vector
        Add Point2 To Points2
    Loop
    PreviousPoint = CurrentPoint
    CurrentPoint = Mean(Points2)
Loop
Notes:
The constant 0.01 does not guarantee the result to be within 0.01 of the true value. Use smaller values for better precision.
The constant 1.0 should be adjusted to (I'm guessing) about 1/5 of the distance between the furthest points. Values that are too small will slow down the algorithm, but values that are too large will cause inaccuracies, probably leading to an infinite loop.
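For comparison, the standard iterative method for the geometric median is Weiszfeld's algorithm, a fixed-point iteration that starts from the centroid. A minimal C sketch (the function name, iteration cap, and tolerances are my choices):

#include <math.h>

typedef struct { double x, y; } Point;

/* Weiszfeld iteration: repeatedly replace the estimate by the
 * distance-weighted average of the points. Points that coincide
 * with the current estimate are skipped to avoid division by zero. */
Point geometric_median(const Point *p, int n)
{
    Point c = {0.0, 0.0};
    for (int i = 0; i < n; i++) { c.x += p[i].x; c.y += p[i].y; }
    c.x /= n; c.y /= n;                      /* start at the centroid */

    for (int iter = 0; iter < 1000; iter++) {
        double sx = 0.0, sy = 0.0, sw = 0.0;
        for (int i = 0; i < n; i++) {
            double d = hypot(p[i].x - c.x, p[i].y - c.y);
            if (d < 1e-12) continue;         /* coincident point */
            sx += p[i].x / d;
            sy += p[i].y / d;
            sw += 1.0 / d;
        }
        if (sw == 0.0) break;                /* all points coincide with c */
        Point next = { sx / sw, sy / sw };
        double step = hypot(next.x - c.x, next.y - c.y);
        c = next;
        if (step < 1e-9) break;
    }
    return c;
}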
To resolve this problem, you just have to compute the mean for each coordinate and round up the result. That should solve it.
You are not obliged to use the concept of the geometric median; seeing that it is not easy to calculate, you are better off solving your problem without calculating it!
Here is an idea for an algorithm/implementation (a C sketch of the descent follows the list):
1. Start at any point (e.g. the first point in the given data).
2. Calculate the sum of distances for the current point and the 8 neighboring points (+/-1 in each direction, x and y).
3. If one of the neighbors is better than the current point, update the current point and start from 1.
4. (Found the optimal distance; now choose the best point among those with equal distance.)
5. Calculate the sum of distances for the current point and the 3 neighboring points (-1 in each direction, x and y).
6. If one of the neighbors has the same sum as the current point, update the current point and continue from 5.
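A C sketch of steps 1-3 of this descent, assuming integer grid coordinates (the names are mine):

#include <math.h>

typedef struct { int x, y; } IPoint;

/* Sum of Euclidean distances from (x, y) to all points. */
static double sum_dist(const IPoint *p, int n, int x, int y)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += hypot((double)(p[i].x - x), (double)(p[i].y - y));
    return s;
}

/* 8-neighbor descent: move to the best neighboring grid point
 * until no neighbor improves the sum of distances (steps 1-3). */
IPoint grid_median(const IPoint *p, int n)
{
    IPoint c = p[0];                          /* step 1: start anywhere */
    for (;;) {
        double best = sum_dist(p, n, c.x, c.y);
        IPoint bestp = c;
        for (int dx = -1; dx <= 1; dx++)      /* step 2: the 8 neighbors */
            for (int dy = -1; dy <= 1; dy++) {
                double d = sum_dist(p, n, c.x + dx, c.y + dy);
                if (d < best) {
                    best = d;
                    bestp.x = c.x + dx;
                    bestp.y = c.y + dy;
                }
            }
        if (bestp.x == c.x && bestp.y == c.y) /* step 3: no improvement */
            return c;
        c = bestp;
    }
}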
The answer is (xi, yj), where xi is the median of all the x's and yj is the median of all the y's.
As I commented, the solution to your problem is not the geometric mean, but the arithmetic mean.
To calculate the arithmetic mean, you need to sum all the values of the column and divide the result by the number of elements.

Resampling a sound sample, what filter do I use?

I am trying to resample a signal (sound sample) from one sampling rate, to a higher sampling rate.
Unfortunately it needs some kind of filter, as some 'aliasing' appears to occur, and I'm not familiar with filters. Here is what I came up with:
int i, j, a, b, z;
a = 44100;
b = 8363;

// upsample by a
for (i = z = 0; i < samplen; i++)
    for (j = 0; j < a; j++)
        cbuf[z++] = sampdata[i];

// some filter goes here???

// downsample by b
for (j = i = 0; i < z; i += b)
    buf[j++] = cbuf[i];
The new sample is very similar to the original, but it has some kind of noise.
Can you please tell me what filter I need to add, and preferably some code related to that filter?
Original sound: http://www.mediafire.com/?9gnga1in52d6t4x
Resampled sound: http://www.mediafire.com/?x34h7ggk8n9k8z1
Don't use linear interpolation unless both sample rates (source and destination) are well above the highest frequency in your data. It's a very poor low-pass filter.
What you want is an interpolating low pass filter with a stop-band starting below half the lower of the two sample rates you are dealing with. Common methods of implementing this are upsampling/downsampling using IIR filters, and using poly-phase FIR filters. A windowed Sinc interpolator also works well for this if you don't need real-time performance, and don't want to upsample/downsample. Here's a Windowed Sinc interpolating low-pass filter in Basic, that should be trivial to convert into C.
If you want to use IIR filtering, here's the canonical Cookbook for biquad IIR filters.
If you want the best explanation of audio resampling theory, here's Stanford CCRMA's Resampling page.
Have you considered using a specialised library for this, such as libsamplerate?
It is quite portable and it is developed by people who know how to do things like this correctly. Even if you do not use it directly, you might find the algorithms it implements quite interesting.
A few comments, although I'm only guessing at your actual intent:
You are up-sampling at a rate 44100 times the original sample rate. For example, if your input was at 10kHz your intermediate cbuf[] would be at 441MHz which is a tad high for most audio analysis. Assuming you want cbuf[] to be at 44100Hz then you only need to create 44100/OrigSampleRate of samples in cbuf[] per sample in sampdata[].
You are incrementing z twice in the up-sampling loop. This results in all odd elements of cbuf[] keeping their original values. I believe this ultimately results in the final buf[] having invalid odd elements, which may be the source of your noise. There is also a potential buffer overflow in cbuf if you didn't create it with at least twice the required number of elements.
As mentioned by Steve, linear interpolation is generally the simplest approach that creates a good result when up-sampling. More complicated up-sampling can be done if desired (polynomials, splines, etc.). Similarly, when down-sampling, you may wish to average samples instead of just truncating.
The best resampling code I ever came across: http://shibatch.sourceforge.net/
Take the source and try to learn something from it. It is in nasty condition, but the results of that resampler are far above everything else.
Use FFMpeg and avcodec directly. Here's a good example showing how to do this:
http://tdistler.com/projects/audio-resampling-with-ffmpeg
Before you resample to a lower sample rate, you MUST low-pass filter the original at less than 1/2 the new sample rate, or you will introduce aliasing artifacts: the spectrum folds back upon itself for frequencies above 1/2 the sample rate. So if you want to resample from 44100 to 11025, you must low-pass filter the 44100 signal at 1/2 of 11025, about 5500 Hz. Since faithfulness of reproduction decreases with lower bandwidths, it's best to do this at high amplitude, say -10 dB: for 16-bit signed samples that is about 10^(-10/20) * 2^(16-1) ≈ 10362 peak amplitude. The exact algorithms can be found online, since there should be no intellectual-property rights on these old and basic ideas. Do all calculations in double-precision floating point with no rounding, then round the results to their proper integer values and interpolate on the time scale exactly where the one set intercepts the other. It requires quite an imagination and memory and previous experience, which then puts you in the realm of the mathematician/physics programmer. :-O :-)
Linear interpolation works quite well here. The issue is with the author's code: it's not linear interpolation, it's just taking the nearest value without any interpolation at all.
Here is an example of linear interpolation with source sample rate = 5 and destination sample rate = 6
src val: 5 10 5 0 5 (this is our audio data, 5 samples)
src idx: 0 1 2 3 4 (source positions for 5 samples)
dst idx: 0 1 2 3 4 5 (destination positions for 6 samples)
dst val: ? ? ? ? ? ?
First, let's calculate the scale factor:
scaleCF = srcSampleRate / dstSampleRate = 0.83333334
Let's look at dst[2]
For dst index 2 we need to take part from src[1] and part from src[2]
Let's find the nearest source indices and their contribution coefficients:
idxD = (double)idx * scaleCF = 0.83333334 * 2 = 1.6666668
a = (int)idxD = (int)(1.6666668) = 1
b = a + 1 = 2
bCF = idxD - a = 1.6666668 - 1 = 0.6666668
aCF = 1.0 - bCF = 1.0 - 0.6666668 = 0.3333332
res = (float)(aCF * Data[a] + bCF * Data[b])
= 0.3333332 * 10 + 0.6666668 * 5 = 6.6666666
So our destination value at position 2 will be 6.6666666
The algorithm can be used for downsampling as well as upsampling.
It's probably not the most efficient solution, and not the most accurate, but it's easy to implement and works pretty well.
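For reference, a minimal C sketch of the interpolation just described (the function name and the end-of-buffer clamp are my additions):

/* Linear-interpolation resampler: for each destination index, find the
 * two nearest source samples and blend them by their contribution
 * coefficients, exactly as in the worked example above. */
void resample_linear(const float *src, int srcLen, int srcRate,
                     float *dst, int dstLen, int dstRate)
{
    double scaleCF = (double)srcRate / (double)dstRate;
    for (int i = 0; i < dstLen; i++) {
        double idxD = (double)i * scaleCF;
        int a = (int)idxD;
        int b = (a + 1 < srcLen) ? a + 1 : a;   /* clamp at the end */
        double bCF = idxD - a;
        double aCF = 1.0 - bCF;
        dst[i] = (float)(aCF * src[a] + bCF * src[b]);
    }
}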
