We have 4 numbers x = [x1, x2, x3, x4].
We want to prepare a quantum state Psi that somehow encodes x.
We then want to apply the QFT to Psi to get Phi = QFT(Psi).
For QFT, the numbers on which the transform is applied are encoded as amplitudes of the basis states: |x⟩ = ∑ xₖ |k⟩.
In your case you'd use a 2-qubit state and use amplitudes x₀ ... x₃, normalized. Then you'd prepare the state x₀ |00⟩ + x₁ |10⟩ + x₂ |01⟩ + x₃ |11⟩ (assuming little-endian encoding of the basis states). If you're implementing this using some quantum programming language, there's likely to be a library to do that for you - for example, in Q# it's PrepareArbitraryState.
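As a quick worked example (numbers of my own choosing, not from the question): to encode x = [1, 2, 3, 4] you would divide by the norm sqrt(1² + 2² + 3² + 4²) = sqrt(30) and prepare (1 |00⟩ + 2 |10⟩ + 3 |01⟩ + 4 |11⟩) / sqrt(30).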
I'm trying to use the brms package to run a model where my dependent variable Y is an imperfect estimate of a latent variable (Disease = 0 absent or Disease = 1 present with probability p).
I have a data frame bd which contains a dichotomous variable Y (the result of a test, either positive 1 or negative 0, for assessing the disease status) and 3 covariates (X1 numeric, X2 and X3 as factors).
Y ~ q  # where q is the probability of the Bernoulli event; q depends on the false-positive and false-negative fractions of the test
q ~ p*Se + (1-p)*(1-Sp)  # true positives plus false positives, depending on the true probability of disease p
The model I want to finally obtain is on the form:
logit(p)~ X1 + X2 + X3 #I want to determine the impacts of my Xi's on the latent variable p
I used the brms package with a non-linear formula but am struggling with some specific problems.
bform <- bf(
Y~q, #defining my Bernoulli event
nlf(q~ Se * p+(1-p) * (1-Sp)),
nlf(p~inv_logit(X1 + X2 + X3)),
Se+Sp~1,
nl=TRUE,
family=bernoulli("identity"))
I put priors on the test sensitivity and specificity using beta priors, while leaving the default priors for the logistic regression coefficients => bprior
bprior <- set_prior("beta(4.6, 0.86)", nlpar="Se", lb=0, ub=1)+
set_prior("beta(77.55, 4.4)", nlpar="Sp", lb=0, ub=1)
My final model looks like (using the previously created list bform and bprior ):
brm(bform, data=bd, prior=bprior, init="0")
When running the model I only get posteriors for the Se and Sp parameters, but I am not able to see any coefficients associated with my covariates X1, X2, X3.
I guess my model has a mistake but I'm not able to see what's happening.
Any help would be greatly appreciated!!!
I expected the line p ~ inv_logit(X1 + X2 + X3) to produce output that would let me determine the coefficients associated with this logistic regression (accounting for the imperfect estimation of the dependent variable).
It is my first time posting, but I'll start by apologizing in advance if this question has been asked before.
I have been struggling with how to implement a 3rd-order polynomial formula in C because of either extremely small coefficient values or results larger than 32 bits (on a 16-bit MCU).
I use different values, but as an example I would like to compute "Y" in the formula:
Y = ax^3 + bx^2 + cx + d = 0.00000012*(1024^3) + 0.000034*(1024^2) + 0.056*(1024) + 789.10
I need to use a base-32 scaling to get a meaningful value for "a" (= 515).
If I multiply out 1024^3 (10-bit ADC) I get the very large value 1,073,741,824.
I tried splitting them up into terms A, B, C, and D, but I am not sure how to merge them back together because of the different resolution of each term and the limitations of my 16-bit MCU:
u16_TermA = fnBase32(0.00000012) * AdcMax * AdcMax * AdcMax;
u16_TermB = fnBase24(0.000034) * AdcMax * AdcMax;
u16_TermC = fnBase16(0.056) * AdcMax;
u16_TermD = fnBase04(789.10);
u16_Y = u16_TermA + u16_TermB + u16_TermC + u16_TermD;
/* AdcMax is a variable 0-1024; u16_Y needs to be 16bit */
I'd appreciate any help on the matter and on how best to implement this style of computations in C.
Cheers and thanks in advance!
One step toward improvement:
ax^3 + bx^2 + cx + d --> ((a*x + b)*x + c)*x + d
It is numerically more stable, tends to provide more accurate answers near the zeros of the function, and is less likely to overflow in intermediate calculations.
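As a minimal illustration of the rewrite (my own sketch, in plain floating point; an integer-scaled version is sketched under the next idea):

double poly_horner(double x)
{
    const double a = 0.00000012, b = 0.000034, c = 0.056, d = 789.10;
    /* same cubic as in the question, evaluated without forming x^3 explicitly */
    return ((a * x + b) * x + c) * x + d;
}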
2nd idea: consider scaling the coefficients if they maintain their approximate relative values as given in the question.
N = 1024; // Some power of 2
aa = a*N*N*N
bb = b*N*N
cc = c*N
y = ((aa*x/N + bb)*x/N + cc)*x/N + d
where /N is done quickly with a shift.
With a judicious selection of N (maybe 2**14, for high precision while avoiding 32-bit overflow), the entire computation might be satisfactorily done using only integer math.
As aa*x/N is just a*x*N*N, I think a scale of 2**16 works well.
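Here is a sketch of this idea in C, under the assumption of N = 2**14 (chosen so that every intermediate product stays within a signed 32-bit range for the particular coefficients in the question); the scaled constants are rounded by me and only illustrative:

#include <stdint.h>

#define SHIFT 14                              /* N = 2^14 = 16384 */

int16_t poly_eval_q14(uint16_t x)             /* x: 10-bit ADC reading, 0..1023 */
{
    const int32_t aa = 527766L;               /* round(0.00000012 * N*N*N) */
    const int32_t bb = 9127L;                 /* round(0.000034   * N*N)   */
    const int32_t cc = 918L;                  /* round(0.056      * N)     */
    const int32_t dd = 789L;                  /* round(789.10)             */

    int32_t acc = aa;                                  /* a*N^3                   */
    acc = ((acc * (int32_t)x) >> SHIFT) + bb;          /* a*x*N^2 + b*N^2         */
    acc = ((acc * (int32_t)x) >> SHIFT) + cc;          /* (a*x^2 + b*x + c)*N     */
    acc = ((acc * (int32_t)x) >> SHIFT) + dd;          /* a*x^3 + b*x^2 + c*x + d */
    return (int16_t)acc;
}

For x = 1023 this returns 1010, against a floating-point value of about 1010.4.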
Third idea:
In addition to scaling, often such cubic equations can be re-written as
// alpha is a power of 2
y = (x-root1)*(x-root2)*(x-root3)*scale/alpha
Rather than a,b,c, use the roots of the equation. This is very satisfactory if the genesis of the equation was some sort of curve fitting.
Unfortunately, OP's equation has a complex root pair.
x1 = -1885.50539
x2 = 801.08603 + i * 1686.95936
x3 = 801.08603 - i * 1686.95936
... in which case code could use
B = -(x2 + x3);
C = x2 * x3;
y = (x-x1)*(x*x + B*x + C)*scale/alpha
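As an illustration only (my own floating-point check that the factored form reproduces the original cubic; here scale/alpha is simply the leading coefficient a, and B and C are formed from the complex pair quoted above):

#include <stdio.h>

int main(void)
{
    const double a = 0.00000012, b = 0.000034, c = 0.056, d = 789.10;
    const double x1 = -1885.50539;                  /* real root        */
    const double B  = -2 * 801.08603;               /* -(x2 + x3)       */
    const double C  = 801.08603 * 801.08603
                    + 1686.95936 * 1686.95936;      /* x2 * x3 = |x2|^2 */

    for (double x = 0; x <= 1024; x += 256) {
        double expanded = ((a * x + b) * x + c) * x + d;
        double factored = a * (x - x1) * (x * x + B * x + C);
        printf("x = %4.0f  expanded = %9.3f  factored = %9.3f\n",
               x, expanded, factored);
    }
    return 0;
}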
By definition, the gate 1/sqrt(5) (I + 2iZ) should act on a qubit a|0> + b|1> to transform it into 1/sqrt(5) ((1+2i)a|0> + (1-2i)b|1>), but the transformations of each RUS step do the following:
The ancillas are in |+> state at first
Starting form: 1/sqrt(2) (a,b,a,b,a,b,a,b)
CCNOT(ancillas, input): 1/sqrt(2) (a,b,a,b,a,b,b,a)
S(input): 1/sqrt(2) (a,ib,a,ib,a,ib,b,ia)
CCNOT(ancillas, input): 1/sqrt(2) (a,ib,a,ib,a,ib,ia,b)
Z(input) : 1/sqrt(2) (a,-ib,a,-ib,a,-ib,ia,-b)
Now, measuring the ancillas in the PauliX basis is equivalent to a PauliZ measurement after applying H() to the state. I have 2 points of confusion: should I apply H x H x I or H x H x H to the combined state? Also, neither of these transformations turns out to be equivalent to the V gate defined in the first paragraph when both measurements are Zero. Where did I go wrong?
Reference: https://github.com/microsoft/Quantum/blob/master/samples/diagnostics/unit-testing/RepeatUntilSuccessCircuits.qs (1st sample code)
The transformation is correct, though it takes some time with pen and paper to verify it.
As a side note, we start with the state |+>|+>(a|0> + b|1>), which is 0.5 (a,b,a,b,a,b,a,b) in vector form (each |+> state contributes a 1/sqrt(2) to the coefficients, so the overall factor is 0.5 rather than the 1/sqrt(2) written in the question). This discrepancy will not affect our calculations of the state after the measurement, since the state will have to be renormalized anyway, but it's still worth noting.
After a sequence of CCNOT, S, CCNOT, Z we get 0.5 (a,-ib,a,-ib,a,-ib,ia,-b). Since we're measuring only the first two qubits in PauliX basis, we need to apply Hadamards only to the first two qubits, or H x H x I to the combined state.
I'll take the liberty to skip writing out the whole expression after applying Hadamards and fast-forward to the results of measurements, and here is why. We're only interested in the state of the input qubit if both measurements yielded 0, so it's sufficient to gather only the terms of the overall state which have |00> as the state of the first two qubits.
The state of the third qubit after measuring |00> on the first two qubits will be: (3+i)a |0> - (3i+1)b |1>, multiplied by some normalization coefficient c.
c = 1/sqrt(|3+i|^2 |a|^2 + |3i+1|^2 |b|^2) = 1/sqrt(10), since |3+i|^2 = |3i+1|^2 = 10 and |a|^2 + |b|^2 = 1.
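(For reference, the skipped step can be written out as follows: H x H x I sends each ancilla basis state |q1 q2> to a sum of |r1 r2> states with coefficients ±1/2, so the |00> component of the ancillas collects 1/2 times the sum of the four amplitudes that share the same input-qubit value. Starting from 0.5 (a, -ib, a, -ib, a, -ib, ia, -b), that gives 0.25 ((a + a + a + ia) |0> + (-ib - ib - ib - b) |1>) = 0.25 ((3+i) a |0> - (1+3i) b |1>) on the input qubit, before renormalization.)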
Now we need to check whether the state we got, |S_actual> = 1/sqrt(10) ((3+i)a |0> - (3i+1)b |1>)
is the same state as we'd expect to get from applying the V gate,
|S_expected> = 1/sqrt(5) ((1+2i)a |0> + (1-2i)b |1>). They do not look the same, but remember that in quantum computing the states are defined up to a global phase. Thus, if we can find a complex number p with an absolute value 1 for which |S_actual> = p * |S_expected>, the states will be effectively the same.
This translates into the following equations for p and the amplitudes of |0> and |1>: (3+i)/sqrt(2) = p (1+2i) and -(3i+1)/sqrt(2) = p (1-2i) (the factor sqrt(2) is the ratio sqrt(10)/sqrt(5) of the two normalizations). We solve both equations to get p = (1-i)/sqrt(2), which indeed has absolute value 1.
Thus, we can conclude that the state we got after all the transformations is indeed equivalent to the state we'd get by applying a V gate.
Situation:
I was trying to compare two signal vectors (y1 & y2, with time vectors x1 & x2) of different lengths (len(y1) = 1000 > len(y2) = 800). For this, I followed the main piece of advice given nearly everywhere: use interp1 or spline, in order to 'expand' y2 to the number of samples of y1 through interpolation.
So I want:
length(y1)=length(y2_interp)
However, these functions require you to give the query points 'xq' at which to interpolate, so I generate a vector with the resampled points I want to compute:
xq = x2(1):(length(x2))/length(x1):x2(length(x2));
y2_interp = interp1(x2,y2,xq,'spline'); % or spline method directly
RMS = rms(y1-y2_interp)
The problem:
When I resample the x vector into the 'xq' variable, since the ratio of the lengths is not an integer, 'y2_interp' does not come out with the same length as 'y1'. Rounding doesn't fix it, for the same reason.
I tried interpolating using the 'resample' function:
y2_interp=resample(y2,length(y1),length(y2),n);
But I get an aliasing problem, and I want to avoid filters if possible. And if n = 0 (no filter) I get some sampling problems and a larger RMS.
The two vectors are quite long, so the mismatch is just 2 or 3 points.
What I'm looking for:
I would like to find a way of interpolating one vector using the length of another one as the reference, rather than specifying the points at which to interpolate.
I hope I have explained it well... Maybe I have some misconception. More than anything, I'm curious about any possible ideas.
Thanks!!
The function you are looking for here is linspace.
To get an evenly spaced vector xq with the same endpoints as x2 but the same length as x1:
xq = linspace(x2(1),x2(end),length(x1));
It is not sufficient to interpolate y2 to get the right number of samples; the samples should also be at locations corresponding to the samples of y1.
Thus, you want to interpolate y2 at the x-coordinates where you have samples for y1, which is given by x1:
y2_interp = interp1(x2,y2,x1,'spline');
RMS = rms(y1-y2_interp)
I have a state |Q> of n qubits and want to measure qubit number i. Is there a matrix to apply to the state so that |Q> ends up as |Q'>, like the Hadamard or X gates?
Or should I apply the measurement matrix |x><x| based on the outcome of the measurement: if 0 then x = 0, and if 1 then x = 1?
Although we often represent measurement as an operation that applies to a single qubit, it doesn't act like other single-qubit operations. There are some details omitted.
Equivalence w/ CNOT
Measuring a qubit is equivalent to using it as the control for a CNOT that toggles an otherwise unused ancilla qubit. Knowing this equivalence is useful, because it lets you translate what you know about two-qubit unitary operations into facts about measurement.
Here's a circuit showing that a qubit rotated around the Y axis ends up in the same mixed state when you measure as it does when you CNOT-onto-ancilla. The green circle things are Bloch sphere representations of each qubit's marginal state:
(If you want to use this CNOT trick to compute the mixed state result, instead of a pure state, just represent the state as a density matrix then trace over the ancilla qubit after performing the CNOT.)
Basically, measurement is observationally indistinguishable from making entangled copies. The difference, in practical terms, is that measurement is thermodynamically irreversible whereas a CNOT is easy to reverse.
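A minimal worked example of that equivalence (my own, not from the original answer): take a single qubit a|0> + b|1> and an ancilla prepared in |0>. A CNOT controlled by the qubit gives a|00> + b|11>; tracing out the ancilla leaves the density matrix |a|^2 |0><0| + |b|^2 |1><1|, which is exactly the mixed state you get by measuring the qubit in the computational basis and ignoring the result.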
Expected Outcomes
If you ignore the measurement result, then measurement acts like a projection of the density matrix. For example, in the animation above, notice that measurement causes the state to snap to (be projected onto) the Z axis of the Bloch sphere.
If you have access to the measurement result, then the measurement not only projects but also informs you of the new state of the system. In the single-qubit-in-the-computational-basis case, this forces the qubit to be all-ON or all-OFF due to the quantization of spin.
Representation
Measurements can be represented in various ways.
A very common representation is "projective measurements". Projective measurements are represented by a Hermitian matrix (called the "observable"). The eigenvalues of the matrix are the possible results. You get the probability of each result by projecting your state's density matrix into each eigenspace and tracing.
A more flexible and arguably better representation is positive-operator valued measures (POVM measurements). POVMs are represented by a set of squared Hermitian matrices, with the condition that the sum of the set's matrices must be the identity matrix. The probability of the result corresponding to the squared matrix F from the set is the trace of the state's density matrix times F.
Translating a projective measurement into a circuit that performs that measurement (using only computational basis measurements) is straightforward, because the necessary basis change operation is just a unitary matrix whose rows are the eigenvectors of the observable. Translating POVM measurements is trickier, and requires introducing ancilla bits.
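As a concrete instance (my example, not from the answer): to measure the observable X, whose eigenvectors are |+> = (|0> + |1>)/sqrt(2) and |-> = (|0> - |1>)/sqrt(2) with eigenvalues +1 and -1, the basis-change unitary is just the Hadamard H, whose rows are exactly those eigenvectors; so applying H and then measuring in the computational basis implements the X measurement.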
For more information, see this answer on the physics stackexchange.
The measurement works as follows:
if you want to measure qubit number i (indexing from 1 to n), then the outcome of measuring qubit i is 0 or 1, chosen at random with probabilities determined by the state:
P_i(0) = <Q| M'0 M0 |Q>
P_i(1) = <Q| M'1 M1 |Q>
where P_i(0) is the probability of measuring qubit i to be 0, and P_i(1) the probability of it being 1. M0 is the measurement matrix for outcome 0 and M1 for outcome 1; M'0 and M'1 are their Hermitian conjugates (adjoints).
If you want to measure only the i-th qubit of an n-qubit quantum system in state |Q>, then the operator you would apply is:
I x I x I x ... x I x Mb x I x ... x I    (n Kronecker factors, with Mb in the i-th position)
where I is the identity matrix, Mb is the measurement matrix corresponding to the measured value b of the i-th qubit (either b = 0 or b = 1), and x denotes the Kronecker product.
Summary:
pre-measurement state |Q>
measurement of qubit i yields b (b = 1 or 0, randomly selected based on the probability of each)
if b is 0: Mb = M0 = |0><0|
if b is 1: Mb = M1 = |1><1|
M = I x I x I x ... x I x Mb x I x ... x I
post-measurement state |Q'> = M|Q>, renormalized (divided by its norm sqrt(<Q|M'M|Q>))
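A small worked example (mine, for illustration): take n = 2 and measure qubit i = 2 of |Q> = a|00> + b|01> + c|10> + d|11>, where the second position in each ket is qubit 2. If the measured value of qubit 2 is 0, then Mb = M0 = |0><0| and M = I x M0, so M|Q> = a|00> + c|10>; the probability of that outcome is P_2(0) = <Q| M'M |Q> = |a|^2 + |c|^2, and the renormalized post-measurement state is |Q'> = (a|00> + c|10>) / sqrt(|a|^2 + |c|^2).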