OR gate implementation using Toffoli gate - quantum-computing

Can an OR gate be implemented using not more than 2 Toffoli gates?
I have already implemented it using 3 Toffoli gates but couldn't find any way to implement it using 2 Toffoli gates.

I'm assuming you mean OR gate on two qubits, which should have the following effect:
|x₀⟩⊗|x₁⟩⊗|y⟩ → |x₀⟩⊗|x₁⟩⊗|y ⊕ (x₀ ∨ x₁)⟩
You can do it with just one Toffoli gate, using De Morgan's law x₀ ∨ x₁ = ¬ (¬x₀ ∧ ¬x₁), as follows:
Apply an X gate to each of the input qubits:
|x₀⟩⊗|x₁⟩⊗|y⟩ → |¬x₀⟩⊗|¬x₁⟩⊗|y⟩
Apply a Toffoli gate with two input qubits as controls and the output qubit as target:
|¬x₀⟩⊗|¬x₁⟩⊗|y⟩ → |¬x₀⟩⊗|¬x₁⟩⊗|y ⊕ (¬x₀ ∧ ¬x₁)⟩
Apply an X gate to each of the input qubits again to return them to their initial state:
|¬x₀⟩⊗|¬x₁⟩⊗|y ⊕ (¬x₀ ∧ ¬x₁)⟩ → |x₀⟩⊗|x₁⟩⊗|y ⊕ (¬x₀ ∧ ¬x₁)⟩
Apply an X gate to the output qubit to negate the result:
|x₀⟩⊗|x₁⟩⊗|y ⊕ (¬x₀ ∧ ¬x₁)⟩ → |x₀⟩⊗|x₁⟩⊗|y ⊕ ¬(¬x₀ ∧ ¬x₁)⟩ = |x₀⟩⊗|x₁⟩⊗|y ⊕ (x₀ ∨ x₁)⟩
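Since X and Toffoli both permute computational basis states, the construction above can be sanity-checked entirely classically. A minimal sketch (the gate helpers below are ad hoc, not from any quantum library):

```python
# Classical simulation of the 1-Toffoli OR construction: X and Toffoli
# map basis states to basis states, so plain bits suffice here.

def toffoli(c0, c1, t):
    """Toffoli: flip target t iff both controls are 1."""
    return c0, c1, t ^ (c0 & c1)

def x_gate(b):
    """Pauli X on a basis state: bit flip."""
    return b ^ 1

def or_oracle(x0, x1, y):
    x0, x1 = x_gate(x0), x_gate(x1)   # step 1: negate both inputs
    x0, x1, y = toffoli(x0, x1, y)    # step 2: y ^= (not x0) and (not x1)
    x0, x1 = x_gate(x0), x_gate(x1)   # step 3: restore the inputs
    y = x_gate(y)                     # step 4: negate the result (De Morgan)
    return x0, x1, y

# Check |x0 x1 y> -> |x0 x1 (y xor (x0 or x1))> on all 8 basis states.
for x0 in (0, 1):
    for x1 in (0, 1):
        for y in (0, 1):
            assert or_oracle(x0, x1, y) == (x0, x1, y ^ (x0 | x1))
```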

I think the following answers your question as originally intended, using 2 Toffoli gates with no other gates used.
Let a Toffoli gate be represented as Toffoli(x, y, z), where x and y are the 2 control bits, and z is the third input bit.
OR(x,y) = Toffoli(1,y,Toffoli(x,y,x))
The inner Toffoli gate maps its target to |x⊕(x ∧ y)⟩.
The outer Toffoli gate, whose first control is fixed to 1 (so it acts as a CNOT with control y), produces |x⊕y⊕(x ∧ y)⟩.
You can check the truth table for this expression, you will see that it corresponds to an OR gate.
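The truth table is quick to check mechanically. Note the inner Toffoli reuses x as both a control and the target, so this reads as a boolean composition rather than a circuit on three distinct qubits; a sketch under that classical reading:

```python
# Truth-table check of OR(x, y) = Toffoli(1, y, Toffoli(x, y, x)),
# reading Toffoli(a, b, c) classically as c XOR (a AND b).

def toffoli(a, b, c):
    return c ^ (a & b)

for x in (0, 1):
    for y in (0, 1):
        inner = toffoli(x, y, x)      # x ^ (x & y)
        outer = toffoli(1, y, inner)  # x ^ y ^ (x & y)
        assert outer == (x | y)
```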


Is A → B the same as B → A when speaking of functional dependencies?

This might be very simple, but I just had to check with you guys.
When it comes to databases, does the arrow in the literature imply the converse as well?
Meaning, is A → B considered the SAME as B → A, in particular when it comes to databases and functional dependencies?
Please read the reference(s) you were given for FDs (functional dependencies).
A FD is an expression of the form "A → B" for sets of attributes A & B. So if A and B are different, A → B is a different FD than B → A.
For a relation value or variable R, "A → B holds in R" and "A → B in R" say that if two R tuples have the same subtuple for A then they have the same subtuple for B.
Is A → B in R equivalent to B → A in R? If A and B are the same set, then yes. But what if they aren't?
X Y
a 1
b 1
{X} → {Y} holds in that relation value. {X} <> {Y}. Does {Y} → {X} also hold?
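To see this concretely, here is a small hypothetical helper (not from any library) that tests whether an FD holds in a relation given as a list of rows:

```python
# Check whether the FD lhs -> rhs holds in a relation (list of dict rows):
# rows agreeing on lhs must agree on rhs.

def fd_holds(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

r = [{"X": "a", "Y": 1}, {"X": "b", "Y": 1}]
print(fd_holds(r, ["X"], ["Y"]))  # True:  {X} -> {Y} holds
print(fd_holds(r, ["Y"], ["X"]))  # False: {Y} -> {X} does not
```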

SQL Functional dependencies

Suppose X1 → Y1 and X2 → Y2
Is it true that X1 ∩ X2 → Y1 ∩ Y2?
How about X1 ∪ X2 → Y1 ∩ Y2?
I've been thinking about this for a couple of hours and am really stuck. Maybe the second one is true because anything both in Y1 and Y2 will be dependent on at least one of X1 or X2.
The first formula is obviously false. A very simple example to show this is:
R(A,B,C,D,E,F)
A B → C D
B E → D F
from this one cannot infer that B → D in any way, and in fact the following instance satisfies the two above dependencies, but not the third one (for the same value of B, there are two different values of D):
A B C D E F
----------------------
a1 b1 c1 d1 e1 f1
a2 b1 c1 d2 e2 f1
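This style of counterexample can be machine-checked with a small ad-hoc FD tester (not a library function). Note that the second row must use a fresh E-value (e2 here) so that BE → DF holds while B → D still fails:

```python
# Verify that AB -> CD and BE -> DF hold in the instance, but B -> D fails.

def fd_holds(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

r = [
    {"A": "a1", "B": "b1", "C": "c1", "D": "d1", "E": "e1", "F": "f1"},
    {"A": "a2", "B": "b1", "C": "c1", "D": "d2", "E": "e2", "F": "f1"},
]
assert fd_holds(r, ["A", "B"], ["C", "D"])   # given FD 1 holds
assert fd_holds(r, ["B", "E"], ["D", "F"])   # given FD 2 holds
assert not fd_holds(r, ["B"], ["D"])         # but B -> D fails
```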
The second formula, on the other hand, is true, and this can be proved using Armstrong's axioms: by reflexivity X1 ∪ X2 → X1, so by transitivity X1 ∪ X2 → Y1, and similarly X1 ∪ X2 → Y2; the union rule then gives X1 ∪ X2 → Y1 ∪ Y2, and decomposition yields X1 ∪ X2 → Y1 ∩ Y2.

Formally and Informally describe the language of this grammar

I have a question I would like some help with:
Formally and informally describe the language of the following grammar G = (Σ, N, S, P):
Σ = {a,b,c}
N = {S,T,X}
S = S
P = {
S->aTXc,
S->bTc,
T->aTX,
T->bT,
TXX->T,
Tc->empty string,
TXc->a
}
Moreover, briefly and informally explain how this grammar generates its language.
Hint: Use |w|ₓ notation (the number of occurrences of the symbol x in w) to describe the language of this grammar.
I believe the language of the grammar
S → bTc | aTXc
T → bT | aTX
TXX → T
Tc → λ
TXc → a
is {w ∈ (a|b)⁺ : |w|ₐ is even}, the nonempty strings over {a, b} containing an even number of a's. First, consider derivations from T using the T-productions:
T ⇒* bⁿT (T → bT)
T ⇒* aⁿTXⁿ (T → aTX)
where n > 0. Since T → bT and T → aTX can be applied in arbitrary order, it follows that
T ⇒* uTXᵏ, where k = |u|ₐ
and u has the form (a|b)*. Next, the production TXX → T removes X's in pairs, so
T ⇒* uTXᵏ ⇒* uT if k is even, and T ⇒* uTXᵏ ⇒* uTX if k is odd.
Putting this together with the S-productions: if |u|ₐ is even,
S ⇒ bTc ⇒* buTXᵏc ⇒* buTc ⇒ bu
S ⇒ aTXc ⇒* auTXᵏXc ⇒* auTXc ⇒ aua
and if |u|ₐ is odd,
S ⇒ bTc ⇒* buTXᵏc ⇒* buTXc ⇒ bua
S ⇒ aTXc ⇒* auTXᵏXc ⇒* auTXXc ⇒ auTc ⇒ au
where u has the form (a|b)*. Counting a's in each case: bu has |u|ₐ (even), aua has |u|ₐ + 2 (even), bua has |u|ₐ + 1 (even, since |u|ₐ is odd), and au has |u|ₐ + 1 (even). Conversely, every nonempty word w with an even number of a's falls into one of these four cases (split on its first and last letters), so all such words are derivable. Thus the derivable strings are exactly those in (a|b)⁺ with an even number of a's, which also explains the hint to describe the language using |w|ₓ notation.
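One way to settle the doubt at the end is a brute-force sketch: exhaustively apply the productions as string rewriting up to a length bound and compare the derived words against the candidate language. The bound of 2n+2 on sentential-form length is an assumption justified by the shape uTXᵏc of reachable forms:

```python
# Enumerate the grammar's language by exhaustive string rewriting, then
# compare with { w in (a|b)+ : |w|_a even } up to a word-length bound.
from itertools import product

RULES = [("S", "aTXc"), ("S", "bTc"), ("T", "aTX"), ("T", "bT"),
         ("TXX", "T"), ("Tc", ""), ("TXc", "a")]
WORD_LIMIT = 6
FORM_LIMIT = 2 * WORD_LIMIT + 2  # forms u T X^k c can exceed the word length

def derive():
    seen, frontier, words = {"S"}, ["S"], set()
    while frontier:
        form = frontier.pop()
        for lhs, rhs in RULES:
            start = 0
            while (i := form.find(lhs, start)) != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                start = i + 1
                if len(new) > FORM_LIMIT or new in seen:
                    continue
                seen.add(new)
                if new and all(ch in "ab" for ch in new):
                    words.add(new)      # fully terminal word over {a, b}
                else:
                    frontier.append(new)
    return {w for w in words if len(w) <= WORD_LIMIT}

expected = {"".join(p) for n in range(1, WORD_LIMIT + 1)
            for p in product("ab", repeat=n) if p.count("a") % 2 == 0}
assert derive() == expected
```

Within the bound, the derived words coincide with the nonempty strings over {a, b} having an even number of a's.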

A quantum algorithm with high probability on a 4 to 1 function

Let f : {0, 1}ⁿ → {0, 1}ⁿ be a 4-to-1 function, such that there exist
distinct and non-zero a, b ∈ {0, 1}ⁿ such that for all x ∈ {0, 1}ⁿ:
f(x) = f(x ⊕ a) = f(x ⊕ b) = f(x ⊕ a ⊕ b).
Note that ⊕ is a bit-wise xor, and that for all y ∉ {x, x ⊕ a, x ⊕ b, x ⊕ a ⊕ b}, f(y) ≠ f(x). Find
a quantum algorithm that with high probability reports the set {a, b, a ⊕ b}.
Well, choosing x to be 0 gives f(0) = f(a) = f(b) = f(a ⊕ b), and there are no other inputs that match f(0). So we're just looking for v satisfying v ≠ 0 and f(v) = f(0). Make a circuit that takes a v and inverts its phase when it satisfies those conditions, and does nothing otherwise. Then apply Grover's algorithm. The running time would be O(√N) to find the first value; then you repeat with that value conditioned out as well.
On the other hand, just classically sampling at random until you find a collision also has expected time O(√N), so there's probably something even more clever you can do.
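The classical baseline can be sketched as follows, with a toy 4-to-1 f built from hypothetical hidden strings a and b (all concrete values here are illustrative):

```python
# Classical collision sampling: expected O(sqrt(N)) evaluations of f to
# recover one element of {a, b, a^b}, since f(x1) = f(x2) with x1 != x2
# forces x1 ^ x2 to lie in {a, b, a^b}.
import random

n = 8
N = 1 << n
a, b = 0b00000101, 0b00110000  # hidden, distinct, non-zero (toy values)

def coset(x):
    # f is constant exactly on the cosets {x, x^a, x^b, x^a^b}
    return min(x, x ^ a, x ^ b, x ^ a ^ b)

f = {x: coset(x) for x in range(N)}  # 4-to-1 by construction

def find_hidden():
    seen = {}
    while True:
        x = random.randrange(N)
        y = f[x]
        if y in seen and seen[y] != x:
            return x ^ seen[y]  # a nonzero element of {a, b, a^b}
        seen[y] = x

v = find_hidden()
assert v in {a, b, a ^ b}
```

Once two distinct elements of {a, b, a ⊕ b} are found, the third is their XOR, so the whole set follows.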

Armstrong's Axioms Proof

I am having issues proving functional dependencies with Armstrong's Axioms. This one I'm struggling with. Let R(A,B,C,D,E) be a relation schema and F = {A->CD, C->E, B->D}.
1. Prove: BC->DE
What I have:
1. Given B->D
2. Augment C on 1: BC->DC
3. Decomposition on 2: BC->D, BC->C
4. Transitivity on BC->C: BC->E
5. Union on BC->D and 4: BC->DE
Unsure if this is a proper solution.
2. Also prove: AC->BD. I don't think this can be proven.
Please help!
Your solution is correct, apart from some apparent misspelling:
1. Given B->D, C->E
2. Augment C on 1: BC->DC
3. Decomposition on 2: BC->C (3.1), BC->D (3.2)
4. Transitivity on 1 and 3.1 (BC->C, C->E): BC->E
5. Union on 3.2 and 4: BC->DE
Alternatively:
1. Given B->D (1.1), C->E (1.2)
2. augment(1.1, C): BC->DC
3. augment(1.2, D): CD->ED
4. trans(2, 3): BC->DE (note: BC->DC <=> BC->CD and CD->ED <=> CD->DE, since attribute sets are unordered)
AC->BD cannot be proven in general: inspecting the Armstrong axioms, you notice that for some attribute set X to appear on the rhs of a derived fd, it must occur on the rhs of one of the original fds, except when
1. X is a subset of some X' on the lhs of an original fd, or
2. the fd is derived by augmentation with X.
Case 1 constitutes a constraint never mentioned. If case 2 applies, X appears on the lhs of the derived fd too, and the only way to eliminate it there is by transitivity, which again requires X to appear on the rhs of one of the original fds. Take B as X to see that AC->BD is unprovable.
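Both claims can be double-checked mechanically with the standard attribute-closure algorithm (a sketch; `closure` is an ad-hoc helper, not a library call):

```python
# Attribute closure: BC -> DE follows from F = {A->CD, C->E, B->D},
# while AC -> BD does not (B never becomes derivable).

F = [({"A"}, {"C", "D"}), ({"C"}, {"E"}), ({"B"}, {"D"})]

def closure(attrs, fds):
    """Smallest superset of attrs closed under the given FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

assert {"D", "E"} <= closure({"B", "C"}, F)      # BC -> DE is derivable
assert not {"B", "D"} <= closure({"A", "C"}, F)  # AC -> BD is not: B missing
```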
Abbreviations:
fd(s): functional dependency(/-cies)
lhs: left-hand side (of an equation / a derivation)
rhs: right-hand side
