I'm working through problems in my textbook to get ready for my test, and I'm having a pretty tough time figuring out this question.
Consider a relation
S(B,O,I,S,Q,D)
FDs: S->D, I->B, IS->Q, B->O
I need to do the BCNF decomposition, and then determine all of the keys of S.
I did the BCNF decomposition and determined that IS is a superkey, but I can't figure out the rest of the decomposition, or how to find the other keys.
I also need to find a minimal basis for the given FDs, and use the 3NF synthesis algorithm to find a lossless-join, dependency-preserving decomposition of S into 3NF.
Any help is much appreciated, I am beyond confused here and am really struggling with this problem.
{I S} is the only key, and this is easy to show. First, {I S} is a superkey: its closure contains every attribute of the relation. Second, the attributes I and S appear only in the left parts of the functional dependencies, so they must belong to any key. And since together they already form a (super)key, no other key exists.
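A worked closure computation makes the first point concrete: start from {I S}; S→D adds D, I→B adds B, IS→Q adds Q, and B→O adds O, so {I S}+ = {B O I S Q D}, the full set of attributes.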
The functional dependencies are already a minimal cover (or minimal basis), since: a) every right part has only one attribute; b) in the dependency IS→Q no attribute of the left part is superfluous; and c) no dependency is redundant.
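These checks can all be done with closures: {I}+ = {I B O} and {S}+ = {S D}, and neither contains Q, so neither attribute can be dropped from the left part of IS→Q; and for each dependency, recomputing the closure of its left part with that dependency removed never produces its right part (e.g. without I→B, {I}+ = {I}), so none is redundant.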
So the 3NF decomposition is:
R1 < (B O),
{ B → O } >
R2 < (B I),
{ I → B } >
R3 < (I S Q),
{ I S → Q } >
R4 < (D S),
{ S → D } >
which is equal to the result of the decomposition in BCNF.
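Note also that R3 = (I S Q) contains the key {I S}, so the 3NF synthesis algorithm does not need to add an extra relation containing a key; this is what guarantees the decomposition is lossless-join, and it is dependency-preserving by construction.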
I'm writing a fixpoint that requires an integer to be incremented "towards" zero at every iteration. This is too complicated for Coq to recognize as a decreasing argument automatically, and I'm trying to prove that my fixpoint will terminate.
I have been copying (what I believe is) an example of a well-foundedness proof for a step function on Z from the standard library. (Here)
Require Import ZArith.Zwf.
Section wf_proof_wf_inc.
Variable c : Z.
Let Z_increment (z:Z) := (z + ((Z.sgn c) * (-1)))%Z.
Lemma Zwf_wf_inc : well_founded (Zwf c).
Proof.
unfold well_founded.
intros a.
Abort. (* proof unfinished; this is where I'm stuck *)
End wf_proof_wf_inc.
which creates the following context:
c : Z
Z_increment := fun z : Z => (z + Z.sgn c * -1)%Z : Z -> Z
a : Z
============================
Acc (Zwf c) a
My question is what does this goal actually mean?
I thought that the goal I'd have to prove for this would at least involve the step function whose termination I want to show, Z_increment.
The most useful explanation I have found is this, but I've never worked with the list type that it uses, and it doesn't explain what is meant by terms like "accessible".
Basically, you don't need a well-founded proof; you just need to prove that your function decreases the (natural number) measure abs(z). More concretely, you can implement abs (z:Z) : nat := z_to_nat (z * Z.sgn z) (with some appropriate conversion z_to_nat to nat) and then use this as a measure with Function, something like Function foo z {measure abs z} := ....
The well-founded business is for showing relations are well-founded: the idea is that you can prove your function terminates by showing it "decreases" some well-founded relation R (think of it as <); that is, the definition of f x makes recursive subcalls f y only when R y x. For this to work, R has to be well-founded, which intuitively means it has no infinite descending chains. CPDT's general recursion chapter has a really good explanation of how this really works.
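Since "accessible" is the term you asked about: it is the standard library's Acc predicate, and well_founded is defined from it. You can inspect the actual definitions yourself (the comments show what Coq prints):
Print Acc.
(* Inductive Acc (A : Type) (R : A -> A -> Prop) (x : A) : Prop :=
       Acc_intro : (forall y : A, R y x -> Acc R y) -> Acc R x. *)
Print well_founded.
(* well_founded = fun (A : Type) (R : A -> A -> Prop) => forall a : A, Acc R a *)
In words: a is accessible for R when every y with R y a is accessible, and R is well-founded when every element is accessible. That is why unfold well_founded; intros a leaves the goal Acc (Zwf c) a, with no step function in sight: well-foundedness is a property of the relation Zwf c alone, not of any particular function.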
How does this relate to what you're doing? The standard library proves that, for every lower bound c, x < y is a well-founded relation on Z if additionally it's only applied to y >= c. I don't think this applies to you: instead you move towards zero, so you can just decrease abs z with the usual < relation on nat. The standard library already has a proof that this relation is well-founded, and that's what Function ... {measure ...} uses.
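If you do go the measure route, the one fact you need is that stepping one unit towards zero strictly decreases the absolute value, viewed as a nat. A minimal sketch of that lemma (the name abs_step_decreases is mine; it assumes a Coq recent enough that lia's zify preprocessing understands Z.abs and Z.sgn, otherwise finish with a case analysis on the sign of z):
Require Import ZArith Lia.
Open Scope Z_scope.
(* stepping z one unit towards zero strictly shrinks |z| as a natural number *)
Lemma abs_step_decreases : forall z : Z, z <> 0 ->
  (Z.to_nat (Z.abs (z - Z.sgn z)) < Z.to_nat (Z.abs z))%nat.
Proof.
  intros z Hz.
  apply Z2Nat.inj_lt; lia.  (* leaves 0 <= |z - sgn z|, 0 <= |z|, and |z - sgn z| < |z| *)
Qed.
This is essentially the proof obligation that Function ... {measure ...} will present to you for a step function like z - Z.sgn z.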
This investigation began with a test question, where I asserted
R ∩ S = R ∩ (R ∩ S)
In my research, I have not been able to find much information about the properties of intersection within relational algebra.
If this was set theory, I would assert
R ∩ (R ∩ S)
= (R ∩ R) ∩ S    (associativity)
= R ∩ S          (idempotence)
With relational algebra, we are instead operating on bags. I believe I can take these same steps with bags (idempotence seems trivial, and associativity is suggested on pg. 3 of http://comet.lehman.cuny.edu/stjohn/teaching/db/ullmanSlidesF00/slides7.pdf), but I cannot quite get to a formal proof.
Could anyone assist me in asserting (or disproving, by counterexample or otherwise) associativity and idempotence of the intersection in relational algebra?
Thank you very much.
The Relational Model is formally defined over sets, so the Relational Algebra is defined only over sets. But since relational databases extended the model to include multisets, or bags, the Relational Algebra has been similarly extended over multisets, and almost all modern books on databases describe this extension.
In particular, using the notation Opb to represent the variant of operator Op over multisets, we can define the operators ∩b, ∪b and −b as follows (a worked example follows the definitions):
R ∩b S: if an element t appears n times in R and m times in S, then it appears min(n,m) times in R ∩b S;
R ∪b S: if an element t appears n times in R and m times in S, then it appears n+m times in R ∪b S;
R −b S: if an element t appears n times in R and m times in S, then it appears max(0,n-m) times in R −b S.
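For instance, if R = {a, a, a, b} and S = {a, b, b}, then a appears n = 3 times in R and m = 1 time in S, while b appears n = 1 and m = 2 times, so:
R ∩b S = {a, b} (min(3,1) copies of a, min(1,2) copies of b);
R ∪b S = {a, a, a, a, b, b, b} (3+1 copies of a, 1+2 copies of b);
R −b S = {a, a} (max(0,3-1) copies of a, max(0,1-2) = 0 copies of b).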
You want to show that:
R ∩b S = R ∩b (R ∩b S)
As your correct derivation shows, this can be proved if we can prove the Associativity and Idempotence properties for the operator ∩b.
Idempotence:
R ∩b R = R
In this case, since each element appears the same number of times in both operands, that is m = n, it appears min(n, n) = n times in the result, i.e. the same number of times as in R.
Associativity:
(R ∩b S) ∩b T = R ∩b (S ∩b T)
Suppose that a certain element t appears n times in R, m times in S, and k times in T. Then in:
(R ∩b S) ∩b T
it will appear min(min(n,m), k) times in the result, and this is equal to min(n,m,k). On the other hand, in:
R ∩b (S ∩b T)
it will appear min(n, min(m,k)) times in the result, but this again is equal to min(n,m,k).
So the element appears the same number of times in both results, and therefore the two results are equal.
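In fact, with the multiplicity characterization in hand, your identity also falls out directly: an element appearing n times in R and m times in S appears min(n, min(n,m)) times in R ∩b (R ∩b S), and min(n, min(n,m)) = min(min(n,n), m) = min(n,m), which is exactly its multiplicity in R ∩b S.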
As philipxy has already commented, relations are sets, and therefore set theory applies. The link below gives a list of transformation equivalences, and the associative properties of both union and intersection are listed there.
http://www.postgresql.org/message-id/attachment/32495/equivalenceRules.pdf
The question might be raised about how many of the relational algebra theorems are provably true in the realm of SQL databases, rather than relational algebra. This is problematic, because any given implementation might have bugs, and there could even be flaws in the SQL model. I have tended to blindly assume that the results of relational algebra carry over into the practical world, but some deep thinkers sound a note of caution in that regard.
What is the difference between non-trivial functional dependencies and completely non-trivial functional dependencies?
In my searching I found a confusing distinction between the two. I visited http://www.tutorialspoint.com/dbms/database_normalization.htm
According to this,
Non-trivial: If an FD X → Y holds where Y is not a subset of X, then it is called a non-trivial FD.
Completely non-trivial: If an FD X → Y holds where X ∩ Y = ∅, then it is called a completely non-trivial FD.
If Y is not a subset of X, and X ∩ Y = ∅: don't these point towards the same thing?
Example: X = {1,2,3,4}, Y = {5,6}
Here we see that Y is not a subset of X, and also Y ∩ X = ∅.
So my question is: what is the actual difference between a non-trivial dependency and a completely non-trivial dependency? If there is one, what exactly is it? I googled it, but did not find a satisfactory answer.
Suppose the following case:
P = { R1, R2 } and Q = { R1, R3 }
Then,
P → Q is non-trivial.
P → Q isn't completely non-trivial (because the intersection of P and Q is { R1 }).
Don't think of X and Y as abstract sets; they are sets of attributes of a table. For example, AB → BC is a non-trivial FD: B is common to both sides, which can never happen in a completely non-trivial FD. By contrast, AB → CD is completely non-trivial, since the intersection of AB and CD is empty. This is the basic difference between non-trivial and completely non-trivial.
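To summarize with a single left side X = {A,B}: X → {B} is trivial (Y ⊆ X); X → {B,C} is non-trivial but not completely non-trivial (Y is not a subset of X, yet X ∩ Y = {B}); X → {C,D} is completely non-trivial (X ∩ Y = ∅). So, for nonempty Y, every completely non-trivial FD is also non-trivial, but not the other way around.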
I want to know: given a regular language L built using only the Kleene star operator (e.g. (ab)*), is it possible for L to be generated by the concatenation of two non-regular languages? I am trying to prove that L can only be generated by the concatenation of two regular languages.
Thanks.
This statement is false. Consider these two languages over Σ = {a}:
L1 = { a^n | n is a power of two } ∪ { ε }
L2 = { a^n | n is not a power of two } ∪ { ε }
Neither of these languages is regular (the first can be proven nonregular using the Myhill-Nerode theorem, and the second is closely related to the complement of L1 and can also be proven nonregular).
However, I'm going to claim that L1L2 = a*. First, note that any string in the concatenation L1L2 has the form a^n and therefore is an element of a*. Next, take any string in a*; say it is a^n. If n is a power of two, then it can be formed as the concatenation of a^n from L1 and ε from L2. Otherwise, n isn't a power of two, and it can be formed as the concatenation of ε from L1 and a^n from L2. Therefore, L1L2 = a*, so the statement you're trying to prove is false.
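(Note the role of ε here: it must belong to both languages, since ε ∈ a* can only be formed as ε·ε, and it is what lets each a^n be absorbed entirely by one factor or the other.)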
Hope this helps!
This should be an easy question. I'm new to Coq.
I want to define the exclusive or in Coq (which to the best of my knowledge is not predefined). The important part is to allow for multiple propositions (e.g. Xor A B C D).
I also need the two properties:
(Xor A1 A2 ... An)/\~A1 -> Xor A2... An
(Xor A1 A2 ... An)/\A1 -> ~A2/\.../\~An
I'm currently having trouble defining the function for an undefined number of variables. I tried to define it by hand for two, three, four and five variables (that's how many I need). But then proving the properties is a pain and seems very inefficient.
Given your second property, I assume that your definition of exclusive or at higher arities is “exactly one of these propositions is true” (and not “an odd number of these propositions is true” or “at least one of these propositions is true and at least one is false”, which are other possible generalizations).
This exclusive or is not obtained by nesting the binary one: xor(A1, xor(A2, …)) says that an odd number of the propositions are true, not that exactly one is. So you can't define higher-arity xor as xor(A1,…,An)=xor(A1,xor(A2,…)). You need a global definition, and this means that the type constructor must take a list of arguments (or some other data structure, but a list is the most obvious choice).
Inductive xor : list Prop -> Prop := …
You now have two reasonable choices: build your definition of xor inductively from first principles, or invoke a list predicate. The list predicate would be “there is a unique element in the list matching this predicate”. Since the standard list library does not define this predicate, and defining it is slightly harder than defining xor, we'll build xor inductively.
The argument is a list, so let's break down the cases:
xor of an empty list is always false;
xor of the list (cons A L) is true iff either of these two conditions is met:
A is true and none of the elements of L are true;
A is false and exactly one of the elements of L is true.
This means we need to define an auxiliary predicate on lists of propositions, nand, characterizing the lists of false propositions. There are many possibilities here: fold the /\ operator, induct by hand, or call a list predicate (again, not in the standard list library). I'll induct by hand, but folding /\ is another reasonable choice.
Require Import List.
Inductive nand : list Prop -> Prop :=
| nand_nil : nand nil
| nand_cons : forall (A:Prop) L, ~A -> nand L -> nand (A::L).
Inductive xor : list Prop -> Prop :=
| xor_t : forall (A:Prop) L, A -> nand L -> xor (A::L)
| xor_f : forall (A:Prop) L, ~A -> xor L -> xor (A::L).
Hint Constructors nand xor : core.
The properties you want to prove are simple corollaries of inversion properties: given a constructed type, break down the possibilities (if you have a xor, it's either a xor_t or a xor_f). Here's a manual proof of the first; the second is very similar.
Lemma xor_tail : forall A L, xor (A::L) -> ~A -> xor L.
Proof.
intros. inversion_clear H.
contradiction.
assumption.
Qed.
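The second property is the same pattern; a sketch (the name xor_head is mine):
Lemma xor_head : forall A L, xor (A::L) -> A -> nand L.
Proof.
  intros A L H HA. inversion_clear H.
  assumption.
  contradiction.
Qed.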
Another set of properties you're likely to want is the equivalences between nand and the built-in conjunction. As an example, here's a proof that nand (A::nil) is equivalent to ~A. Proving that nand (A::B::nil) is equivalent to ~A/\~B and so on are merely more of the same. In the forward direction, this is once more an inversion property (analyse the possible constructors of the nand type). In the backward direction, this is a simple application of the constructors.
Lemma nand1 : forall A, nand (A::nil) <-> ~A.
Proof.
split; intros.
inversion_clear H. assumption.
constructor. assumption. constructor.
Qed.
You're also likely to need substitution and rearrangement properties at some point. Here are a few key lemmas that you may want to prove (these shouldn't be very difficult, just induct on the right stuff):
forall A1 A2 L, (A1<->A2) -> (xor (A1::L) <-> xor (A2::L))
forall K L1 L2, (xor L1 <-> xor L2) -> (xor (K++L1) <-> xor (K++L2))
forall K A B L, xor (K++A::B::L) <-> xor (K++B::A::L)
forall K L M N, xor (K++L++M++N) <-> xor (K++M++L++N)
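For instance, here is one way to discharge the first of these (the name xor_head_iff is mine; tauto handles the propositional side conditions, treating nand L and xor L as atoms):
Lemma xor_head_iff : forall (A1 A2 : Prop) L,
  (A1 <-> A2) -> (xor (A1::L) <-> xor (A2::L)).
Proof.
  intros A1 A2 L HA.
  split; intro H; inversion_clear H.
  apply xor_t; tauto.
  apply xor_f; tauto.
  apply xor_t; tauto.
  apply xor_f; tauto.
Qed.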
Well, I suggest you start with Xor for 2 arguments and prove its properties.
Then, if you want to generalize it, you can define Xor taking a list of arguments; you should be able to define it and prove its properties using your 2-argument Xor.
I could give some more details, but I think it's more fun to do it on your own. Let me know how it goes :).