I was going through the conditions of minimum cover of a set of function dependencies.
Here, it is mentioned that the right-hand side can have only a single attribute, so {A1A2 → B1B2} is not allowed; it should be split into {A1A2 → B1, A1A2 → B2}.
But in DBMS by Korth, the following condition is there
Each left side of a functional dependency in Fc is unique. That is, there are no
two dependencies A1 → B1 and A2 → B2 in Fc such that A1 = A2.
So, according to this, {A1A2 → B1, A1A2 → B2} is not allowed; the dependencies should be combined as {A1A2 → B1B2} to avoid repeating the left-hand side.
Please clarify which is correct.
This seems to me to be a difference in notation, and nothing more. These two sets of FDs are equivalent.
{A1A2 → B1}
{A1A2 → B2}
{A1A2 → B1B2}
Most of the automated tools I've used express the minimum cover as you see in the first set. Your text seems to prefer the second set.
The two different expressions have no effect on reducibility or coverage or closure, which are the real issues in computing a minimum cover. You could argue that the first version, which has no more than one non-prime attribute on the right-hand side, is better because it's closer to a decomposition in 6NF.
But you should use the version your text and your professor require, keeping in mind that it's a false requirement. It's false in the sense that changing the notation from the second version to the first has no effect on whether you've actually found the minimum cover, and it has no substantial effect on the work you need to do to compute the minimum cover.
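That equivalence is easy to check mechanically. Here is a minimal Python sketch (the helper names `closure` and `implies` are my own, not from any text) verifying that each set of FDs implies every FD of the other, so they are equivalent covers:

```python
def closure(attrs, fds):
    """Closure of attribute set `attrs` under FDs given as (lhs, rhs) set pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def implies(fds, lhs, rhs):
    """True if `fds` logically implies lhs -> rhs."""
    return rhs <= closure(lhs, fds)

split    = [({'A1', 'A2'}, {'B1'}), ({'A1', 'A2'}, {'B2'})]
combined = [({'A1', 'A2'}, {'B1', 'B2'})]

# Each set implies every FD of the other, so the two notations are interchangeable.
print(all(implies(combined, l, r) for l, r in split))   # True
print(all(implies(split, l, r) for l, r in combined))   # True
```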
Related
Given these functional dependencies for
R: {A,B,C,D,E,F}
AC->EF
E->CD
C->ADEF
BDF->ACD
I got this as the canonical cover:
E->C
C->ADEF
BF->C
And then broke it down to Boyce Codd Normal Form:
Relation 1: {C,A,D,E,F}
Relation 2: {B,F,C}
I figured that this is lossless and dependency preserving, but is that true? Starting from the original functional dependencies, BDF->ACD no longer appears in any of my relations; starting from my calculated canonical cover instead, all of my functional dependencies are preserved.
So that question is: Is this decomposition to BCNF dependency preserving?
A decomposition preserves the dependencies if and only if the union of the projection of the dependencies on the decomposed relations is a cover of the dependencies of the relation.
So, to know whether a decomposition preserves the dependencies, it is not sufficient to check if the dependencies of a particular cover have been preserved (for instance, by looking at whether some decomposed relation has all the attributes of the dependency). For instance, in a relation R(ABC) with a cover F = {A→B, B→C, C→A}, one could think that in the decomposition R1(AB) and R2(BC) the dependency C→A is not preserved. But if you project F on AB you obtain A→B, B→A, and projecting it on BC you obtain B→C, C→B; from their union you can also derive C→A.
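The projection argument can be checked mechanically. Below is a brute-force Python sketch (the helper names are my own; it enumerates every subset of the decomposed relation's attributes, so it is only practical for small schemas):

```python
from itertools import combinations

def closure(attrs, fds):
    """Closure of `attrs` under FDs given as (lhs, rhs) set pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def project(fds, rel_attrs):
    """Project `fds` onto `rel_attrs`: for every subset X of rel_attrs,
    emit X -> (closure(X) intersected with rel_attrs), minus trivial parts."""
    projected = []
    attrs = sorted(rel_attrs)
    for k in range(1, len(attrs) + 1):
        for xs in combinations(attrs, k):
            x = set(xs)
            y = closure(x, fds) & set(rel_attrs)
            if y - x:
                projected.append((x, y - x))
    return projected

F = [({'A'}, {'B'}), ({'B'}, {'C'}), ({'C'}, {'A'})]
G = project(F, {'A', 'B'}) + project(F, {'B', 'C'})

# C -> A is derivable from the union of the two projections.
print({'A'} <= closure({'C'}, G))   # True
```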
The check is not simple, even though polynomial algorithms exist that perform this task (one is described, for instance, in J. Ullman, Principles of Database Systems, Computer Science Press, 1983).
Assuming the dependencies that you have given form a cover of the dependencies of the relation, the canonical cover that you have found is incorrect. In fact BF -> C cannot be derived from the original dependencies.
For this reason, your decomposition is not correct, since R2(BCF) is not in BCNF (actually, it is not in 2NF).
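You can verify that BF -> C does not follow by computing the attribute closure of {B, F} under the original dependencies; a small Python sketch (the `closure` helper is my own):

```python
def closure(attrs, fds):
    """Closure of `attrs` under FDs given as (lhs, rhs) set pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Original dependencies: AC->EF, E->CD, C->ADEF, BDF->ACD
F = [({'A', 'C'}, {'E', 'F'}),
     ({'E'}, {'C', 'D'}),
     ({'C'}, {'A', 'D', 'E', 'F'}),
     ({'B', 'D', 'F'}, {'A', 'C', 'D'})]

# No dependency's left side is contained in {B, F}, so nothing is added.
print(sorted(closure({'B', 'F'}, F)))        # ['B', 'F']
print({'C'} <= closure({'B', 'F'}, F))       # False: BF -> C does not follow
```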
One possible canonical cover of R is the following:
BDF → C
C → A
C → E
C → F
E → C
E → D
Following the analysis algorithm, there are two possible decompositions into BCNF (depending on the dependencies chosen for elimination). One is:
R1 = (ACDEF)
R2 = (BC)
while the other is:
R1 = (ACDEF)
R3 = (BE)
(note that BC and BE are candidate keys of the original relation, together with BDF).
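The candidate-key claim can be checked with attribute closures. A Python sketch (the `closure` helper is my own), using the canonical cover listed above:

```python
def closure(attrs, fds):
    """Closure of `attrs` under FDs given as (lhs, rhs) set pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Canonical cover: BDF->C, C->A, C->E, C->F, E->C, E->D
F = [({'B', 'D', 'F'}, {'C'}), ({'C'}, {'A'}), ({'C'}, {'E'}),
     ({'C'}, {'F'}), ({'E'}, {'C'}), ({'E'}, {'D'})]
R = {'A', 'B', 'C', 'D', 'E', 'F'}

# Each candidate key determines all of R...
for key in [{'B', 'C'}, {'B', 'E'}, {'B', 'D', 'F'}]:
    print(sorted(key), closure(key, F) == R)   # True in every case

# ...while neither B nor C alone does.
print(closure({'B'}, F) == R, closure({'C'}, F) == R)   # False False
```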
A cover of the dependencies in R1 is:
C → A
C → E
C → F
E → C
E → D
while both in R2 and R3 no non-trivial dependencies hold.
From this, we can conclude that neither decomposition preserves the dependencies; for instance, the following dependency (and all those derived from it) cannot be obtained:
BDF → C
I'm taking a class on databases and I'm doing an assignment on functional dependencies. As an example of taking given dependencies and deriving other non-trivial dependencies using Armstrong's Axioms, the TA wrote this and I can't wrap my head around it.
Considering the relation R(c,p,h,s,e,n) and F the set of functional dependencies {1. c->p, 2. hs->c, 3. hp->s, 4. ce->n, 5. he->s}:
Iteration 1:
From F, we can build F1
6. hs->p (transitivity: 1+2)
7. hc->s (pseudo-transitivity: 1+3)
8. hp->c
1. hp->hs (reflexivity 3)
2. hp->c (transitivity: 8.1+2)
9. he->c
1. he->hs (reflexivity: 4)
2. he->c (transitivity: 9.1+2)
I understand most of it fine except the cases where 'reflexivity' is used (using quotes because that's pretty far from the definition of reflexivity in my textbook). Can anyone tell me how that's reflexivity? Also, how do I know when an iteration is over? Couldn't you find an infinity of ways to rewrite functional dependencies?
These are the classical Armstrong’s Axioms (see for instance wikipedia):
Reflexivity: If Y ⊆ X then X → Y
Augmentation: If X → Y then XZ → YZ for any Z
Transitivity: If X → Y and Y → Z, then X → Z
So in your example, to derive hp → c you can proceed in the following way:
1. hp → s (given)
2. hp → hs (by augmentation of 1 adding h)
3. hs → c (given)
4. hp → c (by transitivity of 2 + 3)
Note that to produce hp → hs from hp → s the axiom to use is Augmentation, in which the role of Z is taken by h, and not Reflexivity, and this is the axiom to use also to derive he → c (by Reflexivity you can only derive, for instance, hp → hp, hp → p, hp → h).
You are also asking:
How do I know when an iteration is over? Couldn't you find an infinity of ways to rewrite functional dependencies?
The Armstrong’s Axioms can be applied to a set of functional dependencies only a finite number of times before no new functional dependencies are produced. This is simple to show, since the number of attributes is finite and, given n attributes, you can have at most 2^n * 2^n different functional dependencies (since you can have any subset of the attributes on both the left and the right part, of course including the trivial dependencies).
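The same termination argument applies to the standard attribute-closure computation: each pass either adds an attribute or stops, so it reaches a fixpoint after finitely many passes. A Python sketch (the helper name is my own) applied to the FDs from the question, confirming the derived dependencies hp → c and he → c:

```python
def closure(attrs, fds):
    """Closure of `attrs` under FDs given as (lhs, rhs) set pairs.
    Terminates because each iteration of the loop adds at least one
    attribute, and the attribute set is finite."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# F = {c->p, hs->c, hp->s, ce->n, he->s}
F = [({'c'}, {'p'}), ({'h', 's'}, {'c'}), ({'h', 'p'}, {'s'}),
     ({'c', 'e'}, {'n'}), ({'h', 'e'}, {'s'})]

print(sorted(closure({'h', 'p'}, F)))   # ['c', 'h', 'p', 's'] — so hp -> c holds
print(sorted(closure({'h', 'e'}, F)))   # ['c', 'e', 'h', 'n', 'p', 's'] — so he -> c holds
```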
A name doesn't tell you anything except what somebody decided to call something.
The trivial FD X -> X holds in any relation with the attributes in X. I.e., a set of attributes in a relation functionally determines itself; that's reasonably called reflexive. It happens that a set of attributes also functionally determines every subset of itself, that "reflexivity" was chosen as the name for this more general rule, and that the more general rule was chosen as one of a set of sufficient but non-redundant rules.
Armstrong's axioms have been shown to be sound & complete. Sound means they only generate implied FDs. Complete means that if you keep applying the axioms until none of them yields a new FD, then you get all the FDs that can be derived from the original set, i.e. all those that must also hold whenever the original ones hold. Any textbook tells you that you can generate such a closure of a set of FDs by doing just that.
There are also sound & complete axiom sets for FDs + MVDs. But there aren't for FDs + JDs.
Given the set of dependencies AB->C, BD->EF, AD->GH, A->I, H->J.
How would you find a minimal cover? By applying a process described in a book I get: AB->C, A->I, BD->EF, AD->GH, H->J instead of AB->CI, BD->EF, AD->GHIJ. Is it possible to combine AB->C and A->I into AB->CI and get rid of A->I?
The functional dependencies of a minimal cover of a set of dependencies F must satisfy four conditions:
They must be a cover of F (of course!)
Each right part has only one attribute
Each left part must not have extraneous attributes (that is, attributes that can be removed from the left part while the resulting dependency can still be derived)
No dependency of the cover is redundant (i.e. can be derived from the remaining dependencies).
So this means that a minimal cover of the example is the following (note that there can be more than one minimal cover, that is, more than one set satisfying the above conditions):
{ AB → C
AD → G
AD → H
A → I
BD → E
BD → F
H → J }
Of course to this set you can apply the Armstrong’s axioms to derive many other dependencies (for instance AD → GH, AB → CI, AD → GHIJ, ABD → EJ, etc.) but these are not part of any minimal cover of F (that is, they do not satisfy the above definition).
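That the seven dependencies above really cover F can be checked with attribute closures: every FD of F must follow from the cover and vice versa. A Python sketch (helper name is my own):

```python
def closure(attrs, fds):
    """Closure of `attrs` under FDs given as (lhs, rhs) set pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Original set F: AB->C, BD->EF, AD->GH, A->I, H->J
F = [({'A', 'B'}, {'C'}), ({'B', 'D'}, {'E', 'F'}), ({'A', 'D'}, {'G', 'H'}),
     ({'A'}, {'I'}), ({'H'}, {'J'})]

# Proposed minimal cover Fc
Fc = [({'A', 'B'}, {'C'}), ({'A', 'D'}, {'G'}), ({'A', 'D'}, {'H'}),
      ({'A'}, {'I'}), ({'B', 'D'}, {'E'}), ({'B', 'D'}, {'F'}),
      ({'H'}, {'J'})]

# Each FD of F follows from Fc, and each FD of Fc follows from F.
print(all(r <= closure(l, Fc) for l, r in F))   # True
print(all(r <= closure(l, F) for l, r in Fc))   # True
```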
Hey all I have an assignment that says:
Let R(ABCD) be a relation with functional dependencies
A → B, C → D, AD → C, BC → A
Which of the following is a lossless-join decomposition of R into Boyce-Codd Normal Form (BCNF)?
I have been researching and watching videos on youtube and I cannot seem to find how to start this. I think I'm supposed to break it down to subschemas and then fill out a table to find which one is lossless, but I'm having trouble getting started with that. Any help would be appreciated!
Your question
Which of the following is a lossless-join decomposition of R into
Boyce-Codd Normal Form (BCNF)?
suggests that you have a set of options and have to choose which of them is a lossless decomposition. Since you have not mentioned the options, I will first (PART A) decompose the relation into BCNF (first to 3NF, then BCNF) and then (PART B) illustrate how to check whether a given decomposition is a lossless-join decomposition or not. If you are just interested in knowing how to check whether a given BCNF decomposition is lossless, jump directly to PART B of my answer.
PART A
To convert a relation R and a set of functional dependencies (FD's) into 3NF you can use Bernstein's Synthesis. To apply Bernstein's Synthesis:
First, we make sure the given set of FD's is a minimal cover.
Second, we take each FD and make it its own sub-schema.
Third, we try to combine those sub-schemas.
For example in your case:
R = {A,B,C,D}
FD's = {A->B,C->D,AD->C,BC->A}
First we check whether the set of FD's is a minimal cover (singleton right-hand side, no extraneous left-hand side attribute, no redundant FD):
Singleton RHS: All the given FD's already have singleton RHS.
No extraneous LHS attribute: None of the FD's have an extraneous LHS attribute that needs to be removed.
No redundant FD's: There is no redundant FD.
Hence the given set of FD's is already a minimal cover.
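These minimality checks can be automated with attribute closures. A Python sketch (helper names are my own) confirming that no FD is redundant and, as one example, that D is not extraneous in AD->C:

```python
def closure(attrs, fds):
    """Closure of `attrs` under FDs given as (lhs, rhs) set pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# F = {A->B, C->D, AD->C, BC->A}
F = [({'A'}, {'B'}), ({'C'}, {'D'}), ({'A', 'D'}, {'C'}), ({'B', 'C'}, {'A'})]

def redundant(i, fds):
    """Is fds[i] derivable from the remaining FDs?"""
    rest = fds[:i] + fds[i + 1:]
    lhs, rhs = fds[i]
    return rhs <= closure(lhs, rest)

print(any(redundant(i, F) for i in range(len(F))))   # False: no redundant FD

# D is not extraneous in AD -> C: dropping it leaves A -> C underivable.
print({'C'} <= closure({'A'}, F))   # False, so D is needed on the left side
```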
Second we make each FD its own sub-schema. So now we have (the key of each sub-schema is the left-hand side of its FD):
R1={A,D,C}
R2={B,C,A}
R3={C,D}
R4={A,B}
Third we see if any of the sub-schemas can be combined or omitted. We see that R3 is contained in R1 and R4 is contained in R2, and R1 and R2 together already have all the attributes of R, so R3 and R4 can be omitted. So now we have -
S1 = {A,D,C}
S2 = {B,C,A}
This is in 3NF. Now to check for BCNF we check whether any of these relations (S1, S2) violates the conditions of BCNF (i.e. for every functional dependency X->Y, the left-hand side X has to be a superkey). In this case neither violates BCNF, so the relation is also decomposed into BCNF.
PART B
When you apply Bernstein's Synthesis as above to decompose R, the decomposition is always dependency preserving. Now the question is: is the decomposition lossless? To check that, we can use the following method:
Create a table as shown in figure 1, with the number of rows equal to the number of decomposed relations and the number of columns equal to the number of attributes in the original relation R.
We put an 'a' in each column whose attribute is present in the respective decomposed relation, as in figure 1. Now we go through all the FD's {C->D, A->B, AD->C, BC->A} one by one and add an 'a' wherever possible. For example, the first FD is C->D. Since both rows have an 'a' in column C and there is an empty slot in the second row of column D, we put an 'a' there, as shown in the right part of the image. We stop as soon as one of the rows is completely filled with 'a', which indicates that it is a lossless decomposition. If we go through all the FD's and none of the rows of our table gets completely filled with 'a', then it is a lossy decomposition.
Also, note that if it is a lossy decomposition, we can always make it lossless by adding one more relation to our set of decomposed relations, consisting of all the attributes of the primary key.
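This tabular method is the chase test, and it can be sketched in a few lines of Python (the function and variable names are my own; cells hold 'a' for distinguished values and a ('b', row) tag otherwise):

```python
def chase_lossless(attrs, relations, fds):
    """Chase test: returns True if the decomposition of `attrs` into
    `relations` is lossless under `fds` (given as (lhs, rhs) set pairs)."""
    table = [{a: ('a' if a in rel else ('b', i)) for a in attrs}
             for i, rel in enumerate(relations)]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for r1 in table:
                for r2 in table:
                    if r1 is not r2 and all(r1[a] == r2[a] for a in lhs):
                        for a in rhs:
                            if r1[a] != r2[a]:
                                # Equate the two cells, preferring 'a'.
                                v = 'a' if 'a' in (r1[a], r2[a]) else r1[a]
                                r1[a] = r2[a] = v
                                changed = True
    # Lossless iff some row is all distinguished values.
    return any(all(row[a] == 'a' for a in attrs) for row in table)

attrs = ['A', 'B', 'C', 'D']
fds = [({'A'}, {'B'}), ({'C'}, {'D'}), ({'A', 'D'}, {'C'}), ({'B', 'C'}, {'A'})]

print(chase_lossless(attrs, [{'A', 'D', 'C'}, {'B', 'C', 'A'}], fds))   # True
print(chase_lossless(attrs, [{'A', 'B'}, {'C', 'D'}], fds))             # False
```

With two rows this pairwise equating is sufficient; a fully general chase would equate a symbol everywhere it occurs in the table.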
I suggest you see this video for more examples of this method. There are also other ways to check for lossless-join decomposition, which involve relational algebra.
Which non-trivial functional dependencies hold in the following table?
Can anyone explain step by step the rules please?
A B C
------------
a1 b2 c1
a2 b1 c6
a3 b2 c4
a1 b2 c5
a2 b1 c3
a1 b2 c7
I'll start with a disclaimer to state that my knowledge of functional dependencies is limited to what was explained in the Wikipedia article, and that I currently don't have the need nor the inclination to study up on it further.
However, since OP asked for clarification, I'll attempt to clarify how I obtained the seemingly correct answer that I posted in the comments.
First off, this is Wikipedia's definition:
Given a relation R, a set of attributes X in R is said to
functionally determine another set of attributes Y, also in R,
(written X → Y) if, and only if, each X value is associated with
precisely one Y value; R is then said to satisfy the functional
dependency X → Y.
Additionally, Wikipedia states that:
A functional dependency FD: X → Y is called trivial if Y is a
subset of X.
Taking these definitions, I arrive at the following two non-trivial functional dependencies for the given relation:
A → B
C → {A, B}
Identifying these was a completely inductive process. Rather than applying a series of rules, formulas and calculations, I looked at the presented data and searched for those constraints that satisfy the above definitions.
In this case:
A → B
There are three possible values presented for A: a1, a2 and a3. Looking at the corresponding values for B, you'll find the following combinations: a1 → b2, a2 → b1, and a3 → b2. Or, every value of A is associated with precisely one B value, conforming to the definition.
C → {A, B}
The same reasoning goes for this dependency. In this case, identifying it is a little bit easier as the values for C are unique in this relation. In this sense, C could be considered as a key. In database terms, a candidate key is exactly that: a minimal set of attributes that uniquely identifies every tuple.
Undoubtedly, there's a way to mathematically derive the functional dependencies from the data, but for simple cases like this, the inductive process seems to work just fine.
So, non-trivial functional dependencies in the above table are:
1. A->B
2. A,C->B
3. B,C->A
4. C->A,B
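For a small instance like this, the satisfied FDs can also be enumerated by brute force. A Python sketch (the names are my own; note that dependencies discovered this way are only guaranteed to hold in this particular instance, not in every instance of the schema):

```python
from itertools import combinations

rows = [('a1', 'b2', 'c1'), ('a2', 'b1', 'c6'), ('a3', 'b2', 'c4'),
        ('a1', 'b2', 'c5'), ('a2', 'b1', 'c3'), ('a1', 'b2', 'c7')]
cols = {'A': 0, 'B': 1, 'C': 2}

def holds(lhs, rhs):
    """X -> Y holds in the data iff equal X-values always have equal Y-values."""
    seen = {}
    for row in rows:
        x = tuple(row[cols[a]] for a in lhs)
        y = tuple(row[cols[a]] for a in rhs)
        if seen.setdefault(x, y) != y:
            return False
    return True

# Enumerate all non-trivial FDs with a single attribute on the right.
found = [(lhs, a) for k in (1, 2) for lhs in combinations('ABC', k)
         for a in 'ABC' if a not in lhs and holds(lhs, (a,))]
print(found)
```

The result matches the answer above: A->B, C->A, C->B (i.e. C->AB), plus the augmented forms AC->B and BC->A; for example, AB->C does not hold because (a1, b2) maps to several different C values.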