As the title says, I have trouble understanding why, if we have X->A and Y->B, it is wrong to write XY->AB. The way I understand it, if A is functionally dependent on X and B is functionally dependent on Y, then when we have XY on the left side we should have their corresponding values on the right side. My book says this is wrong, so can anyone give me an example where this is proven wrong? Thanks in advance :)
You're going about this the wrong way.
In order for "{X->A, Y->B}, therefore XY->AB" to be true, you need to prove that you can derive XY->AB from {X->A, Y->B}, using only Armstrong's axioms and the additional rules derived from Armstrong's axioms.
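For what it's worth, such a derivation does exist (the rule is usually called composition), and it needs nothing beyond augmentation and transitivity:
1. X->A (given)
2. XY->AY (augmentation of 1 by Y)
3. Y->B (given)
4. AY->AB (augmentation of 3 by A)
5. XY->AB (transitivity of 2 and 4)
So XY->AB is in fact derivable from {X->A, Y->B}.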
If X uniquely determines A and, similarly, Y uniquely determines B, then any combination XY uniquely determines AB.
Hence, {X->A, Y->B} implies XY->AB; the inference is true.
More supporting links.
http://en.wikipedia.org/wiki/Functional_dependency
See the composition rule there. Not credible enough?
Then, in the following link, slide 9 says:
Textbook, page 341: "… X → A, and Y → B does not imply that XY → AB."
Prove that this statement is wrong.
http://www.ida.liu.se/~TDDD37/fo/fo-normalization
Moreover, Mike's answer is trying to prove the converse, which is not necessarily true.
I am learning about databases, and I came across this:
Table P(A,B,C,D,E). The FDs are: AB->CDE, C->D, D->B, D->E. Which of the following FDs are in the closure of P: 1) A->C 2) C->A 3) C->B
The correct answer was marked as 3). Working backwards, I can work out that the "closure of P" is the set of all FDs that hold in table P, but I do not know if that is correct.
I thought closures were only for attributes (showing what attributes you can derive from a given attribute), rather than for the whole table. Was there a mistake in the problem, or am I missing some information about closures?
The question is asking which of those three answers are implied by the set of functional dependencies you're given. For example, AB->CDE implies AB->C, AB->D, and AB->E. Also, C->D and D->B together imply C->B (the answer).
To determine which of the three possible answers is right, compute the closure of each left-hand side, and see whether the right-hand side appears in that closure. The closure of C is BCDE.
See Armstrong's axioms
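If it helps, here is a minimal Python sketch of that attribute-closure computation (the function and variable names are my own, not anything from the question or a particular textbook):

def closure(attrs, fds):
    # attrs: a set of attribute names; fds: a list of (lhs, rhs) pairs of sets
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # if the left-hand side is already determined, pull in the right-hand side
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [({"A", "B"}, {"C", "D", "E"}), ({"C"}, {"D"}),
       ({"D"}, {"B"}), ({"D"}, {"E"})]
print(sorted(closure({"C"}, fds)))  # ['B', 'C', 'D', 'E'], so C->B is implied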
I am going to implement a personal recommendation system using the Apriori algorithm.
I know there are three useful concepts: 'support', 'confidence' and 'lift', and I already know what they mean. I also know how to find the frequent item sets using the support concept. But I wonder why the confidence and lift concepts are there if we can already find frequent item sets using the support rule.
Could you explain why 'confidence' and 'lift' are needed when 'support' has already been applied, and how I can proceed with 'confidence' and 'lift' once I have used support on the data set?
I would be highly obliged if you could answer with SQL queries, since I am still an undergraduate. Thanks a lot.
Support alone yields many redundant rules.
e.g.
A -> B
A, C -> B
A, D -> B
A, E -> B
...
The purpose of lift and similar measures is to remove complex rules that are not much better than the simple rule.
In the above case, the simple rule A -> B may have less confidence than the complex rules, but much more support. The other rules may be just a coincidence of this strong pattern, with marginally stronger confidence because of the smaller sample size.
Similarly, if you have:
A -> B confidence: 90%
C -> D confidence: 90%
A, C -> B, D confidence: 80%
then the last rule is actually bad, despite its high confidence!
The first two rules yield the same outcome with higher confidence. If you assume the first two rules hold independently, you would already expect the combination to hold about 81% of the time (90% · 90%), so an observed confidence of 80% means the combined rule performs worse than the simple rules it is built from; it adds no information.
Thus, support and confidence alone are not enough to consider.
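To make the three measures concrete, here is a minimal Python sketch over a toy basket list (the data and all names are mine, purely illustrative; you asked for SQL, but the arithmetic is the same in any language):

transactions = [{"A", "B"}, {"A", "B"}, {"A"}, {"C"}, {"B", "C"}]
n = len(transactions)

def support(itemset):
    # fraction of transactions containing every item of the itemset
    return sum(itemset <= t for t in transactions) / n

supp_a  = support({"A"})        # 0.6
supp_b  = support({"B"})        # 0.6
supp_ab = support({"A", "B"})   # 0.4

confidence = supp_ab / supp_a   # P(B | A) = 0.67
lift = confidence / supp_b      # 1.11 > 1: A and B co-occur more often
                                # than independence would predict
print(supp_ab, confidence, lift)

Support filters for frequent patterns, confidence measures how reliably the right-hand side follows from the left-hand side, and lift compares that reliability against what independence alone would give; the last comparison is exactly what prunes the redundant rules above.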
In SHOIN(D), the DL family that is equivalent to OWL-DL, is this expression legal:
F ⊑ (≤1 r. D) ⊓ (¬ (=0 r. D))
where F and D are concepts and r is a role? I want to express that each instance of F is related through r to at most one instance of D, and not to zero instances.
In general, how do I decide whether some expression is legal w.r.t. a specific variant of DL? I thought that using the BNF syntax of the variant may be what I'm looking for.
One easy way is to check whether you can write it in Protege. Most of the things that you can write in Protege will be legal OWL-DL. In Protege you can write:
F SubClassOf ((r max 1 D) and not(r exactly 0 D))
Of course, saying that something has at most 1 value and not exactly 0 would be exactly the same as saying that it has exactly 1:
F SubClassOf r exactly 1 D
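In DL notation, that collapse is a two-step equivalence (using the symbols from the question): ¬(=0 r.D) is just (≥1 r.D), so (≤1 r.D) ⊓ ¬(=0 r.D) ≡ (≤1 r.D) ⊓ (≥1 r.D) ≡ (=1 r.D).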
But there are a few things that you'll be able to do in Protege that won't be legal OWL-DL. The most direct way to find out what these are is the standard, specifically §11 Global Restrictions on Axioms in OWL 2 DL. Generally, the only problem you might run into is trying to use composite properties where they're not allowed.
If you don't want to check by hand, then you could try uploading your ontology into the OWL Validator and selecting the OWL2 DL profile.
I desperately need help with understanding Boyce-Codd and finding the candidate keys.
I found a link here, http://djitz.com/neu-mscs/how-to-find-candidate-keys/ , which I have understood for the most part, but I get stuck.
E.g.
(A B C D E F)
A B → C D E
B C D → A
B C E → A D
B D → E
Right, as far as I understand from the link, you find the attributes that appear only on the left sides, which is just B, and the attributes that appear only on the right sides, of which there are none.
Now where do I go from here? I know all candidate keys will have B in them, but I need guidance on finding the candidate keys after that. Can someone explain in simple language?
The linked article isn't written particularly well. (That's an observation, not a criticism. The author's first language isn't English.) I'll try to rewrite the algorithm. This isn't me telling you how to do this. It's my interpretation of how the original author is telling you to do this.
1. Identify the attributes that appear on neither the left side nor the right side of any FD.
2. Identify the attributes that appear only on the right side of FDs.
3. Identify the attributes that appear only on the left side of FDs.
4. Combine the attributes from steps 1 and 3.
5. Compute the closure of the attributes from step 4. If the closure comprises all the attributes, then the attributes from step 4 make up the only candidate key. (No matter how many candidate keys there are, every one of them must contain these attributes.)
6. Identify the attributes not included in step 4 or step 2.
7. Compute the closure of the attributes from step 4 plus every possible combination of the attributes from step 6.
So for the FDs you posted, you'd end up with this.
Step 1: {F}
Step 2: {}
Step 3: {B}
Step 4: {BF}
Step 5: The closure of {BF} is {BF}. That's not all the attributes. (But every candidate key must contain {BF}.)
Step 6: {ACDE}
Step 7: Compute the closure of each of these sets of attributes.
{ABF}
{CBF}
{DBF}
{EBF}
{ACBF}
{ADBF}
{AEBF}
{CDBF}
{CEBF}
{DEBF}
{ACDBF}
{ADEBF}
{CDEBF}
If I got those combinations right, every candidate key will be found among the possibilities in step 7. In your example, there are 3 candidate keys.
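If you want to double-check the by-hand work, here is a minimal brute-force sketch in Python (structure and names are mine, not part of the linked method) that enumerates the candidate keys directly; it agrees that there are exactly three:

from itertools import combinations

attrs = set("ABCDEF")
fds = [({"A", "B"}, {"C", "D", "E"}), ({"B", "C", "D"}, {"A"}),
       ({"B", "C", "E"}, {"A", "D"}), ({"B", "D"}, {"E"})]

def closure(x):
    x = set(x)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # if the left-hand side is contained, pull in the right-hand side
            if lhs <= x and not rhs <= x:
                x |= rhs
                changed = True
    return x

keys = []
for size in range(1, len(attrs) + 1):
    for combo in combinations(sorted(attrs), size):
        candidate = set(combo)
        # a candidate key is a superkey with no smaller superkey inside it;
        # iterating by increasing size guarantees supersets of found keys are skipped
        if closure(candidate) == attrs and not any(set(k) < candidate for k in keys):
            keys.append(combo)

print(keys)  # [('A', 'B', 'F'), ('B', 'C', 'D', 'F'), ('B', 'C', 'E', 'F')]

Note how every key contains B and F, as step 5 predicted.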
http://www.sroede.nl/projects/fdhelper.aspx
This would help: just put in your relation and FDs, then click Generate at the bottom.
I am halfway through reading the OWL 2 primer and I am having trouble understanding universal quantification.
The example given is
EquivalentClasses(
:HappyPerson
ObjectAllValuesFrom( :hasChild :HappyPerson )
)
It says somebody is a happy person exactly if all their children are happy persons. But what if John Doe has no children? Can he be an instance of HappyPerson? What about his parent?
I also find this part very confusing; it says:
Hence, by our above statement, every childless person would be qualified as happy.
But wouldn't that violate the ObjectAllValuesFrom() constructor?
I think the primer actually does quite a good job at explaining this, particularly the following:
Natural language indicators for the usage of universal quantification are words like "only," "exclusively," or "nothing but."
To simplify this a bit further, consider the expression you've given:
HappyPerson ≡ ∀ hasChild . HappyPerson
This says that a HappyPerson is someone who only has children who are also HappyPerson (are also happy). Logically, this actually says nothing about the existence of instances of happy children. It simply serves as a universal constraint on any children that may exist (note that this includes any instances of HappyPerson that don't have any children).
Compare this to the existential quantifier, exists (∃):
HappyPerson ≡ ∃ hasChild . HappyPerson
This says that a HappyPerson is someone who has at least one child who is also a HappyPerson. In contrast to (∀), this expression actually implies the existence of a happy child for every instance of HappyPerson.
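For reference, the standard first-order readings behind the two operators (written in the same style as the set-builder formula further down) are:
x ∈ (∀ hasChild . HappyPerson)  iff  ∀y (hasChild(x, y) → HappyPerson(y))
x ∈ (∃ hasChild . HappyPerson)  iff  ∃y (hasChild(x, y) ∧ HappyPerson(y))
If x has no hasChild successors at all, the first condition is vacuously true (the implication never gets a true antecedent), while the second is false. That vacuous truth is exactly why childless individuals end up in the universally quantified class.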
The answer, albeit initially unintuitive, lies in the interpretation/semantics of the ObjectAllValuesFrom OWL construct in first-order logic (actually, Description Logic). Fundamentally, the ObjectAllValuesFrom construct relates to the logical universal quantifier (∀), and the ObjectSomeValuesFrom construct relates to the logical existential quantifier (∃).
I am facing the same kind of issue while reading the "OWL 2 Web Ontology Language Primer (Second Edition - 2012)" and I am not convinced that the answer by Sharky clarifies the issue.
At page 15, when introducing the universal quantifier ∀, the book states:
"Another property restriction, called universal quantification is used to describe a class of individuals for which all related individuals must be instances of a given class. We can use the following statement to indicate that somebody is a happy person exactly if all their children are happy persons."
[I omit the OWL statements in the different syntaxes; they can be found in the book.]
I think that a more formal, and maybe less ambiguous, representation of what the author states is
(1) HappyPerson = {x | ∀y (x HasChild y → y ∈ HappyPerson)}
I hope every reader understands this notation, because I find the notation used in the answer less clear (or maybe I am just not accustomed to it).
The book proceeds:
"... There is one particular misconception concerning the universal role restriction. As an example, consider the above happiness axiom. The intuitive reading suggests that in order to be happy, a person must have at least one happy child [my note: actually the definition states that every children should be happy, not just at least one, in order for his/her parents to be happy. This appears to be a lapsus of the author]. Yet, this is not the case: any individual that is not a “starting point” of the property hasChild is a class member of any class defined by universal quantification over hasChild. Hence, by our above statement, every childless person would be qualified as happy . ..."
That is, the author states that (assume '~' for logical NOT), given
(2) ChildlessPerson = { x | ~∃y (x HasChild y) }
then (1) and the meaning of ∀ imply
(3) ChildlessPerson ⊂ HappyPerson
This does not seem true to me.
If it were true, then every child, insofar as s/he is a childless person, would be happy, and so only parents could be unhappy persons.
Consider this model:
Persons = {a, b, c}, HasChild = {(a, b)}, HappyPerson = {a, b}
and c is unhappy (independently of the closed-world or open-world assumption). It is a possible model, which falsifies the author's thesis.