Predicate calculus, implication - artificial-intelligence

I'm new to predicate calculus, and to AI in general, but I know that an implication is false if the part before "->" is true and the part after it is false; otherwise it's true.
But how is it used if I don't know the side after "->"?
For example, parent(X, Y) must be known to decide whether this predicate is true or false, although parent(X, Y) is exactly what we want to infer.
∀ X ∀ Y father(X, Y) ∨ mother(X, Y) → parent(X, Y)

Take a look at this article on Wikipedia. You have multiple choices there: your statement can be true but also false. This was also covered in the Udacity AI course, where they just put it in a separate category, partially known. That is still better than unknown, but you'll have to use something else.
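To see how such a rule is used when the consequent isn't known in advance: in forward chaining the implication is read as a production rule, and modus ponens adds the consequent once the antecedent is established — you never need to test parent(X, Y) directly. A minimal Python sketch (the fact representation and helper name are illustrative, not from any library):

```python
# Minimal forward-chaining sketch: the rule
# "father(X, Y) v mother(X, Y) -> parent(X, Y)" is applied by modus
# ponens: whenever the antecedent holds, the consequent is derived.
facts = {("father", "tom", "bob"), ("mother", "ann", "sue")}

def infer_parents(facts):
    """Apply the rule once: derive parent(X, Y) from father/mother facts."""
    derived = set()
    for (pred, x, y) in facts:
        if pred in ("father", "mother"):
            derived.add(("parent", x, y))
    return facts | derived

facts = infer_parents(facts)
print(("parent", "tom", "bob") in facts)   # True: inferred, not pre-known
```

The implication's truth table never has to be evaluated on an unknown consequent; the rule only fires in the direction antecedent-to-consequent.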

Related

Prolog: How do I remove symmetrical values in predicates

I have a question regarding the removal of symmetrical values in
my predicates. These predicates are in my database and I used
assertz to add them there.
So I have:
foo(a,b).
foo(b,a).
foo(c,d).
foo(e,f).
foo(f,e).
I'm trying to remove
foo(b,a).
foo(f,e).
And I tried to make this rule:
remove :- foo(A,B),foo(B,A),retract(foo(B,A)).
However, this removes all the predicates in my DB and I don't know how
to prevent that.
If someone could help me I'd really appreciate it!
Thank you.
There are two distinct semantics for retract/1:
immediate update view: upon backtracking, retracted clauses can no longer be seen (they become invisible immediately).
logical update view: upon backtracking, retracted clauses can still be seen (they become invisible only on the next call to the predicate). This update view is the ISO standard.
In the logical update view, for example, when the predicate remove/0 is called:
First it sees foo(a,b) and foo(b,a) and hence it retracts foo(b,a).
Afterward, upon backtracking, it sees foo(b,a) and foo(a,b) and hence it also retracts foo(a,b).
To solve the problem, you can use the ISO built-in predicate once/1 (which prevents backtracking).
:- dynamic foo/2.
foo(a,b).
foo(b,a).
foo(c,d).
foo(e,f).
foo(f,e).
remove :-
    once( ( foo(A, B),
            foo(B, A),
            retract(foo(B, A)) ) ).
To retract only one symmetrical fact, you can ask:
?- listing(foo), remove, listing(foo).
:- dynamic foo/2.
foo(a, b).
foo(b, a). % <== only this fact is retracted!
foo(c, d).
foo(e, f).
foo(f, e).
:- dynamic foo/2.
foo(a, b).
foo(c, d).
foo(e, f).
foo(f, e).
true.
To retract all symmetrical facts, you can define:
remove_all_sym :-
    (   remove
    ->  remove_all_sym
    ;   true
    ).
Example:
?- listing(foo), remove_all_sym, listing(foo).
:- dynamic foo/2.
foo(a, b).
foo(b, a). % <== this fact is retracted!
foo(c, d).
foo(e, f).
foo(f, e). % <== this fact is retracted!
:- dynamic foo/2.
foo(a, b).
foo(c, d).
foo(e, f).
NOTE A better alternative would be to avoid inserting symmetrical facts into the database:
assert_foo(A, B) :-
    (   foo(B, A)
    ->  true
    ;   assertz(foo(A, B))
    ).
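The same idea, independent of Prolog's clause-update semantics, can be sketched in Python (the helper name and list representation are illustrative): keep the first orientation of each pair seen and drop any later mirror image, which is exactly what remove_all_sym achieves on the fact base above.

```python
def remove_symmetric(pairs):
    """Keep the first orientation seen; drop any later mirror image.
    Mirrors the effect of remove_all_sym on an ordered list of foo/2 facts."""
    kept = []
    for (a, b) in pairs:
        if (b, a) not in kept:   # no mirror was kept earlier
            kept.append((a, b))
    return kept

facts = [("a", "b"), ("b", "a"), ("c", "d"), ("e", "f"), ("f", "e")]
print(remove_symmetric(facts))   # [('a', 'b'), ('c', 'd'), ('e', 'f')]
```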

Z3 Forall with array

Z3 returns unknown for this simple problem:
(assert
(forall ((y (Array Int Int)))
(= (select y 1) 0))
)
(check-sat)
I've found that it becomes sat if I negate the forall, but this seems like a particularly simple thing to be unable to solve.
This is causing issues because the class of problems I want to solve are more like,
(declare-fun u () Int)
(assert
(forall ((y (Array Int Int)) )
(=>
(= u 0) (<= (select y 1) 0))
)
)
(check-sat)
Where negating the forall alone is not the same problem, so that cannot be done here. Is there some way to pose this style of problem to Z3 to get an un/sat result?
Problems with quantifiers are always hard for SMT solvers, especially if they involve arrays and alternating quantifiers as in your example. You essentially have exists u. forall y. P(u, y). Z3, or any other SMT solver, will have a hard time dealing with these sorts of problems.
When you have quantified assertions like yours, with forall's either at the top level or nested with exists, the logic becomes semi-decidable. Z3 uses MBQI (model-based quantifier instantiation) to heuristically solve such problems, but it more often than not fails to do so. The issue isn't merely that z3 is not capable: there's no decision procedure for such problems, and z3 does its best.
You can try giving quantifier patterns for such problems to help z3, but I don't see an easy way to apply that in your problem. (Quantifier patterns apply when you have uninterpreted functions and quantified axioms. See https://rise4fun.com/z3/tutorialcontent/guide#h28). So, I don't think it'll work for you. Even if it did, patterns are very finicky to program with, and not robust with respect to changes in your specification that might otherwise look innocuous.
If you're dealing with such quantifiers, SMT solvers are probably just not a good fit. Look into semi-automated theorem provers such as Lean, Isabelle, Coq, etc., which are designed to deal with quantifiers in a much more disciplined way. Of course, you lose full automation, but most of these tools can use an SMT solver to discharge subgoals that are "easy" enough. That way, you still do the "heavy-lifting" manually, but most subgoals are automatically handled by z3. (Especially in the case of Lean, see here: https://leanprover.github.io/)
There's one extra closing (right) parenthesis, which needs to be removed. Also, add assert before the forall statement.
(assert ( forall ( (y (Array Int Int) ) )
(= (select y 1) 0)
))
(check-sat)
Run the above code and you should get unsat as the answer.
For the second program, alias' answer may be useful to you.

OWL-DL; determining if an expression is legal

In SHOIN(D), the DL that underlies OWL-DL:
Is this expression legal?
F ⊑ (≤1 r. D) ⊓ (¬ (=0 r. D))
Here F and D are concepts and r is a role. I want to express that each instance of F is related through r to at most one instance of D, and not to zero instances.
In general, how does one decide whether an expression is legal w.r.t. a specific variant of DL? I suspect that the BNF syntax of the variant may be what I'm looking for.
One easy way is to check whether you can write it in Protege. Most of the things that you can write in Protege will be legal OWL-DL. In Protege you can write:
F SubClassOf ((r max 1 D) and not(r exactly 0 D))
Of course, saying that something has at most 1 value, and not exactly 0, is exactly the same as saying that it has exactly 1:
F SubClassOf r exactly 1 D
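For reference, the same simplified axiom can be written in OWL 2 functional syntax (a sketch, assuming the IRIs :F, :r, and :D exist in your ontology):

```
SubClassOf( :F ObjectExactCardinality( 1 :r :D ) )
```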
But there are a few things that you'll be able to do in Protege that won't be legal OWL-DL. The most direct way to find out what these are is the standard, specifically §11 Global Restrictions on Axioms in OWL 2 DL. Generally, the only problem you might run into is trying to use composite properties where you're not allowed to.
If you don't want to check by hand, then you could try uploading your ontology into the OWL Validator and selecting the OWL2 DL profile.

In database design: "If A -> C and B -> C, then either A->B or B->A" is false. Why?

This was a homework question, but I got it wrong and I'm trying to figure out why. I originally stated that it was true. I know from logic that it can be the case that A is true and B is false and thus the statement doesn't hold, but databases don't work that way, right?
I understood this to mean "A determines C", "B determines C", then either A determines B or B determines A. Which seems like it must be true, because I can't think of any examples where this is false! Could anyone post an example to illustrate why I'm wrong?
A Cat is a Mammal
A Dog is a Mammal
A Cat is a Dog or a Dog is a Cat.
After a bit more thought, I think the "real" answer revolves somewhere in here:
Wiki: Functional Dependency
Reflexivity: If Y is a subset of X, then X → Y
Since we are talking databases, I'm fairly certain that "A->C and B->C, then A->B or B->A" is REALLY talking about sets:
A is a subset of C
B is a subset of C
A is a subset of B <- Not necessarily True
B is a subset of A <- Not necessarily True
Given a list of mammals... Cats are a subset of Mammals... Dogs are a Subset of Mammals... Cats/Dogs aren't subsets of each other.
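A concrete counterexample can also be checked mechanically. In the hypothetical relation below (the `fd_holds` helper is illustrative), A → C and B → C both hold, yet neither A → B nor B → A does:

```python
def fd_holds(rows, lhs, rhs):
    """True iff the functional dependency lhs -> rhs holds in rows.
    rows: list of dicts; lhs, rhs: tuples of attribute names."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False   # same lhs value maps to two rhs values
    return True

rows = [
    {"A": 1, "B": 10, "C": 7},
    {"A": 1, "B": 20, "C": 7},   # same A, different B: refutes A -> B
    {"A": 2, "B": 10, "C": 7},   # same B as row 1, different A: refutes B -> A
]
print(fd_holds(rows, ("A",), ("C",)))  # True
print(fd_holds(rows, ("B",), ("C",)))  # True
print(fd_holds(rows, ("A",), ("B",)))  # False
print(fd_holds(rows, ("B",), ("A",)))  # False
```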

Inference rules for functional dependencies, X->A, Y->B, XY->AB

As the title says, I have trouble understanding why, if we have X->A and Y->B, it is wrong to write XY->AB. The way I understand it, if A is functionally dependent on X and B is functionally dependent on Y, then when we have XY on the left side we should have their corresponding values on the right side. Anyway, my book says that this is wrong, so can anyone give me an example where it is proven wrong? Thanks in advance :)
You're going about this the wrong way.
In order for "{X->A, Y->B}, therefore XY->AB" to be true, you need to prove that you can derive XY->AB from {X->A, Y->B}, using only Armstrong's axioms and the additional rules derived from Armstrong's axioms.
If X uniquely determines A and, similarly, Y uniquely determines B, then any combination XY uniquely determines AB.
Hence, X->A and Y->B together imply XY->AB.
More supporting links:
http://en.wikipedia.org/wiki/Functional_dependency…
See the composition rule there. Not credible enough?
Then in the following link, Slide 9 says that
Textbook, page 341: ”… X → A, and Y → B does not imply that XY → AB.”
Prove that this statement is wrong.
http://www.ida.liu.se/~TDDD37/fo/fo-normalization
Moreover, Mike's answer is trying to prove the converse, which may not necessarily be true.
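The derivation via Armstrong's axioms — X->A gives XY->AY (augmentation by Y), Y->B gives AY->AB (augmentation by A), and transitivity yields XY->AB — can be spot-checked on data. The `fd_holds` checker below is an illustrative helper, not from any textbook:

```python
def fd_holds(rows, lhs, rhs):
    """True iff the functional dependency lhs -> rhs holds in rows."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

# Hypothetical relation satisfying X -> A and Y -> B.
rows = [
    {"X": 1, "Y": 1, "A": "a1", "B": "b1"},
    {"X": 1, "Y": 2, "A": "a1", "B": "b2"},
    {"X": 2, "Y": 1, "A": "a2", "B": "b1"},
]
assert fd_holds(rows, ("X",), ("A",)) and fd_holds(rows, ("Y",), ("B",))
# Composition: the derived dependency XY -> AB must then hold as well.
print(fd_holds(rows, ("X", "Y"), ("A", "B")))   # True
```

Of course, one satisfying instance is not a proof; the Armstrong-axiom derivation above is what establishes the rule in general.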