Let's say there is a grammar
S -> PQT
R -> T
U -> aU | bX
X -> Y
P -> bQ
Y -> SX | c | X
Q -> aRY
T -> U
There is a loop:
X -> Y
Y -> X
How do I eliminate it when converting to CNF?
I don't think it's fine to add the rule X -> X to the grammar (as in unit elimination), right, because it's basically another loop?
If X -> Y and Y -> X, the two nonterminal symbols are interchangeable, so you can safely replace every instance of one with the other, eliminating one of them completely. As you also pointed out, rules of the form X -> X can then be safely dropped. So this grammar is equivalent to the one you give:
S -> PQT
R -> T
U -> aU | bX
P -> bQ
X -> SX | c
Q -> aRX
T -> U
And so is this one:
S -> PQT
R -> T
U -> aU | bY
P -> bQ
Y -> SY | c
Q -> aRY
T -> U
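This replacement is mechanical, so it can be scripted. Below is a minimal Haskell sketch (the names are mine, not from any library) that renames one symbol of a two-symbol unit cycle to the other across all rules and drops the self-loops this produces:

-- A rule maps a nonterminal to the symbols of one alternative,
-- e.g. Y -> SX is ("Y", ["S", "X"]).
type Rule = (String, [String])

-- Collapse a unit cycle X <-> Y: rename x to y everywhere,
-- then drop the unit self-loops (y -> y) that result.
collapseCycle :: String -> String -> [Rule] -> [Rule]
collapseCycle x y rules =
  [ (lhs', rhs')
  | (lhs, rhs) <- rules
  , let lhs' = rename lhs
  , let rhs' = map rename rhs
  , rhs' /= [lhs']                -- drop self-loops like Y -> Y
  ]
  where
    rename s = if s == x then y else s

Applied with x = "X" and y = "Y" to the rules above, this turns X -> Y and Y -> X into dropped self-loops and rewrites Y -> SX | c | X into Y -> SY | c, giving exactly the second grammar.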
Imagine we have a Context-Free Grammar, CFG, as follows:
S -> a ...(1)
S -> SS ...(2)
And I derive a string in the specified language as follows:
S ..start state
SS ..using (2)
aS ...using (1) <- is this valid, i.e. applying the substitution rule to only 1 variable instead of all identical variables?
So my question is: if I were to apply rule (1) to "SS", do I have the option to apply (1) to only one S of the "SS", or do I have to apply it to both of them?
You can apply the rule to only one S, or as many as you like. Here is a slightly more complicated example that maybe better illustrates the idea:
S -> a (1)
S -> b (2)
S -> SS (3)
So, what strings are in this language? Here are the first few strings with derivations:
a:   S -> a
b:   S -> b
aa:  S -> SS -> aS -> aa
     S -> SS -> Sa -> aa
ab:  S -> SS -> aS -> ab
     S -> SS -> Sb -> ab
ba:  S -> SS -> bS -> ba
     S -> SS -> Sa -> ba
bb:  S -> SS -> bS -> bb
     S -> SS -> Sb -> bb
aaa: S -> SS -> aS -> aSS -> aaS -> aaa
     S -> SS -> aS -> aSS -> aSa -> aaa
     S -> SS -> Sa -> SSa -> aSa -> aaa
     S -> SS -> Sa -> SSa -> Saa -> aaa
aab
...
Anyway, we find that the grammar generates all non-empty strings of a's and b's, including those with mixed up a's and b's. If we had to replace all S's with the same rule, we would get a much, much smaller language, if we'd get a (non-empty) language at all.
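For anyone who wants to check this empirically, here is a small Haskell sketch (mine, not from the answer) that enumerates the terminal strings reachable by rewriting one S at a time, exactly as in the derivations above:

import Data.List (nub)

-- One rewriting step: replace a single occurrence of the nonterminal S
-- by one of the right-hand sides a, b, or SS.
step :: String -> [String]
step w =
  [ pre ++ rhs ++ post
  | (pre, 'S' : post) <- [ splitAt i w | i <- [0 .. length w - 1] ]
  , rhs <- ["a", "b", "SS"]
  ]

-- All terminal strings (over a and b) reachable from S in at most k steps.
language :: Int -> [String]
language k = nub (filter (all (`elem` "ab")) (go k ["S"]))
  where
    go 0 ws = ws
    go n ws = ws ++ go (n - 1) (concatMap step ws)

-- ghci> language 3
-- ["a","b","aa","ab","ba","bb"]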
I have a function like this:
jac :: Int -> Int -> [Int] -> [Int] -> IOArray (Int,Int) Double -> IO Double
jac m k mu nu arr
  | nu!!0 == 0 = return 1
  | length nu > m && nu!!m > 0 = return 0
  | m == 1 = return $ x!!0^(nu!!0) * theproduct (nu!!0)
  | k == 0 && CONDITION = XXX
  | otherwise = YYY
The CONDITION must check that the element (1,1) of the array arr is different from 0. But to get this element, one must do
element <- readArray arr (1,1)
I don't see how to do it, except with unsafePerformIO. Is it safe to use it here? I mean:
| k == 0 && unsafePerformIO (readArray arr (1,1)) /= 0 = XXX
Otherwise, how could I do it?
Let's make a simplified version of your question.
Let's say we want to write the following function, which tells us whether both Int values are equal to 0. The problem is, one of them is wrapped in IO. Your current method is this:
-- THIS IS BAD CODE! This could easily cause unexpected behaviour.
areBothZero :: Int -> IO Int -> IO Bool
areBothZero a b
  | a == 0 && unsafePerformIO b == 0 = return True
  | otherwise = return False
This shows a misunderstanding of monads. In Haskell, unsafePerformIO as a general rule shouldn't be used, unless you want to achieve a certain effect that pure computation cannot achieve. However, this kind of situation is perfectly achievable using the monad operations, which are, unlike unsafePerformIO, perfectly safe.
Here is how we achieve it. Firstly, write the logic outside the context of IO:
areBothZeroLogic :: Int -> Int -> Bool
areBothZeroLogic a b
  | a == 0 && b == 0 = True
  | otherwise = False
Then, we pipe that up to the IO logic we want:
areBothZeroIO :: Int -> IO Int -> IO Bool
areBothZeroIO a mb = do
  b <- mb -- Use do-notation to work with the value 'inside' the IO:
  return $ areBothZeroLogic a b
Immediately, this separates IO logic from pure logic. This is a fundamental design principle in Haskell that you should always try to follow.
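For example (a hypothetical use of the two functions above), the pure part can now be tested with no IO at all, and the wrapper is only needed at the boundary:

main :: IO ()
main = do
  print (areBothZeroLogic 0 0)          -- pure logic, testable directly: True
  result <- areBothZeroIO 0 (return 0)  -- here the IO Int is just 'return 0'
  print result                          -- True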
Now, onto your problem.
Your problem is much more messy and has several other issues, which suggests to me that you haven't considered how best to split the problem up into smaller pieces. However, a better solution may look something like this, maybe with better names:
-- Look here! vvvvvv vvvvvv
jacPure :: Int -> Int -> [Int] -> [Int] -> Double -> Double
jacPure m k mu nu arrVal
  | nu!!0 == 0 = 1
  | length nu > m && nu!!m > 0 = 0
  | m == 1 = x!!0^(nu!!0) * theproduct (nu!!0)
  | k == 0 && arrVal /= 0 = XXX
  | otherwise = YYY
jac :: Int -> Int -> [Int] -> [Int] -> IOArray (Int,Int) Double -> IO Double
jac m k mu nu arr = do
  arrVal <- readArray arr (1,1) -- Use do-notation to work with the value 'inside' the IO:
  return $ jacPure m k mu nu arrVal
You should see immediately why this is much better. When implementing logic, who cares what's going on in the IO domain? Including an IO in what should be pure logic is like telling an author about the acidity of the paper their book will be printed on—it isn't relevant to what their job is. Always separate logic and IO!
There are of course other ways of doing this, and some could very well be better than the way I have suggested. However, it is not possible to know, from the code you have provided, which path would be best. You should aim to learn more about monads and get better at using them, so you can make this judgement on your own.
I suspect this question is borne from a lack of understanding of Monads and monadic operations. If you are a beginner, I recommend reading the relevant LYAH chapter, which I found helpful as a beginner too.
One option is to combine the last two cases:
jac m k mu nu arr
  ...
  | k == 0 = do
      element <- readArray arr (1,1)
      case element of
        0 -> YYY
        _ -> XXX
  | otherwise = YYY
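Note the trade-off: the readArray is performed only in the k == 0 branch, but YYY now has to appear in two places.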
Suppose we have
areBothZero :: Int -> IOArray Int Int -> IO Bool
areBothZero a b
  | a == 0 && unsafePerformIO (readArray b 0) == 0 = return True
  | otherwise = return False
I think it's worth thinking about what can go wrong. Suppose I write
do
  x <- areBothZero a b
  -- Change the value in b[0]
  y <- areBothZero a b
Now there are two identical function calls, so the compiler is perfectly free to rewrite this:
do
  let m = areBothZero a b
  x <- m
  -- change b
  y <- m
The first time we run m, we perform the IO, reading b and getting an action return True or return False. We run that action and bind the result to x. The next time, we already have an action, so we run it, producing the same result. Any change to b is ignored.
This is only one of the ways things can go wrong with unsafePerformIO, so watch out!
I think there are one and a half ways it's reasonable to use unsafePerformIO or (in some cases) unsafeDupablePerformIO routinely. The entirely reasonable one is to wrap an "essentially pure" FFI call that just performs a mathematical calculation in another language. The less reasonable one is to create a global IORef or (more often) MVar. I think this is less reasonable because global variables have a certain tendency to turn out not to be as global as you thought once a year or two has passed.

Most other uses of these unsafe IO operations require very careful thought to get right. These tend to be in libraries like monad-par and reflex that introduce whole new styles of computation to Haskell. They also tend to be subtly buggy, sometimes for years, until someone figures out just what needs to happen to make them right. (Not to toot my own horn too much, but I think I'm probably one of the top handful of people in the world at reasoning about unsafe IO, and I very much prefer to avoid it when possible. This stuff has tripped up some of the best Haskell programmers and most important GHC developers.)
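For reference, the global-variable pattern mentioned above is usually written like the following sketch (a well-known idiom, not code from the question); the NOINLINE pragma is what stops GHC from inlining and thereby duplicating the reference:

import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A hypothetical global mutable counter.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}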
I've found a solution: I pass the value of the array element as an additional argument.
jac :: Int -> Int -> [Int] -> [Int] -> IOArray (Int,Int) Double -> Double -> IO Double
jac m k mu nu arr elt
  | nu!!0 == 0 || m == 0 = return 1
  | length nu > m && nu!!m > 0 = return 0
  | m == 1 = return $ x!!0^(nu!!0) * theproduct (nu!!0)
  | k == 0 && elt /= 0 = XXX
  | otherwise = do
      e <- readArray arr (1, 1)
      jck <- jac (m-1) 0 nu nu arr e
      ......
Maybe my question was not precise enough...
Not terribly elegant, but should do:
jac :: Int -> Int -> [Int] -> [Int] -> IOArray (Int,Int) Double -> IO Double
jac m k mu nu arr
  | nu!!0 == 0 = return 1
  | length nu > m && nu!!m > 0 = return 0
  | m == 1 = return $ x!!0^(nu!!0) * theproduct (nu!!0)
  | otherwise = do
      v <- readArray arr (1,1)
      case () of
        _ | k == 0 && v /= 0 -> XXX
          | otherwise -> YYY
Alternatively, read from the array at the very beginning:
jac :: Int -> Int -> [Int] -> [Int] -> IOArray (Int,Int) Double -> IO Double
jac m k mu nu arr = do
  v <- readArray arr (1,1)
  case () of
    _ | nu!!0 == 0 -> return 1
      | length nu > m && nu!!m > 0 -> return 0
      | m == 1 -> return $ x!!0^(nu!!0) * theproduct (nu!!0)
      | k == 0 && v /= 0 -> XXX
      | otherwise -> YYY
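Note that this version always performs the readArray, even when the earlier guards alone would determine the result; for an IOArray that extra read is harmless, but the change in behaviour is worth keeping in mind.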
How to write arity-generic functions in Agda? Is it possible to write fully dependent and universe polymorphic arity-generic functions?
I'll take an n-ary composition function as an example.
The simplest version
open import Data.Vec.N-ary
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Set γ}
  -> (Y -> Z) -> N-ary n X Y -> N-ary n X Z
comp 0 g y = {!!}
comp (suc n) g f = {!!}
Here is how N-ary is defined in the Data.Vec.N-ary module:
N-ary : ∀ {ℓ₁ ℓ₂} (n : ℕ) → Set ℓ₁ → Set ℓ₂ → Set (N-ary-level ℓ₁ ℓ₂ n)
N-ary zero A B = B
N-ary (suc n) A B = A → N-ary n A B
I.e. comp receives a number n, a function g : Y -> Z, and a function f of arity n whose resulting type is Y.
In the comp 0 g y = {!!} case we have
Goal : Z
y : Y
g : Y -> Z
hence the hole can be easily filled by g y.
In the comp (suc n) g f = {!!} case, N-ary (suc n) X Y reduces to X -> N-ary n X Y and N-ary (suc n) X Z reduces to X -> N-ary n X Z. So we have
Goal : X -> N-ary n X Z
f : X -> N-ary n X Y
g : Y -> Z
C-c C-r reduces the hole to λ x -> {!!}, and now Goal : N-ary n X Z, which can be filled by comp n g (f x). So the whole definition is
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Set γ}
  -> (Y -> Z) -> N-ary n X Y -> N-ary n X Z
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
I.e. comp receives n arguments of type X, applies f to them and then applies g to the result.
The simplest version with a dependent g
When g has type (y : Y) -> Z y
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Y -> Set γ}
  -> ((y : Y) -> Z y) -> (f : N-ary n X Y) -> {!!}
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
what should be in the hole? We can't use N-ary n X Z as before, because Z is a function now. If Z is a function, we need to apply it to something of type Y. But the only way to get something of type Y is to apply f to n arguments of type X, which is just like our comp, but at the type level:
Comp : ∀ n {α β γ} {X : Set α} {Y : Set β}
  -> (Y -> Set γ) -> N-ary n X Y -> Set (N-ary-level α γ n)
Comp 0 Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)
And comp then is
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Y -> Set γ}
  -> ((y : Y) -> Z y) -> (f : N-ary n X Y) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
A version with arguments of different types
There is the "Arity-generic datatype-generic programming" paper, which describes, among other things, how to write arity-generic functions that receive arguments of different types. The idea is to pass a vector of types as a parameter and fold it pretty much in the style of N-ary:
arrTy : {n : ℕ} → Vec Set (suc n) → Set
arrTy {0} (A ∷ []) = A
arrTy {suc n} (A ∷ As) = A → arrTy As
However, Agda is unable to infer that vector, even if we provide its length. Hence the paper also provides a currying operator, which turns a function that explicitly receives a vector of types into a function that receives n implicit arguments.
This approach works, but it doesn't scale to fully universe polymorphic functions. We can avoid all these problems by replacing the Vec datatype with the _^_ operator:
_^_ : ∀ {α} -> Set α -> ℕ -> Set α
A ^ 0 = Lift ⊤
A ^ suc n = A × A ^ n
A ^ n is isomorphic to Vec A n. Then our new N-ary is
_->ⁿ_ : ∀ {n} -> Set ^ n -> Set -> Set
_->ⁿ_ {0} _ B = B
_->ⁿ_ {suc _} (A , R) B = A -> R ->ⁿ B
All types lie in Set for simplicity. comp now is
comp : ∀ n {Xs : Set ^ n} {Y Z : Set}
  -> (Y -> Z) -> (Xs ->ⁿ Y) -> Xs ->ⁿ Z
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
And a version with a dependent g:
Comp : ∀ n {Xs : Set ^ n} {Y : Set}
  -> (Y -> Set) -> (Xs ->ⁿ Y) -> Set
Comp 0 Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)
comp : ∀ n {Xs : Set ^ n} {Y : Set} {Z : Y -> Set}
  -> ((y : Y) -> Z y) -> (f : Xs ->ⁿ Y) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
Fully dependent and universe polymorphic comp
The key idea is to represent a vector of types as nested dependent pairs:
Sets : ∀ {n} -> (αs : Level ^ n) -> ∀ β -> Set (mono-^ (map lsuc) αs ⊔ⁿ lsuc β)
Sets {0} _ β = Set β
Sets {suc _} (α , αs) β = Σ (Set α) λ X -> X -> Sets αs β
The second case reads like "there is a type X such that all other types depend on an element of X". Our new N-ary is trivial:
Fold : ∀ {n} {αs : Level ^ n} {β} -> Sets αs β -> Set (αs ⊔ⁿ β)
Fold {0} Y = Y
Fold {suc _} (X , F) = (x : X) -> Fold (F x)
An example:
postulate
  explicit-replicate : (A : Set) -> (n : ℕ) -> A -> Vec A n

test : Fold (Set , λ A -> ℕ , λ n -> A , λ _ -> Vec A n)
test = explicit-replicate
But what are the types of Z and g now?
comp : ∀ n {β γ} {αs : Level ^ n} {Xs : Sets αs β} {Z : {!!}}
  -> {!!} -> (f : Fold Xs) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
Recall that f previously had type Xs ->ⁿ Y, but Y is now hidden at the end of these nested dependent pairs and can depend on an element of any X from Xs. Z previously had type Y -> Set γ, hence now we need to append Set γ to Xs, making all the xs implicit:
_⋯>ⁿ_ : ∀ {n} {αs : Level ^ n} {β γ}
  -> Sets αs β -> Set γ -> Set (αs ⊔ⁿ β ⊔ γ)
_⋯>ⁿ_ {0} Y Z = Y -> Z
_⋯>ⁿ_ {suc _} (_ , F) Z = ∀ {x} -> F x ⋯>ⁿ Z
OK, so Z : Xs ⋯>ⁿ Set γ; what type does g have? Previously it was (y : Y) -> Z y. Again we need to append something to the nested dependent pairs, since Y is again hidden, only in a dependent way now:
Πⁿ : ∀ {n} {αs : Level ^ n} {β γ}
  -> (Xs : Sets αs β) -> (Xs ⋯>ⁿ Set γ) -> Set (αs ⊔ⁿ β ⊔ γ)
Πⁿ {0} Y Z = (y : Y) -> Z y
Πⁿ {suc _} (_ , F) Z = ∀ {x} -> Πⁿ (F x) Z
And finally
Comp : ∀ n {αs : Level ^ n} {β γ} {Xs : Sets αs β}
  -> (Xs ⋯>ⁿ Set γ) -> Fold Xs -> Set (αs ⊔ⁿ γ)
Comp 0 Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)
comp : ∀ n {β γ} {αs : Level ^ n} {Xs : Sets αs β} {Z : Xs ⋯>ⁿ Set γ}
  -> Πⁿ Xs Z -> (f : Fold Xs) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
A test:
length : ∀ {α} {A : Set α} {n} -> Vec A n -> ℕ
length {n = n} _ = n

explicit-replicate : (A : Set) -> (n : ℕ) -> A -> Vec A n
explicit-replicate _ _ x = replicate x

foo : (A : Set) -> ℕ -> A -> ℕ
foo = comp 3 length explicit-replicate

test : foo Bool 5 true ≡ 5
test = refl
Note the dependency in the arguments and the resulting type of explicit-replicate. Besides, Set lies in Set₁, while ℕ and A lie in Set — this illustrates universe polymorphism.
Remarks
AFAIK, there is no comprehensible theory for implicit arguments, so I don't know how all this stuff will work when the second function (i.e. f) receives implicit arguments. This test:
foo' : ∀ {α} {A : Set α} -> ℕ -> A -> ℕ
foo' = comp 2 length (λ n -> replicate {n = n})
test' : foo' 5 true ≡ 5
test' = refl
passes, at least.
comp can't handle functions where the universe in which some type lies depends on a value. For example,
explicit-replicate' : ∀ α -> (A : Set α) -> (n : ℕ) -> A -> Vec A n
explicit-replicate' _ _ _ x = replicate x
-- ... because this would result in an invalid use of Setω ...
error : ∀ α -> (A : Set α) -> ℕ -> A -> ℕ
error = comp 4 length explicit-replicate'
But this is a general limitation of Agda; e.g., you can't apply the explicit id to itself:
idₑ : ∀ α -> (A : Set α) -> A -> A
idₑ _ _ x = x
-- ... because this would result in an invalid use of Setω ...
error = idₑ _ _ idₑ
The code.
I want to find the minimal cover for the following set of functional dependencies:
A -> BC
B -> C
A -> B
AB -> C
first step: Break down the RHS of each functional dependency into a single attribute:
A -> B
A -> C
B -> C
A -> B
AB -> C
then I will remove one of the two A -> B, so we will get:
A -> B
A -> C
B -> C
AB -> C
second step: Trying to remove unnecessary attributes from the LHS of each functional dependency (whose LHS has 2 or more attributes):
for AB -> C, check if A is necessary by:
replace AB -> C with B -> C so if B+ contains C then A is unnecessary:
B+ = B (B -> C)
= BC (so A is unnecessary)
check if B is necessary by:
replace AB -> C with A -> C so if A+ contains C then B is unnecessary:
A+ = A (A -> B)
= AB (A -> C)
= ABC (so B is unnecessary)
now we have:
A -> B
A -> C
B -> C
third step: Trying to remove unnecessary functional dependencies:
for A -> B check if A+ contains B without using A -> B:
A+ = A (A -> C)
= AC (so A -> B is necessary)
for A -> C check if A+ contains C without using A -> C:
A+ = A (A -> B)
= AB (so A -> C is necessary)
for B -> C check if B+ contains C without using B -> C:
B+ = B (so B -> C is necessary)
now we have:
A -> B
A -> C
B -> C
Finally, group the functional dependencies that have common LHS together:
A -> BC
B -> C
so we can say that these functional dependencies are the minimal cover of the set. Is that true? And how can we deduce the key(s) of the set?
To compute a canonical cover for F:
Use the union rule to replace any dependencies with a common left-hand side.
So combine A -> BC and A -> B into A -> BC.
Set is now {A -> BC, B -> C, AB -> C}
A is extraneous in AB -> C
Check if the result of deleting A from AB -> C is implied by the other dependencies
Yes: in fact, B -> C is already present!
Set is now {A -> BC, B -> C}
C is extraneous in A -> BC
Check if A -> C is logically implied by A -> B and the other dependencies
Yes: using transitivity on A -> B and B -> C.
In more complex cases, one can use the attribute closure of A.
The canonical cover is: A -> B, B -> C
Source: Korth and Sudarshan's DBMS book.
Consider the relation schema R = ABCDEG with the following functional dependencies (FDs):
AB -> C
C -> A
BC -> D
ACD -> B
D -> EG
BE -> C
CG -> BD
CE -> AG
Compute the closures of BD and CA.
How can we find them?
The closure of a set of functional dependencies, F, means all the functional dependencies logically implied by F. For example, given
BC -> D, and
D -> EG
we can apply Armstrong's axioms to derive
D -> E,
D -> G,
BC -> E,
BC -> G,
and so on.
When you've derived every FD implied by F, you have the closure of F with respect to R. In your case, you want to derive every FD logically implied by BD and CA.
As far as I know, every textbook on relational database theory includes one or more algorithms to compute the closure of a set of functional dependencies. Your best bet is to follow one of the algorithms in your textbook, if you have one.
Here is a simple algorithm for computing the closure of a set of attributes X:
Closure(X, F)
1 INITIALIZE V := X
2 WHILE there is a Y -> Z in F such that:
      - Y is contained in V and
      - Z is not contained in V
3 DO add Z to V
4 RETURN V
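As an illustration, here is a small Haskell rendering of that algorithm (my own sketch; attributes are single characters and an FD Y -> Z is the pair (Y, Z)), applied to the FD set from the question:

import Data.List (nub, sort)

type FD = (String, String)  -- Y -> Z represented as (Y, Z)

-- Closure(X, F): grow V from X while some Y -> Z has Y inside V
-- but Z not yet fully inside V.
closure :: String -> [FD] -> String
closure x fds = go (sort (nub x))
  where
    go v =
      case [ z | (y, z) <- fds
               , all (`elem` v) y        -- Y is contained in V
               , any (`notElem` v) z     -- Z is not contained in V
               ] of
        []      -> v
        (z : _) -> go (sort (nub (v ++ z)))  -- add Z to V

-- The dependencies from the question:
fds :: [FD]
fds = [ ("AB","C"), ("C","A"), ("BC","D"), ("ACD","B")
      , ("D","EG"), ("BE","C"), ("CG","BD"), ("CE","AG") ]

-- ghci> closure "BD" fds   -- "ABCDEG": BD determines every attribute
-- ghci> closure "CA" fds   -- "AC": nothing new is derivable from {A,C}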