Can I make a raw datatype from this kind of signature type?

I'd like to split up my definition of monoids into multiple parts:
The signature of monoids
The monoid laws, as a relation
Witnesses of equality for elements that are in this relation
My current idea is to do it like the following:
data MonoidSig A : Type → Type₁ where
  ε₀ : MonoidSig A A
  _⋄₀_ : MonoidSig A (A → A → A)

RawMonoid : Type → Type₁
RawMonoid A = ∀ {B} → MonoidSig A B → B

module _ {A : Type} (rawMonoid : RawMonoid A) where
  private
    ε = rawMonoid ε₀
    _⋄_ = rawMonoid _⋄₀_

  data MonoidLaw : A → A → Type where
    unit-l : ∀ x → MonoidLaw (ε ⋄ x) x
    unit-r : ∀ x → MonoidLaw (x ⋄ ε) x
    assoc : ∀ x y z → MonoidLaw ((x ⋄ y) ⋄ z) (x ⋄ (y ⋄ z))

Lawful : ∀ {A} (raw : RawMonoid A) → Set
Lawful raw = ∀ {x y} → MonoidLaw raw x y → x ≡ y

Monoid : (AIsSet : isSet A) → Type₁
Monoid {A = A} AIsSet = Σ[ raw ∈ RawMonoid A ] (Lawful raw)
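For concreteness, a quick sanity check (a sketch, assuming open import Cubical.Data.Nat): the additive raw monoid on ℕ just interprets the two symbols.

-- hypothetical example instance: ℕ with 0 and _+_
ℕ-rawMonoid : RawMonoid ℕ
ℕ-rawMonoid ε₀ = 0
ℕ-rawMonoid _⋄₀_ = _+_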
Now I'd like to make a datatype for free monoids, as a quotient of the raw syntax by the monoid laws. But I haven't figured out how to get rid of the RawFreeMonoid definition below and build it from MonoidSig somehow:
open import Cubical.HITs.SetQuotients

data RawFreeMonoid A : Type where
  ⟨_⟩ : A → RawFreeMonoid A
  ε : RawFreeMonoid A
  _⋄_ : RawFreeMonoid A → RawFreeMonoid A → RawFreeMonoid A

rawFreeMonoid : (A : Type) → RawMonoid (RawFreeMonoid A)
rawFreeMonoid A ε₀ = ε
rawFreeMonoid A _⋄₀_ = _⋄_

FreeMonoid : Type → Type
FreeMonoid A = RawFreeMonoid A / MonoidLaw (rawFreeMonoid A)
So that is my question: is there a way to define FreeMonoid in this way, without writing out by hand the RawFreeMonoid and rawFreeMonoid definitions?

Nice question! You can do it as follows (where I prefer to use an actual record type instead of an impredicative encoding):
open import Cubical.Data.List

record Signature : Type₁ where
  field
    Sort : Type₀
    Symbol : (domain : List Sort) → (codomain : Sort) → Type₀

data Vector {A : Type₀} (B : A → Type₀) : List A → Type₀ where
  [] : Vector B []
  _∷_ : {x : A} {xs : List A} → B x → Vector B xs → Vector B (x ∷ xs)

module _ (Σ : Signature) where
  open Signature Σ

  data Term : Sort → Type₀ where
    _·_ : {dom : List Sort} {cod : Sort} → (f : Symbol dom cod) → Vector Term dom → Term cod
For any given signature Σ, Term Σ will then be the free Σ-structure. More precisely, for any sort s of Σ, the type Term Σ s will be the type of terms of sort s.
The signature for monoids can be defined as follows:
open import Cubical.Data.Unit

data MonoidSymbol : List Unit → Unit → Type₀ where
  ε₀ : MonoidSymbol [] tt
  ⋄₀ : MonoidSymbol (tt ∷ tt ∷ []) tt

monoidSignature : Signature
monoidSignature = record { Sort = Unit; Symbol = MonoidSymbol }
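For example, the closed term ε ⋄ ε can be written as an element of Term monoidSignature tt (a sketch, assuming the definitions above are in scope; [] and _∷_ below are the Vector constructors):

-- ε ⋄ ε as a term over the monoid signature
εε : Term monoidSignature tt
εε = ⋄₀ · ((ε₀ · []) ∷ ((ε₀ · []) ∷ []))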
Edit in response to the comment: You are right, Term monoidSignature is the free raw monoid, not the free monoid. I put up code for constructing the quotient here. I believe that in this code, the laws are specified in the way you want:
-- `Structure` is defined in the linked code.
module _ (A : Structure monoidSignature) where
  open Structure A

  ε = op ε₀
  _⋄_ = op ⋄₀

  data MonoidLaw : Carrier tt → Carrier tt → Type₀ where
    unitₗ : (x : Carrier tt) → MonoidLaw (ε ⋄ x) x
    unitᵣ : (x : Carrier tt) → MonoidLaw (x ⋄ ε) x
    assoc : (x y z : Carrier tt) → MonoidLaw ((x ⋄ y) ⋄ z) (x ⋄ (y ⋄ z))

Related

How many different minimal covers are possible?

Let R(A,B,C,D,E,F,G) be a relational schema with the following functional dependencies:
AC->G, D->EG, BC->D, CG->BD, ACD->B, CE->AG. The number of different minimal covers possible is ___________?
Actually, in this question we were supposed to find all the possible minimal covers. I used this video.
Using that theory I tried doing this question, but I ended up getting only 2 minimal covers, whereas the answer given in my textbook is 4.
The minimal covers I got are:
1) D->E, D->G, BC->D, CG->D, AC->B (#), CE->A
2) AC->G, D->E, D->G, BC->D, CG->D, CD->B (#), CE->A
Actually the video only gives the standard procedure to FIND a minimal cover, but the problem is a bit tricky as it asks HOW MANY minimal covers we can find. I am applying the algorithm right; the only issue is that I am unable to find how many more minimal covers are possible for the given set of FDs.
A common algorithm to produce a minimal cover consists of three steps:
Split the right part, producing FDs with only one attribute on the right part.
For each left part with more than one attribute, try to remove each attribute in turn and see if the remaining attributes can still determine the right part. If so, eliminate the attribute from the left part.
For each remaining dependency, try to see if it can be eliminated.
In your case the first step produces:
F = { A C → G
A C D → B
B C → D
C G → B
C G → D
C E → A
C E → G
D → E
D → G }
In the second step, we must check the first seven dependencies. For each dependency A₁A₂...Aₙ → B we try to eliminate each Aᵢ in turn and see
if B is included in the closure of the remaining attributes (the closure taken with respect to F). In this case we have two possibilities: from ACD → B we can eliminate either A or D. So we now have two different sets of dependencies:
F1 = { A C → G
C D → B
B C → D
C G → B
C G → D
C E → A
C E → G
D → E
D → G }
and
F2 = { A C → G
A C → B
B C → D
C G → B
C G → D
C E → A
C E → G
D → E
D → G }
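(Both eliminations are justified by a closure computation: B ∈ (C D)⁺, via D → G and then C G → B, so A is extraneous in A C D → B; and B ∈ (A C)⁺, via A C → G and then C G → B, so D is extraneous as well. Only one of the two attributes can be dropped at a time, since B ∉ (C)⁺.)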
Now we can apply the last step: for each dependency X → A, we check whether A is included in the closure X⁺ of X computed with respect to all the other dependencies. If so, we can eliminate that dependency.
The result will depend, in general, on the order in which we apply those checks.
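For example, in F1 the dependency C E → G is redundant, because G is in the closure of C E computed with the other dependencies: C E → A gives A, and then A C → G gives G. Which further dependencies can still be eliminated depends on the choices already made.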
Here there are four different canonical covers:
G1 = { A C → G
B C → D
C G → B
C E → A
D → E
D → G }
G2 = { A C → G
B C → D
C G → D
C E → A
C D → B
D → E
D → G }
G3 = { A C → B
B C → D
C G → B
C E → A
D → E
D → G }
G4 = { A C → B
B C → D
C G → D
C E → A
D → E
D → G }
Note: it is not clear to me if there are other canonical covers.

Is A → B the same as B → A when speaking of functional dependencies?

This might be very simple, but I just had to check with you guys.
When it comes to databases, does the arrow in the literature imply that the reverse also holds?
Meaning, is A → B considered the SAME as B → A, in particular when it comes to databases and functional dependencies?
Please read the reference(s) you were given for FDs (functional dependencies).
An FD is an expression of the form "A → B" for sets of attributes A & B. So if A and B are different, A → B is a different FD from B → A.
For a relation value or variable R, "A → B holds in R" and "A → B in R" say that if two R tuples have the same subtuple for A then they have the same subtuple for B.
Is A → B in R equivalent to B → A in R? If A and B are the same set, then yes. But what if they aren't?
X Y
a 1
b 1
{X} → {Y} holds in that relation value. {X} <> {Y}. Does {Y} → {X} also hold?

Can BCNF decomposition preserve all functional dependencies given F = {AB -> E, BC -> G, C-> BG, CD->A, EC->D, G->CH}?

Given F = {AB -> E, BC -> G, C-> BG, CD->A, EC->D, G->CH}, perform a BCNF decomposition and check whether it preserves all functional dependencies.
The minimal cover is R = {AB->E,C->B,C->G,CD->A,EC->D,G->C,G->H}
I performed a BCNF decomposition on R (it must be performed on the minimal cover) and was left with two dependencies, of which one is preserved and one isn't. In the answers they tell me that all of the dependencies are preserved. Can anyone please confirm this?
ABE, CBG, CDA, CED, GCH are in BCNF, the decomposition is a lossless join and dependency preserving. Relation keys are in bold.
It is always possible to add a new relation in order to preserve a dependency, as long as this new relation is in BCNF.
Starting from the canonical cover, we can see that the determinant of A B → E is not a superkey (its closure is (A B)⁺ = {A, B, E}), and so R can be replaced by:
R1 < (A B E) , { A B → E } >
and:
R2 < (A B C D G H) ,
{ G → C
G → H
C → B
C → G
C D → A
A B C → D } >
In R2 the determinant of G → C is not a superkey and so R2 can be replaced by:
R3 < (B C G H) ,
{ G → C
G → H
C → B
C → G } >
and:
R4 < (A D G) ,
{ D G → A
A G → D } >
So, the final decomposition is:
R1 < (A B E) ,
{ A B → E } >
R3 < (B C G H) ,
{ G → C
G → H
C → B
C → G } >
R4 < (A D G) ,
{ D G → A
A G → D } >
and the dependency:
{ C E → D }
is not preserved.
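As a quick check, the dependencies that hold within the decomposed relations are { A B → E } on R1, { G → C, G → H, C → B, C → G } on R3 and { D G → A, A G → D } on R4; the closure of C E with respect to their union is { B, C, E, G, H }, which does not contain D, so C E → D cannot be derived from the preserved dependencies.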

Arity-generic programming in Agda

How to write arity-generic functions in Agda? Is it possible to write fully dependent and universe polymorphic arity-generic functions?
I'll take an n-ary composition function as an example.
The simplest version
open import Data.Vec.N-ary
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Set γ}
-> (Y -> Z) -> N-ary n X Y -> N-ary n X Z
comp 0 g y = {!!}
comp (suc n) g f = {!!}
Here is how N-ary is defined in the Data.Vec.N-ary module:
N-ary : ∀ {ℓ₁ ℓ₂} (n : ℕ) → Set ℓ₁ → Set ℓ₂ → Set (N-ary-level ℓ₁ ℓ₂ n)
N-ary zero A B = B
N-ary (suc n) A B = A → N-ary n A B
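For instance, N-ary 2 X Y unfolds to X -> X -> Y, the type of binary functions on X with results in Y.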
I.e. comp receives a number n, a function g : Y -> Z, and a function f of arity n whose result type is Y.
In the comp 0 g y = {!!} case we have
Goal : Z
y : Y
g : Y -> Z
hence the hole can be easily filled by g y.
In the comp (suc n) g f = {!!} case, N-ary (suc n) X Y reduces to X -> N-ary n X Y and N-ary (suc n) X Z reduces to X -> N-ary n X Z. So we have
Goal : X -> N-ary n X Z
f : X -> N-ary n X Y
g : Y -> Z
C-c C-r reduces the hole to λ x -> {!!}, and now Goal : N-ary n X Z, which can be filled by comp n g (f x). So the whole definition is
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Set γ}
-> (Y -> Z) -> N-ary n X Y -> N-ary n X Z
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
I.e. comp receives n arguments of type X, applies f to them and then applies g to the result.
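As a quick test (a sketch, assuming Data.Nat and Relation.Binary.PropositionalEquality are open), composing suc with the binary _+_:

-- suc is applied to the result of _+_
comp-test : comp 2 suc _+_ 3 4 ≡ 8
comp-test = refl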
The simplest version with a dependent g
When g has type (y : Y) -> Z y
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Y -> Set γ}
-> ((y : Y) -> Z y) -> (f : N-ary n X Y) -> {!!}
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
what should go in the hole? We can't use N-ary n X Z as before, because Z is now a function. If Z is a function, we need to apply it to something of type Y. But the only way to get something of type Y is to apply f to n arguments of type X, which is just like our comp, but at the type level:
Comp : ∀ n {α β γ} {X : Set α} {Y : Set β}
-> (Y -> Set γ) -> N-ary n X Y -> Set (N-ary-level α γ n)
Comp 0 Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)
And comp then is
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Y -> Set γ}
-> ((y : Y) -> Z y) -> (f : N-ary n X Y) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
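As an illustration with a dependent g (a sketch, assuming Data.Nat, Data.Bool and Data.Vec are open):

-- g builds a vector whose length is the result of f
fillBool : (n : ℕ) -> Vec Bool n
fillBool n = replicate true

fill : (m n : ℕ) -> Vec Bool (m + n)
fill = comp 2 fillBool _+_

Here Y is ℕ, Z is Vec Bool, and Comp 2 Z _+_ unfolds to (m n : ℕ) -> Vec Bool (m + n).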
A version with arguments with different types
There is the "Arity-generic datatype-generic programming" paper, that describes, among other things, how to write arity-generic functions, that receive arguments of different types. The idea is to pass a vector of types as a parameter and fold it pretty much in the style of N-ary:
arrTy : {n : ℕ} → Vec Set (suc n) → Set
arrTy {0} (A ∷ []) = A
arrTy {suc n} (A ∷ As) = A → arrTy As
However, Agda is unable to infer that vector, even if we provide its length. Hence the paper also provides a currying operator, which turns a function that explicitly receives a vector of types into a function that receives n implicit arguments.
This approach works, but it doesn't scale to fully universe polymorphic functions. We can avoid all these problems by replacing the Vec datatype with the _^_ operator:
_^_ : ∀ {α} -> Set α -> ℕ -> Set α
A ^ 0 = Lift ⊤
A ^ suc n = A × A ^ n
A ^ n is isomorphic to Vec A n. Then our new N-ary is
_->ⁿ_ : ∀ {n} -> Set ^ n -> Set -> Set
_->ⁿ_ {0} _ B = B
_->ⁿ_ {suc _} (A , R) B = A -> R ->ⁿ B
All types lie in Set for simplicity. comp now is
comp : ∀ n {Xs : Set ^ n} {Y Z : Set}
-> (Y -> Z) -> (Xs ->ⁿ Y) -> Xs ->ⁿ Z
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
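A usage sketch (assuming Data.Nat, Data.Bool and Relation.Binary.PropositionalEquality are open) where the two argument types differ:

-- f takes a ℕ and a Bool; g = suc is applied to its result
pick : ℕ -> Bool -> ℕ
pick n b = if b then n else 0

hetero : ℕ -> Bool -> ℕ
hetero = comp 2 suc pick

hetero-test : hetero 4 true ≡ 5
hetero-test = refl

Here Xs is inferred to be (ℕ , Bool , _) : Set ^ 2.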
And a version with a dependent g:
Comp : ∀ n {Xs : Set ^ n} {Y : Set}
-> (Y -> Set) -> (Xs ->ⁿ Y) -> Set
Comp 0 Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)
comp : ∀ n {Xs : Set ^ n} {Y : Set} {Z : Y -> Set}
-> ((y : Y) -> Z y) -> (f : Xs ->ⁿ Y) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
Fully dependent and universe polymorphic comp
The key idea is to represent a vector of types as nested dependent pairs:
Sets : ∀ {n} -> (αs : Level ^ n) -> ∀ β -> Set (mono-^ (map lsuc) αs ⊔ⁿ lsuc β)
Sets {0} _ β = Set β
Sets {suc _} (α , αs) β = Σ (Set α) λ X -> X -> Sets αs β
The second case reads like "there is a type X such that all other types depend on an element of X". Our new N-ary is trivial:
Fold : ∀ {n} {αs : Level ^ n} {β} -> Sets αs β -> Set (αs ⊔ⁿ β)
Fold {0} Y = Y
Fold {suc _} (X , F) = (x : X) -> Fold (F x)
An example:
postulate
  explicit-replicate : (A : Set) -> (n : ℕ) -> A -> Vec A n
test : Fold (Set , λ A -> ℕ , λ n -> A , λ _ -> Vec A n)
test = explicit-replicate
But what are the types of Z and g now?
comp : ∀ n {β γ} {αs : Level ^ n} {Xs : Sets αs β} {Z : {!!}}
-> {!!} -> (f : Fold Xs) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
Recall that f previously had type Xs ->ⁿ Y, but Y is now hidden at the end of these nested dependent pairs and can depend on an element of any X from Xs. Z previously had type Y -> Set γ, hence we now need to append Set γ to Xs, making all the xs implicit:
_⋯>ⁿ_ : ∀ {n} {αs : Level ^ n} {β γ}
-> Sets αs β -> Set γ -> Set (αs ⊔ⁿ β ⊔ γ)
_⋯>ⁿ_ {0} Y Z = Y -> Z
_⋯>ⁿ_ {suc _} (_ , F) Z = ∀ {x} -> F x ⋯>ⁿ Z
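For example, for the Xs from the test above, Xs ⋯>ⁿ Set unfolds to ∀ {A : Set} {n : ℕ} {_ : A} -> Vec A n -> Set.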
OK, so Z : Xs ⋯>ⁿ Set γ; what type does g have? Previously it was (y : Y) -> Z y. Again we need to append something to the nested dependent pairs, since Y is again hidden, only now in a dependent way:
Πⁿ : ∀ {n} {αs : Level ^ n} {β γ}
-> (Xs : Sets αs β) -> (Xs ⋯>ⁿ Set γ) -> Set (αs ⊔ⁿ β ⊔ γ)
Πⁿ {0} Y Z = (y : Y) -> Z y
Πⁿ {suc _} (_ , F) Z = ∀ {x} -> Πⁿ (F x) Z
And finally
Comp : ∀ n {αs : Level ^ n} {β γ} {Xs : Sets αs β}
-> (Xs ⋯>ⁿ Set γ) -> Fold Xs -> Set (αs ⊔ⁿ γ)
Comp 0 Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)
comp : ∀ n {β γ} {αs : Level ^ n} {Xs : Sets αs β} {Z : Xs ⋯>ⁿ Set γ}
-> Πⁿ Xs Z -> (f : Fold Xs) -> Comp n Z f
comp 0 g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
A test:
length : ∀ {α} {A : Set α} {n} -> Vec A n -> ℕ
length {n = n} _ = n
explicit-replicate : (A : Set) -> (n : ℕ) -> A -> Vec A n
explicit-replicate _ _ x = replicate x
foo : (A : Set) -> ℕ -> A -> ℕ
foo = comp 3 length explicit-replicate
test : foo Bool 5 true ≡ 5
test = refl
Note the dependency in the arguments and the resulting type of explicit-replicate. Besides, Set lies in Set₁, while ℕ and A lie in Set — this illustrates universe polymorphism.
Remarks
AFAIK, there is no comprehensible theory of implicit arguments, so I don't know how all this will work when the second function (i.e. f) receives implicit arguments. This test:
foo' : ∀ {α} {A : Set α} -> ℕ -> A -> ℕ
foo' = comp 2 length (λ n -> replicate {n = n})
test' : foo' 5 true ≡ 5
test' = refl
is passed at least.
comp can't handle functions where the universe in which some type lies depends on a value. For example
explicit-replicate' : ∀ α -> (A : Set α) -> (n : ℕ) -> A -> Vec A n
explicit-replicate' _ _ _ x = replicate x
... because this would result in an invalid use of Setω ...
error : ∀ α -> (A : Set α) -> ℕ -> A -> ℕ
error = comp 4 length explicit-replicate'
But this is a usual restriction in Agda; e.g. you can't apply the explicitly universe-polymorphic idₑ to itself:
idₑ : ∀ α -> (A : Set α) -> A -> A
idₑ _ _ x = x
-- ... because this would result in an invalid use of Setω ...
error = idₑ _ _ idₑ
The code.

Formally and Informally describe the language of this grammar

I have a question I would like some help with:
Formally and informally describe the language of the following grammar G = (Σ, N, S, P):
Σ = {a,b,c}
N = {S,T,X}
S = S
P = {
S->aTXc,
S->bTc,
T->aTX,
T->bT,
TXX->T,
Tc->empty string,
TXc->a
}
Moreover, briefly and informally explain how this grammar generates its language.
Hint: Use |w|_x notation to describe the language of this grammar.
I believe the language of the grammar
S → bTc | aTXc
T → bT | aTX
TXX → T
Tc → λ
TXc → a
is (a|b)+. First, consider derivations from T using the T-productions:
T ⇒* bⁿT    (applying T → bT n times)
T ⇒* aⁿTXⁿ  (applying T → aTX n times)
where n > 0. Since T → bT and T → aTX can be applied in arbitrary order, it follows that
T ⇒* uTX^(|u|_a)
where u has the form (a|b)+ and |u|_a ≥ 0 is the number of occurrences of a in u. Next, consider the production TXX → T, which removes X's in pairs:
T ⇒* uTX^(|u|_a) ⇒* uTX^1(|u|_a ≡ 1 (mod 2))
where 1(P) = 1 if P holds and 0 otherwise. Putting this together with the S-productions, we have:
S ⇒ bTc ⇒* buTX^1(|u|_a ≡ 1 (mod 2))c = buTc ⇒ bu
S ⇒ aTXc ⇒* auTX^1(|u|_a ≡ 1 (mod 2))Xc = auTXc ⇒ aua
if |u|_a ≡ 0 (mod 2), and
S ⇒ bTc ⇒* buTX^1(|u|_a ≡ 1 (mod 2))c = buTXc ⇒ bua
S ⇒ aTXc ⇒* auTX^1(|u|_a ≡ 1 (mod 2))Xc = auTXXc ⇒ auTc ⇒ au
if |u|_a ≡ 1 (mod 2), where u has the form (a|b)+. In the last step of the preceding derivations, applying further T-productions does not generate strings of a different form. Thus, I believe that all derivable strings have the form (a|b)+. My concern is that you were instructed to use |u|_a notation to describe the grammar's language, so this belief might be in error.

Resources