Understanding "well founded" proofs in Coq - theory

I'm writing a fixpoint that requires an integer to be incremented "towards" zero at every iteration. This is too complicated for Coq to recognize as a decreasing argument automatically, so I'm trying to prove that my fixpoint terminates.
I have been copying (what I believe is) an example of a well-foundedness proof for a step function on Z from the standard library. (Here)
Require Import ZArith.Zwf.
Section wf_proof_wf_inc.
Variable c : Z.
Let Z_increment (z:Z) := (z + ((Z.sgn c) * (-1)))%Z.
Lemma Zwf_wf_inc : well_founded (Zwf c).
Proof.
unfold well_founded.
intros a.
Admitted.
End wf_proof_wf_inc.
which creates the following context:
c : Z
Z_increment := fun z : Z => (z + Z.sgn c * -1)%Z : Z -> Z
a : Z
============================
Acc (Zwf c) a
My question is what does this goal actually mean?
I thought that the goal I'd have to prove would at least involve the step function that I want to show has the "well-founded" property, "Z_increment".
The most useful explanation I have looked at is this but I've never worked with the list type that it uses and it doesn't explain what is meant by terms like "accessible".

Basically, you don't need a well-foundedness proof; you just need to prove that your function decreases the (natural number) abs(z). More concretely, you can implement abs (z:Z) : nat := z_to_nat (z * Z.sgn z) (with some appropriate conversion to nat) and then use this as a measure with Function, something like Function foo z {measure abs z} := ....
The well-founded business is for showing relations are well-founded: the idea is that you can prove your function terminates by showing it "decreases" some well-founded relation R (think of it as <); that is, the definition of f x makes recursive subcalls f y only when R y x. For this to work, R has to be well-founded, which intuitively means it has no infinite descending chains. CPDT's general recursion chapter has a really good explanation of how this really works.
How does this relate to what you're doing? The standard library proves that, for all lower bounds c, x < y is a well-founded relation on Z if additionally it's only applied to y >= c. I don't think this applies to you - instead you move towards zero, so you can just decrease abs z with the usual < relation on nats. The standard library already has a proof that this relation is well-founded, and that's what Function ... {measure ...} uses.
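To make that concrete, here is a minimal sketch of the measure approach (the function name countdown and the measure zabs are made up for illustration; with a recent Coq, lia can usually discharge the decrease obligation, otherwise it has to be proved by hand):

```coq
Require Import ZArith Recdef Lia.
Open Scope Z_scope.

(* |z| as a natural number: the decreasing measure. *)
Definition zabs (z : Z) : nat := Z.to_nat (Z.abs z).

(* Hypothetical fixpoint stepping z one unit toward zero. *)
Function countdown (z : Z) {measure zabs z} : nat :=
  if z =? 0 then 0%nat
  else S (countdown (z - Z.sgn z)).
Proof.
  (* obligation: (zabs (z - Z.sgn z) < zabs z)%nat when (z =? 0) = false *)
  intros z Hz; apply Z.eqb_neq in Hz; unfold zabs; lia.
Defined.
```

Note that the goal Acc (Zwf c) a in your attempt doesn't mention Z_increment at all: well-foundedness is a property of the relation Zwf c alone, and with the measure approach that proof is already done for you in the standard library.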


Symbolic Math with n

Suppose we want to find summation formula for a sequence. The easiest one would be
x_1 = 1; x_2 = 2; ...; x_n = n; ...
We all know the sum of the first n items is (n+1)n/2.
My question is how to find the last formula using symbolic calculation with Sympy or Matlab or any other software. The difficulty I have is how to deal with n. For example, if each entry in a sequence can be written as a function of n, such as a_n = n^2, where n = 1, 2, ..., how do I use symbolic calculation to get a formula for a_1 + a_2 + ... + a_n? Note I want a formula in terms of a general n without specifying a value for n. Is this even possible? If so, how? Thank you!
The answer depends on the syntax of your symbolic math software. For example, here is the solution using Wolfram Alpha: Sum[i, {i, 1, n}]
- n is just n - that's why it is called symbolic.
As @ChistianFries says, dealing with an arbitrary symbolic variable like your n is what symbolic math is all about.
In Matlab, you should start by reading up on the Symbolic Math toolbox and (optionally) MuPAD, which is a separate environment a bit like Mathematica. Symbolic summation can be accomplished in Matlab via symsum:
syms x n;
symsum(x,1,n)
which returns (n*(n + 1))/2 as expected. I recommend reading the documentation for symsum, and the other symbolic math functions, and trying out the examples provided.
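Since the question also mentions Sympy, here is the equivalent in Python (symbol names are arbitrary):

```python
from sympy import symbols, summation, factor

i, n = symbols('i n', integer=True, positive=True)

# Sum of the first n integers, with n kept symbolic
s1 = summation(i, (i, 1, n))
print(factor(s1))        # n*(n + 1)/2

# The same works for a general term, e.g. a_n = n^2
s2 = summation(i**2, (i, 1, n))
print(factor(s2))
```

Declaring n as a positive integer symbol is what lets summation return a closed form in terms of n rather than a numeric value.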

Difference between Completely Non Trivial Dependency and Non Trivial Dependency

What is the difference between Non Trivial Functional Dependencies and Completely non trivial Dependencies?
As per my search, I found a confusing difference between the two. I visited http://www.tutorialspoint.com/dbms/database_normalization.htm
According to this,
Non-trivial: If an FD X → Y holds where Y is not subset of X, then it is called non-trivial FD.
Completely non-trivial: If an FD X → Y holds where X ∩ Y = ∅, it is said to be a completely non-trivial FD.
If Y is not a subset of X, and X ∩ Y = ∅ - don't they point towards the same thing?
Example: X = {1,2,3,4}, Y = {5,6}
Here we see Y is not a subset of X and also Y ∩ X = ∅.
Then, My question is what is the reason behind saying difference between non trivial dependency and Completely non Trivial Dependency? If there is any difference, then what is that exact one. Please suggest me. I googled it, but not found the satisfactory one.
Suppose the following case:
P = { R1, R2 } and Q = { R1, R3 }
Then,
P->Q is non-trivial.
P->Q isn't completely non-trivial (because the intersection of P and Q is { R1 }).
Don't think of X and Y as sets directly. They are attribute sets of a table. What that means is that:
when AB->BC, it is a non-trivial FD (here you can see that B is common to both sides; that doesn't happen in a completely non-trivial FD),
i.e.
when AB->CD, the intersection of AB and CD is empty, unlike in the above case. This is the basic difference between non-trivial and completely non-trivial.
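The distinction is easy to check mechanically; here is a small Python sketch (the function name classify_fd is made up) that classifies an FD X → Y by exactly these set relations:

```python
def classify_fd(X, Y):
    """Classify the FD X -> Y by the set relation between the
    attribute sets X and Y (textbook classification)."""
    X, Y = set(X), set(Y)
    if Y <= X:
        return "trivial"                 # Y is a subset of X
    if X & Y:
        return "non-trivial"             # Y not a subset of X, but X and Y overlap
    return "completely non-trivial"      # X and Y are disjoint

print(classify_fd("AB", "B"))    # trivial
print(classify_fd("AB", "BC"))   # non-trivial
print(classify_fd("AB", "CD"))   # completely non-trivial
```

So every completely non-trivial FD is also non-trivial, but not the other way around: AB → BC is the counterexample.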

Viola-Jones weak classifier explanation

I have been trying to understand the paper by Viola and Jones on face detection. I am not totally sure what this equation's parameters mean, from section 3:
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise
What I understood is that the feature value f(x) is obtained by running any of those 5 basic feature types explained in the beginning of the paper over the integral image of x.
What I can't understand properly is the threshold θ and the polarity p. Does this p mean positive image versus negative image, and can it have the value +1 or -1? And how do I calculate θ? This equation is vital to the boosting section, so I can't go further. Please help if I am making myself clear enough.
You must understand that the weak classifier h uses a Haar-like feature f to classify an image subwindow x. The parameter p, if equal to -1, simply inverts the comparison sign in the condition p·f(x) < p·θ.
The parameter θ is simply a threshold. Say, for instance, that p = +1. If f(x) < θ, then h(x, f, p, θ) = 1, i.e., the weak classifier considers x a face.
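As a quick sketch in Python (the feature values below are arbitrary numbers, not real Haar responses):

```python
def weak_classifier(f_x, theta, p):
    """Viola-Jones weak classifier: h = 1 (face) iff p * f(x) < p * theta.
    p is +1 or -1 and only flips the direction of the comparison."""
    return 1 if p * f_x < p * theta else 0

# With p = +1, feature values below the threshold are classified as faces:
assert weak_classifier(0.3, 0.5, +1) == 1
assert weak_classifier(0.7, 0.5, +1) == 0
# With p = -1, the same threshold accepts values above it instead:
assert weak_classifier(0.7, 0.5, -1) == 1
```

During training, θ (and p) for each feature are chosen to minimize the weighted classification error on the training set; that is what the boosting section does.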

How can I prove if this language is regular or not?

How can I prove if this language is regular or not?
L = {a^n b^n : n ≥ 1} ∪ {a^n b^(n+2) : n ≥ 1}
I'll give an approach and a sketch of a proof; there might be some holes in it that I believe you can fill yourself.
The idea is to use the Myhill-Nerode theorem: show that there is an infinite number of equivalence classes for R_L, and from the theorem derive that the language is not regular.
Define two types of sets:
G_j = { a^n b^k | n − k = j, k ≥ 1 } for each j in [-2, -1, 0, 1, ...]
H_j = { a^j } for each j in [0, 1, ...]
G_illegal = {a,b}* \ (G_j ∪ H_j) [union over all j in the specified ranges]
It is easy to see that for each x in G_illegal, and for each z in {a,b}*: xz is not in L.
So, for every x,y in G_illegal and for each z in {a,b}*: xz in L <-> yz in L.
Also, for each z in {a,b}* - and for each x,y in some G_j [same j for both]:
if z contains an a, both xz and yz are not in L (x already ends with at least one b);
if z = b^j, then xz = a^n b^k b^j = a^n b^(k+j), and since k + j = n, xz is in L. The same applies for y, so yz is in L;
if z = b^(j+2), then xz = a^n b^k b^(j+2) = a^n b^(k+j+2), and since k + j + 2 = n + 2, xz is in L. The same applies for y, so yz is in L;
otherwise, z is b^i with i ≠ j and i ≠ j + 2, and you get that both xz and yz are not in L.
So, for every j and for every x,y in G_j and for each z in {a,b}*: xz in L <-> yz in L.
Prove the same for every H_j using the same approach.
Also, it is easy to show that for each x in G_j ∪ H_j, and for each y in G_illegal: for z = b^j, xz is in L and yz is not in L.
For x in G_j and y in H_i, take z = a·b^(i+1): xz is not in L (it has an a after a b), while yz = a^(i+1) b^(i+1) is in L.
For x, y in G_j and G_i respectively (j ≠ i), or x, y in H_j, H_i: take z = b^j, so xz is in L; yz is in L only in the corner case i = j − 2, and in that case z = b^i separates them instead (yz is in L while xz is not).
We just proved that the sets we created are pairwise distinguishable, i.e., they lie in distinct equivalence classes of the relation R_L from the Myhill-Nerode theorem. Since there are infinitely many of them [we have H_j and G_j for every j], the theorem tells us that the language L is not regular.
You could also use the pumping lemma for regular languages. It says that if a language is regular, then there is an integer n such that every string s in the language with |s| ≥ n can be partitioned as s = xyz with |xy| ≤ n and |y| > 0, so that xy^iz stays in the language for every i ≥ 0. So if you can exhibit a string for which every such partition has some i with xy^iz not in the language, the language is not regular.
The proof goes like this, kind of an adversary argument. Suppose someone tells you that this language is regular. Then ask them for a number n > 0. You build a convenient string of length greater than n, and you give it to the adversary. They partition the string into x, y, z in any way they want, as long as |xy| ≤ n and |y| > 0. Then you have to pump y (repeat it i times) until you find a string that is not in the language.
In this case, I tell you: give me n. You fix n. Then I take the string "a^n b^{n+2}" and tell you to split it. In any way you split this string, you will always have y = a^k with k > 0, since you are forced to make |xy| ≤ n and my string begins with a^n. Here is the trick: you give the adversary a string such that, whichever way it is split, it yields a part you can pump. So now we pump y down (i = 0), and you get "a^m b^{n+2}" with m = n − k < n, which is not in your language. Done. (Note that pumping up is not always safe here: i = 1 just returns the original string, and if k = 2 then i = 2 gives a^{n+2} b^{n+2}, which is in L; pumping down is the choice that works for every split.)
The proof of this theorem goes roughly as follows: if you have a regular language, then you have an automaton with n states, for some fixed n. If a string has more than n characters, then its run must go through some cycle in the automaton. If we name x the part of the string consumed before entering the cycle and y the part consumed in the cycle, it's clear that we can pump y as many times as we want, because we can keep running around the cycle as many times as we want, and the resulting string has to be in the language, because it is recognized by that automaton. To use the lemma to prove non-regularity, since we don't know what the supposed automaton looks like, we leave to the adversary the choice of n and of the position of the cycle inside the automaton (there will be no automaton, but you say to the adversary something like: dare to give me an automaton and I will show you it cannot exist).
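You can sanity-check the pumping-down argument with a short Python script (the membership test in_L is written directly from the definition of L):

```python
def in_L(s):
    """Membership in L = {a^n b^n : n >= 1} ∪ {a^n b^(n+2) : n >= 1}."""
    a = len(s) - len(s.lstrip('a'))      # length of the leading a-block
    b = len(s) - len(s.rstrip('b'))      # length of the trailing b-block
    if a + b != len(s) or a < 1:         # must be all a's followed by all b's
        return False
    return b == a or b == a + 2

n = 10
s = 'a' * n + 'b' * (n + 2)
assert in_L(s)
# Every split s = xyz with |xy| <= n and |y| > 0 makes y a block of a's;
# pumping down (i = 0) then removes only a's and leaves the language.
for xy in range(1, n + 1):
    for ylen in range(1, xy + 1):
        x, y, z = s[:xy - ylen], s[xy - ylen:xy], s[xy:]
        assert set(y) == {'a'}
        assert not in_L(x + z)
```

The exhaustive loop over all legal splits is exactly the adversary's freedom in the argument above: no matter how the string is partitioned, xz falls out of L.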

What does 'zero of the function will be found within the precision limit ϵ = 10⁻³' mean in this?

Well, the question is: "Write a C code that finds a zero of a function y = ax + b, without solving the equation. The zero will be found within the precision limit ϵ = 10⁻³. You'll start at x = 0, and move x in the proper direction until |y| < ϵ."
I'm a newbie, to programming, and don't know anything about this ϵ thing either.
Help me out!!
It means you have to solve the inequality |ax+b| < 10^-3 by trying different values for x.
Since this is a linear function it's easy. Start with some value of x and then increase or decrease it depending on the result of ax + b: if you move in one direction and the result gets further away from zero, you should follow the opposite direction.
You will have to develop an algorithm that decides the increments/decrements of x.
|y| < 10⁻³, or well, -0.001 < y < 0.001.
You must increase or decrease x (starting from 0, as you've said) in order to make y take a value between -0.001 and 0.001.
About ϵ, a.k.a. epsilon: it is used to denote a very small value. For this problem, ϵ denotes a tolerance value, since y is not required to take a strict value of 0.
