How can I prove if this language is regular or not? - dfa

How can I prove if this language is regular or not?
L = {a^n b^n : n ≥ 1} ∪ {a^n b^(n+2) : n ≥ 1}

I'll give an approach and a sketch of a proof; there might be some holes in it that I believe you can fill in yourself.
The idea is to use the Myhill-Nerode theorem: show that the relation R_L has infinitely many equivalence classes, and from the theorem you can derive that the language is not regular.
Define two types of sets:
G_j = {a^n b^k | n - k = j, k ≥ 1} for each j in [-2, -1, 0, 1, ...]
H_j = {a^j} for each j in [0, 1, ...]
G_illegal = {a, b}* \ (the union of all the G_j and H_j above)
It is easy to see that for each x in G_illegal, and for each z in {a,b}*: xz is not in L.
So, for every x,y in G_illegal and for each z in {a,b}*: xz in L <-> yz in L.
Also, for each z in {a,b}* and for each x, y in the same G_j (write x = a^n b^k and y = a^m b^l, with n - k = m - l = j):
if z contains an a, both xz and yz are not in L;
if z = b^j, then xz = a^n b^(k+j), and since k + j = n, xz is in L. The same applies to y, so yz is in L;
if z = b^(j+2), then xz = a^n b^(k+j+2), and since k + j + 2 = n + 2, xz is in L. The same applies to y, so yz is in L;
otherwise z = b^i with i ≠ j and i ≠ j + 2, and both xz and yz are not in L.
So, for every j, for every x, y in G_j and for each z in {a,b}*: xz in L <-> yz in L.
Prove the same for every H_j using the same approach.
Also, it is easy to show that for each x in G_j ∪ H_j and for each y in G_illegal: taking z = b^(j+2) (or z = ab when x is in H_0), xz is in L while yz is not in L.
For x in G_j and y in H_i, take z = a b^(i+1): it is easy to see that xz is not in L (it has an a after a b), while yz = a^(i+1) b^(i+1) is in L.
It is also easy to separate distinct classes within each family: for x in G_j and y in G_i with i ≠ j, the set of exponents s ≥ 0 for which x b^s is in L is {j, j+2} (restricted to non-negative s), and for y it is {i, i+2}; these sets differ whenever i ≠ j, so some z = b^s is accepted after one of them but not after the other. The same argument works for x in H_j and y in H_i with i ≠ j.
We just proved that the sets we created are exactly the equivalence classes of the relation R_L from the Myhill-Nerode theorem, and since there are infinitely many of them [we have an H_j and a G_j for every j], the theorem lets us conclude that the language L is not regular.

You could just use the pumping lemma for regular languages. It says that if L is regular, then there is an integer n such that every string w in L with |w| ≥ n can be split as w = xyz with |xy| <= n and |y| > 0, and xy^iz stays in L for every i ≥ 0. So if, no matter how such a split is made, you can find an i for which xy^iz is not in the language, the language is not regular.
The proof goes like this, kind of an adversary argument. Suppose someone tells you that this language is regular. Then ask him for a number n > 0. You build a convenient string of length greater than n, and you give it to the adversary. He partitions the string into x, y, z in any way he wants, as long as |xy| <= n and |y| > 0. Then you have to pump y (repeat it i times) until you find a string that is not in the language.
In this case, I tell you: give me n. You fix n. Then I tell you: take the string "a^n b^{n+2}", and split it. In any way you split this string, you will always end up with y = a^k for some k > 0, since you are forced to make |xy| <= n and my string begins with a^n. Here is the trick: you give the adversary a string such that, no matter how he splits it, he hands you a part that you can pump out of the language. So now we pump y zero times (i = 0), and you get "a^m b^{n+2}" with m < n, which is not in your language. Done. (For this particular string pumping down is the safe choice; pumping up does not always leave the language, e.g. repeating a y of length 2 once more gives a^{n+2} b^{n+2}, which is back in L.)
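Written out compactly (the same argument as above in LaTeX notation, with n the pumping constant the adversary picked):
\[
w = a^n b^{\,n+2} \in L,\qquad w = xyz,\ |xy| \le n,\ |y| > 0 \;\Longrightarrow\; y = a^k \text{ for some } 1 \le k \le n,
\]
\[
xy^0z = a^{\,n-k}\, b^{\,n+2} \notin L \quad\text{since } n+2 \ne n-k \text{ and } n+2 \ne (n-k)+2 \text{ when } k \ge 1.
\]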
The proof of this theorem goes roughly like this: if you have a regular language, then you have an automaton with n states for some fixed n. If an accepted string has more than n characters, its run must go through some cycle in the automaton. If we call x the part of the string read before entering the cycle and y the part read around the cycle, it is clear that we can pump y as many times as we want, because we can go around the cycle as many times as we want, and the resulting string is still accepted by that automaton, so it is still in the language. To use the theorem to prove non-regularity, since we don't know what the supposed automaton looks like, we have to leave the choice of n (and of where the cycle sits inside the automaton) to the adversary. There is no actual automaton, but you are saying to the adversary: dare to give me an automaton and I will show you it cannot exist.

Related

Understanding "well founded" proofs in Coq

I'm writing a fixpoint that requires an integer to be incremented "towards" zero at every iteration. This is too complicated for Coq to recognize as a decreasing argument automatically, and I'm trying to prove that my fixpoint will terminate.
I have been copying (what I believe is) an example of a well-foundedness proof for a step function on Z from the standard library. (Here)
Require Import ZArith.Zwf.
Section wf_proof_wf_inc.
Variable c : Z.
Let Z_increment (z:Z) := (z + ((Z.sgn c) * (-1)))%Z.
Lemma Zwf_wf_inc : well_founded (Zwf c).
Proof.
unfold well_founded.
intros a.
Qed.
End wf_proof_wf_inc.
which creates the following context:
c : Z
wf_inc := fun z : Z => (z + Z.sgn c * -1)%Z : Z -> Z
a : Z
============================
Acc (Zwf c) a
My question is what does this goal actually mean?
I thought that the goal I'd have to prove for this would at least involve the step function that I want to show has the "well founded" property, "Z_increment".
The most useful explanation I have looked at is this but I've never worked with the list type that it uses and it doesn't explain what is meant by terms like "accessible".
Basically, you don't need to do a well founded proof, you just need to prove that your function decreases the (natural number) abs(z). More concretely, you can implement abs (z:Z) : nat := z_to_nat (z * Z.sgn z) (with some appropriate conversion to nat) and then use this as a measure with Function, something like Function foo z {measure abs z} := ....
The well-founded business is for showing relations are well-founded: the idea is that you can prove your function terminates by showing it "decreases" some well-founded relation R (think of it as <); that is, the definition of f x makes recursive subcalls f y only when R y x. For this to work R has to be well-founded, which intuitively means it has no infinite descending chains. CPDT's general recursion chapter has a really good explanation of how this really works.
How does this relate to what you're doing? The standard library proves that, for every lower bound c, x < y is a well-founded relation on Z if it is additionally only applied to y >= c. I don't think this applies to you - instead you move towards zero, so you can just decrease abs z with the usual < relation on nats. The standard library already has a proof that this relation is well-founded, and that's what Function ... {measure ...} uses.

How to solve system of particular equations in C (probably using cycles)

I've run into a problem while working on my school project in C.
I am working with two equations:
x = a + u*i
y = b + v*j
The values of a, b, u, v are known. I need to multiply u and v by natural numbers i and j until the equality x = y occurs.
(Note: i and j do not have to have the same value; they may be different numbers when x = y.)
And here comes the problem: I don't know how to solve the equations when i and j differ. I suppose a possible solution uses loops, or Diophantine equations (but I don't know how to apply them to the equations above).
I am a beginner and stuck. Can someone help me with a solution in C code, please? Thank you.
EDIT
If I just want to solve the equations with loops, I think I know what I should do, but I just don't know how to write it in C (a sketch is below):
1) At every step of the loop, compute x and y.
2) If x is lower than y, increase i by 1; otherwise increase j by 1.
3) Repeat until x = y.
4) Sometimes there is no solution at all, so I have to add a condition so that it doesn't run forever.
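Here is a minimal C sketch of exactly that loop, under the assumption that a, b, u, v are integers with u > 0 and v > 0 (the function and variable names are made up for illustration):
#include <stdio.h>

/* Advance i or j, whichever side is behind, until a + u*i == b + v*j
   or an iteration cap is hit. Returns 1 on success, 0 otherwise. */
int find_ij(long a, long b, long u, long v, long max_iter, long *i_out, long *j_out) {
    long i = 1, j = 1;                       /* natural numbers start at 1 */
    for (long iter = 0; iter < max_iter; iter++) {
        long x = a + u * i;
        long y = b + v * j;
        if (x == y) { *i_out = i; *j_out = j; return 1; }   /* solution found */
        if (x < y) i++; else j++;            /* advance the smaller side */
    }
    return 0;                                /* no solution within the cap */
}

int main(void) {
    long i, j;
    if (find_ij(2, 5, 3, 6, 1000000, &i, &j))
        printf("i = %ld, j = %ld\n", i, j);
    else
        printf("no solution found\n");
    return 0;
}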
Rewriting your equations, you get:
a + u i = b + v j
With the solution
i = (b + v j - a) / u
Now you need to find a natural number j, such that i becomes natural. Obviously, b + v j - a has to be a positive multiple of u. So:
b + v j - a = k * u, k \in N
j = (k * u + a - b) / v
Now k * u + a - b must be a multiple of v. This is not always possible. The easiest way to check is to iterate over the first few possible values of k and see where that gets you. If you get a natural number, you can plug the resulting j into the equation above and you are guaranteed to get a natural i.
This assumes that x, y, a, b, u, and v are real numbers. If they are integers, you can get a bit further than that (the equation then becomes a linear Diophantine equation in i and j).
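A small C sketch of this search, assuming all quantities are integers with u > 0 and v > 0 (the names are illustrative, and max_k bounds the search as suggested above):
/* Find k such that k*u + a - b is a positive multiple of v, then recover j and i. */
int solve(long a, long b, long u, long v, long max_k, long *i_out, long *j_out) {
    for (long k = 1; k <= max_k; k++) {
        long num = k * u + a - b;            /* must equal v * j */
        if (num > 0 && num % v == 0) {
            long j = num / v;
            long i = (b + v * j - a) / u;    /* equals k by construction */
            if (i >= 1 && j >= 1) { *i_out = i; *j_out = j; return 1; }
        }
    }
    return 0;                                /* nothing found up to max_k */
}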

C Program to detect right angled triangles

I am given 100 points in the coordinate plane, and I have to find whether there exists a right-angled triangle among those vertices.
Is there a way to detect a right-angled triangle among those vertices without having to test every triple of vertices and apply the Pythagorean theorem to it?
Can there be a better algorithm for this?
Thanks for any help. :)
Here's an O(n^2 log n)-time algorithm for two dimensions only. I'll describe what goes wrong in higher dimensions.
Let S be the set of points, assumed to have integer coordinates. For each point o in S, construct the set of nonzero vectors V(o) = {p - o | p in S - {o}} and test whether V(o) contains two orthogonal vectors in O(n log n) time, as follows.
Method 1: canonize each vector (x, y) to (x/gcd(x, y), y/gcd(x, y)), where |gcd(x, y)| is the largest integer that divides both x and y, and where gcd(x, y) is negative if y is negative, positive if y is positive, and |x| if y is zero. (This is very similar to putting a fraction in lowest terms.) The key fact about two dimensions is that, for each nonzero vector, there exists exactly one canonical vector orthogonal to that vector, specifically, the canonization of (-y, x). Insert the canonization of each vector in V(o) into a set data structure and then, for each vector in V(o), look up its canonical orthogonal mate in that data structure. I'm assuming that the gcd and/or set operations take time O(log n).
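A short C sketch of just the canonization helper from Method 1 (the set data structure and the loop over the points o are omitted; integer coordinates are assumed, as above):
#include <stdlib.h>

/* Positive gcd of |a| and |b| (Euclid's algorithm). */
static long gcd_abs(long a, long b) {
    a = labs(a); b = labs(b);
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a;
}

/* Canonize the nonzero vector (x, y) in place, with the sign convention
   described above: the divisor is negative if y < 0, positive otherwise. */
void canonize(long *x, long *y) {
    long g = gcd_abs(*x, *y);
    if (*y < 0) g = -g;
    *x /= g;
    *y /= g;
}

/* The canonical orthogonal mate of a vector (x, y) is then canonize(-y, x). */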
Method 2: define a comparator on vectors as follows. Given vectors (a, b), (c, d), write (a, b) < (c, d) if and only if
s1 s2 (a d - b c) < 0,
where
s1 = -1 if b < 0 or (b == 0 and a < 0), and 1 otherwise
s2 = -1 if d < 0 or (d == 0 and c < 0), and 1 otherwise.
Sort the vectors using this comparator. (This is very similar to comparing the fraction a/b with c/d.) For each vector (x, y) in V(o), binary search for its orthogonal mate (-y, x).
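The same comparator written as a C callback for qsort (a sketch; each vector is stored as long[2], and possible overflow of the cross product is ignored for brevity):
/* s = -1 for vectors in the "lower" half-plane (b < 0, or b == 0 and a < 0),
   and +1 otherwise, exactly as defined above. */
static int half_plane(long a, long b) {
    return (b < 0 || (b == 0 && a < 0)) ? -1 : 1;
}

/* Order (a, b) before (c, d) iff s1 * s2 * (a*d - b*c) < 0. */
int cmp_vec(const void *pa, const void *pb) {
    const long *u = pa, *v = pb;
    long s1 = half_plane(u[0], u[1]);
    long s2 = half_plane(v[0], v[1]);
    long t = s1 * s2 * (u[0] * v[1] - u[1] * v[0]);
    return (t < 0) ? -1 : (t > 0 ? 1 : 0);
}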
In three dimensions, the set of vectors orthogonal to the unit vector along the z-axis is the entire x-y-plane, and the equivalent of canonization fails to map all vectors in this plane to one orthogonal mate.

Viola jones weak classifier explanation

I have been trying to understand the paper by Viola and Jones on face detection. I am not totally sure what the parameters of this equation from section 3 mean:
h(x, f, p, theta) = 1 if p·f(x) < p·theta (and 0 otherwise)
What I understood is that the feature value f is obtained by evaluating one of the 5 basic feature types explained at the beginning of the paper over the integral image of x.
What I can't understand properly are the threshold 'theta' and the polarity 'p'. Does this p mean positive image and negative image, and can it have the values +1 or -1? And how do I calculate theta? This equation is vital to the boosting section, so I can't go further. Please help if I am making myself clear enough.
You must understand that the weak classifier h uses a Haar-like feature f to classify an image subwindow x. The parameter p, if equal to -1, simply inverts the direction of the inequality in the condition p·f(x) < p·theta.
The parameter theta is simply a threshold. Say, for instance, that p = +1. If f(x) < theta, then h(x, f, p, theta) = +1, i.e., the weak classifier considers x a face.
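As a tiny illustration, the weak classifier is nothing more than a thresholded comparison. A sketch in C (f_x stands for the already-computed feature value f(x); the names here are made up for illustration):
/* Weak classifier h(x, f, p, theta): returns 1 ("face") when p*f(x) < p*theta,
   and 0 otherwise. f_x is the Haar-like feature value f(x) computed from the
   integral image of subwindow x; p is +1 or -1; theta is the threshold
   chosen during training. */
int weak_classifier(double f_x, int p, double theta) {
    return (p * f_x < p * theta) ? 1 : 0;
}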

Implementing sum (i=0 to k) of X^i * Y in CUDA C

I'm looking for tips or research papers that will help me perform the sum (i=0 to k) of X^i * Y, or more explicitly, Y + X^1 * Y + ... + X^k * Y in CUDA C, where X is an N-by-N matrix and Y is an N-by-1 vector.
You should check out Thrust.
Factor out the Y, then just do a scan (using multiplication as the operator) followed by a reduction (using addition as the operator).
I know this isn't what you're looking for, but can't you factor the Y out and just right multiply it with the result of sum(i=0 to k) of X^i?
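Neither suggestion spells the evaluation out, so here is a plain C sketch (not CUDA) of one way to compute the sum without ever forming any X^i explicitly, using Horner's rule S = Y + X(Y + X(... (Y + X·Y))); each iteration is a single matrix-vector product, which is the piece you would replace with a CUDA kernel or a cuBLAS gemv call:
#include <stdlib.h>

/* Compute S = sum_{i=0..k} X^i * Y by Horner's rule.
   X is n-by-n in row-major order; Y and S are length-n vectors. */
void horner_sum(int n, int k, const double *X, const double *Y, double *S) {
    double *tmp = malloc((size_t)n * sizeof *tmp);
    if (!tmp) return;
    for (int r = 0; r < n; r++) S[r] = Y[r];              /* S = Y  (the i = 0 term) */
    for (int step = 0; step < k; step++) {
        for (int r = 0; r < n; r++) {                     /* tmp = X * S */
            double acc = 0.0;
            for (int c = 0; c < n; c++) acc += X[r * n + c] * S[c];
            tmp[r] = acc;
        }
        for (int r = 0; r < n; r++) S[r] = Y[r] + tmp[r]; /* S = Y + X*S */
    }
    free(tmp);
}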
Besides factoring out Y from the summation, you could compute the eigendecomposition of X and subsequently compute each X^i very efficiently (the slowest part of computing your summation will undoubtedly be raising X to a range of powers, so I'll attack that).
More specifically, compute the eigenvectors of X and form the matrix Q whose columns are those eigenvectors (this assumes X is diagonalizable). Then X can be diagonalized into a matrix D, whose diagonal holds the eigenvalues, such that
(1) D = Q^-1 X Q
Because D is diagonal, raising it to any power i is very cheap: just raise each diagonal entry to the i-th power. Applying (1) we determine that
(2) D^i = (Q^-1 X Q)^i
and furthermore, we can show that (2) is equivalent to
(3) D^i = Q^-1 X^i Q
Finally, we can find any arbitrary X^i efficiently by rearranging our equation and computing
(4) X^i = Q D^i Q^-1
(I wanted to verify my memory here, so I found a reference on Wikipedia).
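Putting (1)-(4) together with the original summation (a small worked consequence, still assuming X is diagonalizable, with lambda_1, ..., lambda_N the eigenvalues on the diagonal of D):
\[
X^i = Q D^i Q^{-1}
\;\Longrightarrow\;
\sum_{i=0}^{k} X^i Y = Q \Big(\sum_{i=0}^{k} D^i\Big) Q^{-1} Y,
\qquad
D^i = \mathrm{diag}(\lambda_1^i, \dots, \lambda_N^i),
\]
\[
\sum_{i=0}^{k} \lambda^i = \frac{\lambda^{k+1} - 1}{\lambda - 1} \quad (\lambda \ne 1),
\]
so each diagonal entry of the inner sum even has a closed form.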

Resources