Pushdown automata - theory

Designing a pushdown automaton for the language a^n b c^(n+2), n > 0.
I have been asked to construct an automaton for the above language. Please help.
I tried popping two c's for every a I pushed onto the stack, but it does not seem to work when the number of a's is odd...

Process the a's in the usual way: for each a read from the tape, push an A onto the stack. When you read the b, leave the stack as it is. Finally, process the c's: pop one A for each c until the stack marker Z is on top, then read the two extra c's without changing the stack. The transition function is:
(q0, a, Z) = (q0, AZ)
(q0, a, A) = (q0, AA)
(q0, b, A) = (q1, A)
(q1, c, A) = (q1, epsilon) (pop one A per c, until the number of c's read equals the number of a's)
(q1, c, Z) = (q2, Z) (read the first extra c)
(q2, c, Z) = (q3, Z) (read the second extra c)
(q3, epsilon, Z) = (qf, Z) (qf is the final state)
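
If you want to check the construction on concrete strings, here is a minimal C sketch (not part of the original answer; accepts is a hypothetical helper name). Since Z always stays at the bottom, a single counter for the number of A's on the stack is enough to simulate the PDA:

#include <stdio.h>

int accepts(const char *w) {
    int state = 0;     /* 0 = q0, 1 = q1, 2 = q2, 3 = q3 */
    int stack = 0;     /* number of A's above the bottom marker Z */

    for (; *w; w++) {
        char ch = *w;
        if (state == 0 && ch == 'a') {
            stack++;                    /* (q0, a, Z) and (q0, a, A): push A */
        } else if (state == 0 && ch == 'b' && stack > 0) {
            state = 1;                  /* (q0, b, A): move to q1, stack unchanged */
        } else if (state == 1 && ch == 'c' && stack > 0) {
            stack--;                    /* (q1, c, A): pop one A per c */
        } else if (state == 1 && ch == 'c' && stack == 0) {
            state = 2;                  /* (q1, c, Z): first extra c */
        } else if (state == 2 && ch == 'c' && stack == 0) {
            state = 3;                  /* (q2, c, Z): second extra c */
        } else {
            return 0;                   /* no transition defined: reject */
        }
    }
    return state == 3 && stack == 0;    /* (q3, epsilon, Z) -> qf */
}

int main(void) {
    printf("%d\n", accepts("abccc"));    /* n = 1: accepted */
    printf("%d\n", accepts("aabcccc"));  /* n = 2: accepted */
    printf("%d\n", accepts("aabccc"));   /* rejected */
    return 0;
}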


Time complexity of GCD code

I am trying to understand the time complexity for the below code:
int gcd(int n, int m) {
    if (n % m == 0) return m;
    if (n < m) swap(n, m);    /* swap exchanges n and m (e.g., std::swap in C++) */
    while (m > 0) {
        n = n % m;
        swap(n, m);
    }
    return n;
}
I read that the complexity of the above code is Θ(log n). Can someone please explain the logic behind it?
Consider any two consecutive steps of the algorithm.
At some point, you have the numbers (a, b) with a > b. After the first step these turn to (b, c) with c = a mod b, and after the second step the two numbers will be (c, d) with d = b mod c.
Now think backwards. As d = b mod c, we know that b = kc + d for some k > 0. The smallest possibility is k = 1, therefore b ≥ 1c + d = c + d.
From that result and from a > b we get a > c + d. If we add the last two inequalities we just derived, we get that (a + b) > 2(c + d).
In words, after each two consecutive steps the sum of the two numbers decreases to less than half of its original size.
Now look at the very first step of the algorithm. At the beginning, we have some (a, b) with a > b. After the very first step we have (b, c) with c = a mod b, and clearly b > c. Hence, after the very first step both numbers are at most equal to the smaller of the two input numbers.
Putting those two results together, we may conclude that the number of iterations (recursive calls in other implementations) is at most logarithmic in the smaller input number. In other words, the number of iterations is at most linear in the number of digits in the smaller input number.
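
If you want to see the bound empirically, here is a small C snippet (my own illustration, not part of the analysis above; count_gcd_steps is a hypothetical helper) that counts the loop iterations. Consecutive Fibonacci numbers are the classical worst case, and even then the count stays logarithmic in the smaller input:

#include <stdio.h>

/* Count the iterations of the Euclidean algorithm for gcd(n, m). */
int count_gcd_steps(long n, long m) {
    int steps = 0;
    if (n < m) { long t = n; n = m; m = t; }   /* ensure n >= m */
    while (m > 0) {
        long r = n % m;
        n = m;
        m = r;
        steps++;
    }
    return steps;
}

int main(void) {
    printf("%d\n", count_gcd_steps(1000000, 999999));   /* finishes in a couple of steps */
    printf("%d\n", count_gcd_steps(832040, 514229));    /* consecutive Fibonacci numbers */
    return 0;
}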

Turning a recursive procedure into an iterative procedure - SICP exercise 1.16

In the book Structure and Interpretation of Computer Programs, there is a recursive procedure for computing powers using successive squaring.
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))
Now in exercise 1.16:
Exercise 1.16: Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps,
as does `fast-expt`. (Hint: Using the observation that
(b^(n/2))^2 = (b^2)^(n/2)
, keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product ab^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
I spent a week and I absolutely can't figure out how to write this iterative procedure, so I gave up and looked for solutions. All the solutions I found look like this:
(define (fast-expt a b n)
  (cond ((= n 0) a)
        ((even? n) (fast-expt a (square b) (/ n 2)))
        (else (fast-expt (* a b) b (- n 1)))))
Now, I can understand
(fast-expt a (square b) (/ n 2))
using the hint from the book, but my brain exploded when n is odd. In the recursive procedure, I got why
(* b (fast-expt b (- n 1)))
works. But in the iterative procedure it becomes totally different:
(fast-expt (* a b) b (- n 1))
It works perfectly, but I absolutely don't understand how I could arrive at this solution by myself. It seems extremely clever.
Can someone explain why the iterative solution looks like this? And what's the general way to think about solving these types of problems?
2021 update: Last year I completely forgot about this exercise and the solutions I'd seen. I tried it again and finally solved it on my own, using the invariant provided in the exercise as the basis for transforming the state variables. I used the now-accepted answer to verify my solution. Thanks @Óscar López.
Here's a slightly different implementation to make things clearer; notice that I'm using a helper procedure called loop to preserve the original procedure's arity:
(define (fast-expt b n)
  (define (loop b n acc)
    (cond ((zero? n) acc)
          ((even? n) (loop (* b b) (/ n 2) acc))
          (else (loop b (- n 1) (* b acc)))))
  (loop b n 1))
What's acc here? It's a parameter used as an accumulator for the result (in the book this parameter is named a; IMHO acc is a more descriptive name). At the beginning we set acc to an appropriate value, and afterwards in each iteration we update the accumulator, preserving the invariant.
In general, this is the "trick" for understanding an iterative, tail-recursive implementation of an algorithm: we pass along an extra parameter with the result we've calculated so far, and return it in the end when we reach the base case of the recursion. By the way, the usual way to write an iterative procedure such as the one shown above is to use a named let; this is completely equivalent and a bit simpler to write:
(define (fast-expt b n)
  (let loop ((b b) (n n) (acc 1))
    (cond ((zero? n) acc)
          ((even? n) (loop (* b b) (/ n 2) acc))
          (else (loop b (- n 1) (* b acc))))))
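
To see why the odd branch preserves the invariant, note that acc * b^n = (acc * b) * b^(n-1), so folding one factor of b into the accumulator leaves the product unchanged. Here is a trace (my own illustration, not from the book) of (fast-expt 2 10) using the loop above; acc * b^n equals 2^10 = 1024 at every step:

b = 2    n = 10   acc = 1       (1 * 2^10 = 1024)
b = 4    n = 5    acc = 1       (1 * 4^5 = 1024)
b = 4    n = 4    acc = 4       (4 * 4^4 = 1024)
b = 16   n = 2    acc = 4       (4 * 16^2 = 1024)
b = 256  n = 1    acc = 4       (4 * 256 = 1024)
b = 256  n = 0    acc = 1024    (1024 * 256^0 = 1024)

When n reaches 0, acc holds the answer.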

obtaining two large numbers and reducing their difference by taking their mod values iteratively in C

Let us say I am supposed to obtain D = (A - B) mod M, where A and B are very large; large as in, long long won't help. A and B are built up iteratively and independently, and at each iteration I keep A mod M and B mod M. Now, B is always smaller than A, but (B mod M) can be larger than (A mod M), so when D is evaluated a negative number can come out, which isn't right, because B is smaller than A. How do I go about this? Thanks in advance.
If ((A mod M) - (B mod M)) mod M gives you a negative result (as it may, since older C leaves the result of modulus implementation-defined when either argument is negative, and C99 defines it so that the result is negative if the dividend is negative), simply add M to get the result you want. After all, x and x+M are equivalent, mod M.
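
For example, a minimal C sketch of that fix (mod_diff and the sample values are just for illustration; it assumes C99 semantics for %):

#include <stdio.h>

/* Compute (A - B) mod M from the running residues a = A mod M and b = B mod M,
   always returning a value in the range [0, M). */
long mod_diff(long a, long b, long m) {
    long d = (a - b) % m;
    if (d < 0)          /* in C99, % takes the sign of the dividend */
        d += m;
    return d;
}

int main(void) {
    /* A mod M = 3, B mod M = 8, M = 10: (3 - 8) % 10 is -5, so add 10 to get 5 */
    printf("%ld\n", mod_diff(3, 8, 10));
    return 0;
}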

C Program to detect right angled triangles

Suppose I am given 100 points in the coordinate plane, and I have to determine whether a right-angled triangle exists with vertices among those points.
Is there a way to detect such a triangle without checking every triple of vertices and applying the Pythagorean theorem to each?
Can there be a better algorithm for this?
Thanks for any help. :)
Here's an O(n^2 log n)-time algorithm for two dimensions only. I'll describe what goes wrong in higher dimensions.
Let S be the set of points, which have integer coordinates. For each point o in S, construct the set of nonzero vectors V(o) = {p - o | p in S - {o}} and test whether V(o) contains two orthogonal vectors in linear time as follows.
Method 1: canonize each vector (x, y) to (x/gcd(x, y), y/gcd(x, y)), where |gcd(x, y)| is the largest integer that divides both x and y, and where gcd(x, y) is negative if y is negative, positive if y is positive, and |x| if y is zero. (This is very similar to putting a fraction in lowest terms.) The key fact about two dimensions is that, for each nonzero vector, there exists exactly one canonical vector orthogonal to that vector, specifically, the canonization of (-y, x). Insert the canonization of each vector in V(o) into a set data structure and then, for each vector in V(o), look up its canonical orthogonal mate in that data structure. I'm assuming that the gcd and/or set operations take time O(log n).
Method 2: define a comparator on vectors as follows. Given vectors (a, b), (c, d), write (a, b) < (c, d) if and only if
s1 * s2 * (a*d - b*c) < 0,
where
s1 = -1 if b < 0 or (b == 0 and a < 0), and 1 otherwise;
s2 = -1 if d < 0 or (d == 0 and c < 0), and 1 otherwise.
Sort the vectors using this comparator. (This is very similar to comparing the fraction a/b with c/d.) For each vector (x, y) in V(o), binary search for its orthogonal mate (-y, x).
In three dimensions, the set of vectors orthogonal to the unit vector along the z-axis is the entire x-y-plane, and the equivalent of canonization fails to map all vectors in this plane to one orthogonal mate.
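
For reference, here is a rough C sketch of Method 1 (my own, not the answerer's code; has_right_triangle, canon_gcd and canonize are hypothetical names). It assumes the points are distinct and have integer coordinates, and it uses a sorted array plus bsearch in place of the set data structure:

#include <stdlib.h>

typedef struct { long x, y; } Vec;

/* gcd with the sign convention from the answer: negative if y < 0, positive if
   y > 0, and |x| if y == 0, so dividing by it yields a canonical direction. */
static long canon_gcd(long x, long y) {
    long a = labs(x), b = labs(y);
    while (b) { long t = a % b; a = b; b = t; }
    if (y < 0) return -a;
    if (y > 0) return a;
    return labs(x);
}

static Vec canonize(Vec v) {
    long g = canon_gcd(v.x, v.y);
    Vec c = { v.x / g, v.y / g };
    return c;
}

static int cmp_vec(const void *p, const void *q) {
    const Vec *a = p, *b = q;
    if (a->x != b->x) return (a->x < b->x) ? -1 : 1;
    if (a->y != b->y) return (a->y < b->y) ? -1 : 1;
    return 0;
}

/* Returns 1 if some three of the n distinct points form a right angle. */
int has_right_triangle(const Vec *pts, int n) {
    Vec *canon = malloc((size_t)n * sizeof *canon);
    for (int o = 0; o < n; o++) {          /* o is the candidate right-angle corner */
        int m = 0;
        for (int p = 0; p < n; p++) {
            if (p == o) continue;
            Vec d = { pts[p].x - pts[o].x, pts[p].y - pts[o].y };
            canon[m++] = canonize(d);      /* build the canonized V(o) */
        }
        qsort(canon, (size_t)m, sizeof *canon, cmp_vec);
        for (int i = 0; i < m; i++) {
            /* the canonical orthogonal mate of (x, y) is the canonization of (-y, x) */
            Vec mate = canonize((Vec){ -canon[i].y, canon[i].x });
            if (bsearch(&mate, canon, (size_t)m, sizeof *canon, cmp_vec)) {
                free(canon);
                return 1;
            }
        }
    }
    free(canon);
    return 0;
}

This does O(n) work with sorting and searching for each of the n corner candidates, matching the O(n^2 log n) bound stated in the answer.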

Integrate a function between a and b with n intervals

I'm totally stuck with this; I don't know where to start.
I have to integrate a function between a and b with n intervals in C.
I only have the function prototype:
float funcintegrate(float (*f)(float x), float a, float b, int n);
I need to use the trapezoidal method.
EDIT:
Thanks all for your tips. I now have the answer!
Numerically integrate the function on the interval [a, b] using the trapezoidal method (rule):
float funcintegrate(float (*f)(float x), float a, float b, int n)
{
    int i;
    double x;
    double k = (b - a) / n;              /* width of each subinterval */
    double s = 0.5 * (f(a) + f(b));      /* the two endpoints count with weight 1/2 */

    for (i = 1; i < n; i++) {
        x = a + k * i;
        s = s + f(x);
    }
    return s * k;
}
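
For example, a possible usage (my own illustration; it assumes the funcintegrate() definition above is compiled in the same program, and f_sin is a hypothetical test function):

#include <stdio.h>
#include <math.h>

float funcintegrate(float (*f)(float x), float a, float b, int n);

/* integrate sin(x) over [0, pi] with 1000 subintervals; the exact value is 2 */
static float f_sin(float x) {
    return sinf(x);
}

int main(void) {
    float result = funcintegrate(f_sin, 0.0f, 3.14159265f, 1000);
    printf("integral of sin from 0 to pi ~= %f\n", result);
    return 0;
}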
Your funcintegrate() function should basically divide the [a, b] interval into n subintervals, calculate the value of f() on all subinterval endpoints, then use them to compute a value of certain expression for each of the subintervals and finally add up all the values of that expression.
The subexpression to calculate in each iteration depends on the particular numerical integration method you have chosen and impacts performance and accuracy trade-off.
In the simplest case the expression is the product of the value of f() at one of the endpoints multiplied by the length of the subinterval. This corresponds to "bar-chart approximation" to the field under the curve. It is very inaccurate and once you successfully implement it you should try more complex methods.
These two Wikipedia articles give a good description of a number of related methods that share the general structure I described above: Euler method, Runge-Kutta methods.
I also recommend reading relevant chapters in "Numerical Recipes in C".
EDIT: The trapezoidal method uses, for each subinterval, an expression representing the area of a trapezoid whose base is the subinterval and which extends upwards (or downwards, depending on the sign of f()) until its vertical sides cross the curve. The two intersection points are then connected with a straight line (which is where the approximation is made, since f() may not be straight between those points).
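To connect this with the code above: for a single subinterval [x_i, x_(i+1)] of width k, the trapezoid's area is k * (f(x_i) + f(x_(i+1))) / 2. Summing that over all n subintervals gives k * (f(a)/2 + f(a+k) + f(a+2k) + ... + f(b-k) + f(b)/2), which is exactly the sum the loop accumulates in s before the final multiplication by k.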
