Deleting linked list nodes without recursion [duplicate] - c

I've used recursion quite a lot in my many years of programming to solve simple problems, but I'm fully aware that sometimes you need iteration due to memory/speed problems.
So, sometime in the very far past I went to try and find if there existed any "pattern" or text-book way of transforming a common recursion approach to iteration, and I found nothing. Or at least nothing that I can remember would help.
Are there general rules?
Is there a "pattern"?

Usually, I replace a recursive algorithm by an iterative algorithm by pushing the parameters that would normally be passed to the recursive function onto a stack. In fact, you are replacing the program stack by one of your own.
var stack = [];
stack.push(firstObject);

// While the stack is not empty
while (stack.length) {
    // Pop an object off the end of the stack.
    obj = stack.pop();

    // Do stuff.
    // Push other objects on the stack as needed.
    ...
}
Note: if you have more than one recursive call inside and you want to preserve the order of the calls, you have to add them in the reverse order to the stack:
foo(first);
foo(second);
has to be replaced by
stack.push(second);
stack.push(first);
Edit: The article Stacks and Recursion Elimination (or Article Backup link) goes into more details on this subject.

Really, the most common way to do it is to keep your own stack. Here's a recursive quicksort function in C:
void quicksort(int* array, int left, int right)
{
    if (left >= right)
        return;

    int index = partition(array, left, right);
    quicksort(array, left, index - 1);
    quicksort(array, index + 1, right);
}
Here's how we could make it iterative by keeping our own stack:
void quicksort(int *array, int left, int right)
{
    int stack[1024];
    int i = 0;
    stack[i++] = left;
    stack[i++] = right;
    while (i > 0)
    {
        right = stack[--i];
        left = stack[--i];
        if (left >= right)
            continue;
        int index = partition(array, left, right);
        stack[i++] = left;
        stack[i++] = index - 1;
        stack[i++] = index + 1;
        stack[i++] = right;
    }
}
Obviously, this example doesn't check stack boundaries... and really you could size the stack based on the worst case given the left and right values. But you get the idea.
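For instance, here is a hedged sketch of that sizing idea (quicksort_checked is my name; a Lomuto partition is filled in so the snippet stands alone, though any partition scheme works). Each partition call grows the pending-work stack by at most one (left, right) pair, and there are at most n - 1 partition calls, so 2 * n ints suffice:

#include <stdlib.h>

static int partition(int *array, int left, int right)
{
    /* Lomuto partition with the last element as pivot (one common choice). */
    int pivot = array[right], i = left;
    for (int j = left; j < right; j++)
        if (array[j] < pivot) {
            int t = array[i]; array[i] = array[j]; array[j] = t; i++;
        }
    int t = array[i]; array[i] = array[right]; array[right] = t;
    return i;
}

void quicksort_checked(int *array, int left, int right)
{
    int n = right - left + 1;
    if (n <= 1)
        return;
    /* Worst case: one new (left, right) pair per partition call. */
    int *stack = malloc(2 * (size_t)n * sizeof *stack);
    if (!stack)
        return;   /* allocation failed; a real caller would handle this */
    int i = 0;
    stack[i++] = left;
    stack[i++] = right;
    while (i > 0) {
        right = stack[--i];
        left = stack[--i];
        if (left >= right)
            continue;
        int index = partition(array, left, right);
        stack[i++] = left;
        stack[i++] = index - 1;
        stack[i++] = index + 1;
        stack[i++] = right;
    }
    free(stack);
}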

It seems nobody has addressed the case where the recursive function calls itself more than once in the body and handles returning to a specific point in the recursion (i.e. it is not primitive-recursive). It is said that every recursion can be turned into iteration, so it appears that this should be possible.
I just came up with a C# example of how to do this. Suppose you have the following recursive function, which acts like a postorder traversal, and that AbcTreeNode is a 3-ary tree with pointers a, b, c.
public static void AbcRecursiveTraversal(this AbcTreeNode x, List<int> list) {
    if (x != null) {
        AbcRecursiveTraversal(x.a, list);
        AbcRecursiveTraversal(x.b, list);
        AbcRecursiveTraversal(x.c, list);
        list.Add(x.key); // finally visit root
    }
}
The iterative solution:
// Declarations implied by the snippet (the stack, the output list, and the
// A/B/C return addresses):
var stack = new Stack<object>();
var list_iterative = new List<int>();
const int A = 0, B = 1, C = 2;

int? address = null;
AbcTreeNode x = null;
x = root;
address = A;
stack.Push(x);
stack.Push(null);
while (stack.Count > 0) {
    bool @return = x == null;
    if (@return == false) {
        switch (address) {
            case A:
                stack.Push(x);
                stack.Push(B);
                x = x.a;
                address = A;
                break;
            case B:
                stack.Push(x);
                stack.Push(C);
                x = x.b;
                address = A;
                break;
            case C:
                stack.Push(x);
                stack.Push(null);
                x = x.c;
                address = A;
                break;
            case null:
                list_iterative.Add(x.key);
                @return = true;
                break;
        }
    }
    if (@return == true) {
        address = (int?)stack.Pop();
        x = (AbcTreeNode)stack.Pop();
    }
}

Strive to make your recursive call Tail Recursion (recursion where the last statement is the recursive call). Once you have that, converting it to iteration is generally pretty easy.
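For example, here's a minimal sketch in C of that conversion (the linked-list length function and its struct are hypothetical, for illustration only):

#include <stddef.h>

struct node { struct node *next; };

/* Tail-recursive form: the recursive call is the last thing that happens. */
unsigned length_rec(const struct node *p, unsigned acc) {
    if (p == NULL)
        return acc;
    return length_rec(p->next, acc + 1);   /* tail call */
}

/* The same function with the tail call turned into a loop:
   update the parameters, then jump back to the top instead of calling. */
unsigned length_iter(const struct node *p) {
    unsigned acc = 0;
    while (p != NULL) {
        acc += 1;
        p = p->next;
    }
    return acc;
}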

Well, in general, recursion can be mimicked as iteration by simply using a storage variable. Note that recursion and iteration are generally equivalent; one can almost always be converted to the other. A tail-recursive function is very easily converted to an iterative one. Just make the accumulator variable a local one, and iterate instead of recurse. Here's an example in C++ (C were it not for the use of a default argument):
// tail-recursive
int factorial (int n, int acc = 1)
{
if (n == 1)
return acc;
else
return factorial(n - 1, acc * n);
}
// iterative
int factorial (int n)
{
int acc = 1;
for (; n > 1; --n)
acc *= n;
return acc;
}
Knowing me, I probably made a mistake in the code, but the idea is there.

Even using a stack will not convert a recursive algorithm into an iterative one. Normal recursion is function-based recursion, and if we use a stack it becomes stack-based recursion, but it's still recursion.
For recursive algorithms, space complexity is O(N) and time complexity is O(N).
For iterative algorithms, space complexity is O(1) and time complexity is O(N).
If we use a stack, the complexity remains the same. I think only tail recursion can be converted into iteration.

The stacks and recursion elimination article captures the idea of externalizing the stack frame on the heap, but does not provide a straightforward and repeatable way to convert. Below is one.
While converting to iterative code, one must be aware that the recursive call may happen from an arbitrarily deep code block. It's not just the parameters that matter, but also the point to return to in the logic that remains to be executed, and the state of the variables that participate in subsequent conditionals. Below is a very simple way to convert to iterative code with the fewest changes.
Consider this recursive code:
struct tnode
{
    tnode(int n) : data(n), left(0), right(0) {}
    tnode *left, *right;
    int data;
};

void insertnode_recur(tnode *node, int num)
{
    if (node->data <= num)
    {
        if (node->right == NULL)
            node->right = new tnode(num);
        else
            insertnode_recur(node->right, num);
    }
    else
    {
        if (node->left == NULL)
            node->left = new tnode(num);
        else
            insertnode_recur(node->left, num);
    }
}
Iterative code:
// Identify the stack variables that need to be preserved across stack
// invocations, that is, across iterations, and wrap them in an object
struct stackitem
{
    stackitem(tnode *t, int n) : node(t), num(n), ra(0) {}
    tnode *node; int num;
    int ra; // the point of return
};

void insertnode_iter(tnode *node, int num)
{
    vector<stackitem> v;
    // pushing a stackitem is equivalent to making a recursive call
    v.push_back(stackitem(node, num));

    while (v.size())
    {
        // taking a modifiable reference to the stack item makes prepending
        // 'si.' to the auto variables in the recursive logic suffice,
        // e.g., instead of num, use si.num
        stackitem &si = v.back();
        switch (si.ra)
        {
        // this jump simulates resuming execution after return from a
        // recursive call
        case 1: goto ra1;
        case 2: goto ra2;
        default: break;
        }

        if (si.node->data <= si.num)
        {
            if (si.node->right == NULL)
                si.node->right = new tnode(si.num);
            else
            {
                // replace a recursive call with the statements below:
                // (a) save the return point,
                // (b) push a new stackitem,
                // (c) a continue statement to make the loop pick up and
                //     start processing the new stack item,
                // (d) a return-point label,
                // (e) an optional semicolon, if the resume point is the
                //     end of a block
                si.ra = 1;
                v.push_back(stackitem(si.node->right, si.num));
                continue;
            ra1: ;
            }
        }
        else
        {
            if (si.node->left == NULL)
                si.node->left = new tnode(si.num);
            else
            {
                si.ra = 2;
                v.push_back(stackitem(si.node->left, si.num));
                continue;
            ra2: ;
            }
        }

        v.pop_back();
    }
}
Notice how the structure of the code remains true to the recursive logic and the modifications are minimal, resulting in fewer bugs. For comparison, I have marked the changes with ++ and --. Most of the newly inserted blocks, except the v.push_back calls, are common to any converted iterative logic.
void insertnode_iter(tnode *node, int num)
{
+++++++++++++++++++++++++
    vector<stackitem> v;
    v.push_back(stackitem(node, num));

    while (v.size())
    {
        stackitem &si = v.back();
        switch (si.ra)
        {
        case 1: goto ra1;
        case 2: goto ra2;
        default: break;
        }
------------------------

        if (si.node->data <= si.num)
        {
            if (si.node->right == NULL)
                si.node->right = new tnode(si.num);
            else
            {
+++++++++++++++++++++++++
                si.ra = 1;
                v.push_back(stackitem(si.node->right, si.num));
                continue;
            ra1: ;
-------------------------
            }
        }
        else
        {
            if (si.node->left == NULL)
                si.node->left = new tnode(si.num);
            else
            {
+++++++++++++++++++++++++
                si.ra = 2;
                v.push_back(stackitem(si.node->left, si.num));
                continue;
            ra2: ;
-------------------------
            }
        }

+++++++++++++++++++++++++
        v.pop_back();
    }
-------------------------
}

Search Google for "continuation passing style". There is a general procedure for converting to a tail recursive style; there is also a general procedure for turning tail recursive functions into loops.
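For instance, a minimal sketch in C of that two-step pipeline (the names are mine; the pending continuation here degenerates into a plain accumulator):

#include <stdio.h>

/* Direct style: the "+ n" work happens after the recursive call returns. */
int sum_to(int n) {
    if (n == 0) return 0;
    return n + sum_to(n - 1);
}

/* Step 1: pass the pending work forward as an accumulator, so the
   recursive call becomes a tail call. */
int sum_to_acc(int n, int acc) {
    if (n == 0) return acc;
    return sum_to_acc(n - 1, acc + n);
}

/* Step 2: a tail call is just "update the parameters and jump", so it
   converts mechanically into a loop. */
int sum_to_iter(int n) {
    int acc = 0;
    while (n != 0) {
        acc += n;
        n -= 1;
    }
    return acc;
}

int main(void) {
    printf("%d %d %d\n", sum_to(10), sum_to_acc(10, 0), sum_to_iter(10));
    return 0;
}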

Just killing time... A recursive function
void foo(Node* node)
{
    if (node == NULL)
        return;
    // Do something with node...
    foo(node->left);
    foo(node->right);
}

can be converted to

void foo(Node* node)
{
    if (node == NULL)
        return;
    // Do something with node...
    stack.push(node->right);
    stack.push(node->left);

    while (!stack.empty()) {
        node1 = stack.pop();
        if (node1 == NULL)
            continue;
        // Do something with node1...
        stack.push(node1->right);
        stack.push(node1->left);
    }
}

Thinking of things that actually need a stack:
If we consider the pattern of recursion as:
if (task can be done directly) {
    return result of doing task directly
} else {
    split task into two or more parts
    solve for each part (possibly by recursing)
    return result constructed by combining these solutions
}
For example, the classic Tower of Hanoi:

if (the number of discs to move is 1) {
    just move it
} else {
    move n-1 discs to the spare peg
    move the remaining disc to the target peg
    move n-1 discs from the spare peg to the target peg, using the current peg as a spare
}
This can be translated into a loop working on an explicit stack, by restating it as:
place seed task on stack
while stack is not empty {
    take a task off the stack
    if (task can be done directly) {
        do it
    } else {
        split task into two or more parts
        place task to consolidate results on stack
        place each task on stack
    }
}
For Tower of Hanoi this becomes:
stack.push(new Task(size, from, to, spare));
while (!stack.isEmpty()) {
    task = stack.pop();
    if (task.size() == 1) {
        just move it
    } else {
        stack.push(new Task(task.size() - 1, task.spare(), task.to(), task.from()));
        stack.push(new Task(1, task.from(), task.to(), task.spare()));
        stack.push(new Task(task.size() - 1, task.from(), task.spare(), task.to()));
    }
}
There is considerable flexibility here as to how you define your stack. You can make your stack a list of Command objects that do sophisticated things. Or you can go the opposite direction and make it a list of simpler types (e.g. a "task" might be 4 elements on a stack of int, rather than one element on a stack of Task).
All this means is that the memory for the stack is in the heap rather than in the Java execution stack, but this can be useful in that you have more control over it.
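To make the pseudocode concrete, here is a hedged C sketch of exactly this explicit-stack Tower of Hanoi (the Task struct and the fixed-size stack are my choices, not part of the answer above):

#include <stdio.h>

typedef struct {
    int size;
    char from, to, spare;
} Task;

void hanoi(int n) {
    Task stack[64];   /* depth grows by at most 2 per level; ample for small n */
    int top = 0;
    stack[top++] = (Task){n, 'A', 'C', 'B'};   /* seed task */
    while (top > 0) {
        Task t = stack[--top];
        if (t.size == 1) {
            printf("move disc from %c to %c\n", t.from, t.to);   /* do it directly */
        } else {
            /* Split; push in reverse so tasks pop in the original recursive order. */
            stack[top++] = (Task){t.size - 1, t.spare, t.to, t.from};
            stack[top++] = (Task){1, t.from, t.to, t.spare};
            stack[top++] = (Task){t.size - 1, t.from, t.spare, t.to};
        }
    }
}

int main(void) {
    hanoi(3);   /* prints the 7 moves for three discs */
    return 0;
}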

Generally, the technique for avoiding stack overflow in recursive functions is called the trampoline technique, which is widely adopted by Java devs.
However, for C# there is a little helper method here that turns your recursive function into an iterative one without requiring you to change the logic or make the code incomprehensible. C# is such a nice language that amazing stuff is possible with it.
It works by wrapping parts of the method in a helper method. For example, the following recursive function:
int Sum(int index, int[] array)
{
    // This is the termination condition
    if (index >= array.Length)
        // This is the value returned when the termination condition is true
        return 0;

    // This is the recursive call
    var sumofrest = Sum(index + 1, array);

    // This is the work to do with the current item and the
    // result of the recursive call
    return array[index] + sumofrest;
}
Turns into:
int Sum(int[] ar)
{
    return RecursionHelper<int>.CreateSingular(i => i >= ar.Length, i => 0)
        .RecursiveCall((i, rv) => i + 1)
        .Do((i, rv) => ar[i] + rv)
        .Execute(0);
}

One pattern to look for is a recursive call at the end of the function (so-called tail recursion). This can easily be replaced with a while loop. For example, the function foo:

void foo(Node* node)
{
    if (node == NULL)
        return;
    // Do something with node...
    foo(node->left);
    foo(node->right);
}

ends with a call to foo. This can be replaced with:

void foo(Node* node)
{
    while (node != NULL)
    {
        // Do something with node...
        foo(node->left);
        node = node->right;
    }
}

which eliminates the second recursive call.

A question that had been closed as a duplicate of this one had a very specific data structure:
The node had the following structure:
typedef struct cNODE {
    int32_t type;
    int32_t valueint;
    double valuedouble;
    struct cNODE *next;
    struct cNODE *prev;
    struct cNODE *child;
} cNODE;
The recursive deletion function looked like:
void cNODE_Delete(cNODE *c) {
    cNODE *next;
    while (c) {
        next = c->next;
        if (c->child) {
            cNODE_Delete(c->child);
        }
        free(c);
        c = next;
    }
}
In general, it is not always possible to avoid a stack for recursive functions that invoke themselves more than once (or even once). However, for this particular structure, it is possible. The idea is to flatten all the nodes into a single list. This is accomplished by putting the current node's child at the end of the top row's list.
void cNODE_Delete (cNODE *c) {
    cNODE *tmp, *last = c;
    while (c) {
        while (last->next) {
            last = last->next;      /* find last */
        }
        if ((tmp = c->child)) {
            c->child = NULL;        /* append child to last */
            last->next = tmp;
            tmp->prev = last;
        }
        tmp = c->next;              /* remove current */
        free(c);
        c = tmp;
    }
}
This technique can be applied to any linked data structure that can be reduced to a DAG with a deterministic topological ordering. The current node's children are rearranged so that the last child adopts all of the other children. Then the current node can be deleted, and traversal can iterate to the remaining child.

Recursion is nothing but the process of one function calling another, except that here the function calls itself. As we know, when one function calls another, the first function saves its state (its variables) and then passes control to the called function. The called function can be called with the same variable names, e.g. fun1(a) can call fun2(a).
When we make a recursive call, nothing new happens. One function calls itself, passing the same type of (and similarly named) variables to itself (but obviously the values stored in the variables are different; only the names remain the same). Before every call the function saves its state, and this process of saving continues. The saving is done on a stack.
Now the stack comes into play.
So if you write an iterative program and save the state on a stack each time, then pop the values from the stack when needed, you have successfully converted a recursive program into an iterative one!
The proof is simple and analytical.
In recursion the computer maintains a stack, and in the iterative version you will have to maintain the stack manually.
Think it over: just convert a depth-first search (on graphs) recursive program into an iterative DFS program, as sketched below.
All the best!
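For a concrete version of that exercise, here is a minimal C sketch (a tiny adjacency-matrix graph; all the names here are mine):

#include <stdbool.h>
#include <stdio.h>

#define N 5

void dfs_iterative(bool adj[N][N], int start) {
    bool visited[N] = {false};
    int stack[N * N];   /* a node can be pushed once per neighbour, so N*N is safe */
    int top = 0;
    stack[top++] = start;
    while (top > 0) {
        int v = stack[--top];
        if (visited[v])
            continue;
        visited[v] = true;
        printf("visit %d\n", v);
        /* Push neighbours; this replaces the recursive call on each one. */
        for (int w = N - 1; w >= 0; --w)
            if (adj[v][w] && !visited[w])
                stack[top++] = w;
    }
}

int main(void) {
    bool adj[N][N] = {{false}};
    adj[0][1] = adj[0][2] = true;   /* 0 -> 1, 0 -> 2 */
    adj[1][3] = adj[2][4] = true;   /* 1 -> 3, 2 -> 4 */
    dfs_iterative(adj, 0);          /* visits 0 1 3 2 4 */
    return 0;
}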

TLDR
You can compare the source code below, before and after, to intuitively understand the approach without reading this whole answer.
I ran into issues with some multi-key quicksort code I was using to process very large blocks of text to produce suffix arrays. The code would abort due to the extreme depth of recursion required. With this approach, the termination issues were resolved. After conversion, the maximum number of frames required for some jobs could be captured; it was between 10K and 100K, taking from 1M to 6M of memory. Not an optimal solution; there are more effective ways to produce suffix arrays. But anyway, here's the approach used.
The approach
A general way to convert a recursive function to an iterative solution that will apply to any case is to mimic the process natively compiled code uses during a function call and the return from the call.
Taking an example that requires a somewhat involved approach, we have the multi-key quicksort algorithm. This function has three successive recursive calls, and after each call, execution begins at the next line.
The state of the function is captured in the stack frame, which is pushed onto the execution stack. When sort() is called from within itself and returns, the stack frame present at the time of the call is restored. In that way all the variables have the same values as they did before the call - unless they were modified by the call.
Recursive function
def sort(a: list_view, d: int):
    if len(a) <= 1:
        return
    p = pivot(a, d)
    i, j = partition(a, d, p)
    sort(a[0:i], d)
    sort(a[i:j], d + 1)
    sort(a[j:len(a)], d)
Taking this model, and mimicking it, a list is set up to act as the stack. In this example tuples are used to mimic frames. If this were encoded in C, structs could be used. The data can be contained within a data structure instead of just pushing one value at a time.
Reimplemented as "iterative"
# Assume `a` is a view-like object where slices reference
# the same internal list of strings.
LEFT, MID, RIGHT = range(3)  # stage labels implied by the text below

def sort(a: list_view):
    stack = []
    stack.append((LEFT, a, 0))                   # Initial frame.
    while len(stack) > 0:
        frame = stack.pop()
        if len(frame[1]) <= 1:                   # Guard.
            continue
        stage = frame[0]                         # Where to jump to.
        if stage == LEFT:
            _, a, d = frame                      # a - array/list, d - depth.
            p = pivot(a, d)
            i, j = partition(a, d, p)
            stack.append((MID, a, i, j, d))      # Where to go after "return".
            stack.append((LEFT, a[0:i], d))      # Simulate function call.
        elif stage == MID:                       # Picking up here after "call".
            _, a, i, j, d = frame                # State before "call" restored.
            stack.append((RIGHT, a, i, j, d))    # Set up for next "return".
            stack.append((LEFT, a[i:j], d + 1))  # Split list and "recurse".
        elif stage == RIGHT:
            _, a, _, j, d = frame
            stack.append((LEFT, a[j:len(a)], d))
        else:
            pass
When a function call is made, information on where to begin execution after the function returns is included in the stack frame. In this example, if/elif/else blocks represent the points where execution begins after return from a call. In C this could be implemented as a switch statement.
In the example, the blocks are given labels; they're arbitrarily labeled by how the list is partitioned within each block. The first block, "LEFT" splits the list on the left side. The "MID" section represents the block that splits the list in the middle, etc.
With this approach, mimicking a call takes two steps. First a frame is pushed onto the stack that will cause execution to resume in the block following the current one after the "call" "returns". A value in the frame indicates which if/elif/else section to fall into on the loop that follows the "call".
Then the "call" frame is pushed onto the stack. This sends execution to the first, "LEFT", block in most cases for this specific example. This is where the actual sorting is done regardless which section of the list was split to get there.
Before the looping begins, the primary frame pushed at the top of the function represents the initial call. Then on each iteration, a frame is popped. The "LEFT/MID/RIGHT" value/label from the frame is used to fall into the correct block of the if/elif/else statement. The frame is used to restore the state of the variables needed for the current operation, then on the next iteration the return frame is popped, sending execution to the subsequent section.
Return values
If the recursive function returns a value used by itself, it can be treated the same way as other variables. Just create a field in the stack frame for it. If a "callee" is returning a value, it checks the stack to see if it has any entries, and if so, updates the return value in the frame on the top of the stack. For an example, you can check this other instance of the same recursive-to-iterative conversion approach.
Conclusion
Methods like this that convert recursive functions to iterative ones are essentially also "recursive". Instead of the process stack being used for actual function calls, another programmatically implemented stack takes its place.
What is gained? Perhaps some marginal improvements in speed. Or it could serve as a way to get around stack limitations imposed by some compilers and/or execution environments (stack pointer hitting the guard page). In some cases, the amount of data pushed onto the stack can be reduced. Do the gains offset the complexity introduced in the code by mimicking something that we get automatically with the recursive implementation?
In the case of the sorting algorithm, finding a way to implement this particular one without a stack could be challenging, plus there are so many iterative sorting algorithms available that are much faster. It's been said that any recursive algorithm can be implemented iteratively. Sure... but some algorithms don't convert well without being modified to such a degree that they're no longer the same algorithm.
It may not be such a great idea to convert recursive algorithms just for the sake of converting them. Anyway, for what it's worth, the above approach is a generic way of converting that should apply to just about anything.
If you find you really need an iterative version of a recursive function that doesn't use a memory eating stack of its own, the best approach may be to scrap the code and write your own using the description from a scholarly article, or work it out on paper and then code it from scratch, or other ground up approach.

There is a general way of converting recursive traversal to iterator by using a lazy iterator which concatenates multiple iterator suppliers (lambda expression which returns an iterator). See my Converting Recursive Traversal to Iterator.

Another simple and complete example of turning a recursive function into an iterative one, using a stack.
#include <iostream>
#include <stack>
using namespace std;

int GCD(int a, int b) { return b == 0 ? a : GCD(b, a % b); }

struct Par
{
    int a, b;
    Par() : Par(0, 0) {}
    Par(int _a, int _b) : a(_a), b(_b) {}
};

int GCDIter(int a, int b)
{
    stack<Par> rcstack;

    if (b == 0)
        return a;
    rcstack.push(Par(b, a % b));

    Par p;
    while (!rcstack.empty())
    {
        p = rcstack.top();
        rcstack.pop();
        if (p.b == 0)
            continue;
        rcstack.push(Par(p.b, p.a % p.b));
    }
    return p.a;
}

int main()
{
    //cout << GCD(24, 36) << endl;
    cout << GCDIter(81, 36) << endl;
    cin.get();
    return 0;
}

My examples are in Clojure, but should be fairly easy to translate to any language.
Given this function that StackOverflows for large values of n:
(defn factorial [n]
  (if (< n 2)
    1
    (*' n (factorial (dec n)))))
we can define a version that uses its own stack in the following manner:
(defn factorial [n]
  (loop [n n
         stack []]
    (if (< n 2)
      (return 1 stack)
      ;; else loop with new values
      (recur (dec n)
             ;; push function onto stack
             (cons (fn [n-1!]
                     (*' n n-1!))
                   stack)))))
where return is defined as:
(defn return
  [v stack]
  (reduce (fn [acc f]
            (f acc))
          v
          stack))
This works for more complex functions too, for example the ackermann function:
(defn ackermann [m n]
  (cond
    (zero? m)
    (inc n)

    (zero? n)
    (recur (dec m) 1)

    :else
    (recur (dec m)
           (ackermann m (dec n)))))
can be transformed into:
(defn ackermann [m n]
  (loop [m m
         n n
         stack []]
    (cond
      (zero? m)
      (return (inc n) stack)

      (zero? n)
      (recur (dec m) 1 stack)

      :else
      (recur m
             (dec n)
             (cons #(ackermann (dec m) %)
                   stack)))))

A rough description of how a system takes any recursive function and executes it using a stack:
This is intended to show the idea without the details. Consider this function that would print out the nodes of a graph:

function show(node)
0. if isleaf(node):
1.     print node.name
2. else:
3.     show(node.left)
4.     print node.name
5.     show(node.right)

For example, the graph:
A->B
A->C
show(A) would print B, A, C.
A function call means saving the local state and the continuation point so you can come back, and then jumping to the function you want to call.
For example, suppose show(A) begins to run. The function call on line 3, show(B), means:
- Add an item to the stack meaning "you'll need to continue at line 4 with local variable state node=A".
- Go to line 0 with node=B.
To execute code, the system runs through the instructions. When a function call is encountered, the system pushes the information it needs to come back to where it was, runs the function code, and when the function completes, pops the information about where it needs to go to continue.
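A minimal C sketch of that mechanism applied to the show function above (Node, Frame, and show_iterative are hypothetical names): each stack entry saves the local state (the node) and the line to continue from.

#include <stdio.h>

typedef struct Node {
    const char *name;
    struct Node *left, *right;   /* NULL for leaves */
} Node;

typedef struct { Node *node; int line; } Frame;

void show_iterative(Node *root) {
    Frame stack[64];
    int top = 0;
    stack[top++] = (Frame){root, 0};   /* "call" show(root) */
    while (top > 0) {
        Frame f = stack[--top];
        Node *node = f.node;
        switch (f.line) {
        case 0:
            if (node->left == NULL && node->right == NULL) {   /* isleaf */
                printf("%s\n", node->name);
                break;   /* implicit return */
            }
            /* line 3, show(node.left): save "continue at line 4 with node"... */
            stack[top++] = (Frame){node, 4};
            /* ...and go to line 0 with the left child */
            stack[top++] = (Frame){node->left, 0};
            break;
        case 4:
            printf("%s\n", node->name);               /* line 4: print node.name */
            stack[top++] = (Frame){node->right, 0};   /* line 5, show(node.right) */
            break;
        }
    }
}

int main(void) {
    Node b = {"B", NULL, NULL}, c = {"C", NULL, NULL};
    Node a = {"A", &b, &c};
    show_iterative(&a);   /* prints B, A, C */
    return 0;
}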

This link provides some explanation and proposes the idea of keeping a "location" to be able to get to the exact place between several recursive calls:
However, all these examples describe scenarios in which a recursive call is made a fixed number of times. Things get trickier when you have something like:

function rec(...) {
    for/while loop {
        var x = rec(...)
        // make a side effect involving return value x
    }
}
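One hedged sketch of how that loop case can still be handled, in C: the saved frame must also include the loop counter, so the simulated function can resume mid-loop after each "call" (the n-ary tree and every name here are mine, for illustration only):

#include <stdio.h>

#define MAXKIDS 4
#define MAXDEPTH 64

typedef struct Node {
    int value;
    int nkids;
    struct Node *kids[MAXKIDS];
} Node;

/* The frame saves the loop counter i so execution resumes mid-loop. */
typedef struct { Node *node; int i; int total; } Frame;

int sum_tree(Node *root) {
    Frame stack[MAXDEPTH];
    int top = 0;
    stack[top++] = (Frame){root, 0, root->value};
    int retval = 0;
    while (top > 0) {
        Frame *f = &stack[top - 1];
        if (f->i > 0)
            f->total += retval;   /* side effect involving return value x */
        if (f->i < f->node->nkids) {
            Node *child = f->node->kids[f->i++];
            stack[top++] = (Frame){child, 0, child->value};   /* "call" rec(child) */
        } else {
            retval = f->total;    /* "return total" */
            top--;
        }
    }
    return retval;
}

int main(void) {
    Node leaf1 = {1, 0, {0}}, leaf2 = {2, 0, {0}}, leaf3 = {3, 0, {0}};
    Node mid = {10, 2, {&leaf1, &leaf2}};
    Node root = {100, 2, {&mid, &leaf3}};
    printf("%d\n", sum_tree(&root));   /* 100 + 10 + 1 + 2 + 3 = 116 */
    return 0;
}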

This is an old question, but I want to add a different aspect as a solution. I'm currently working on a project in which I used the flood fill algorithm, in C#. Normally, I implemented this algorithm with recursion at first, but obviously it caused a stack overflow. After that, I changed the method from recursion to iteration. Yes, it worked and I was no longer getting the stack overflow error. But this time, since I applied the flood fill method to very large structures, the program was going into an infinite loop. For this reason, it occurred to me that the function may have re-entered places it had already visited. As a definitive solution to this, I decided to use a dictionary of visited points. When a node (x, y) is added to the stack structure for the first time, that node is saved in the dictionary as a key. Even if the same node is reached again later, it won't be added to the stack structure, because the node is already in the dictionary. Let's see it in pseudocode:
startNode = pos(x, y)
Stack stack = new Stack();
Dictionary<pos, bool> visited = new Dictionary<pos, bool>();
stack.Push(startNode);
while (stack.Count != 0) {
    currentNode = stack.Pop();
    if "currentNode is not available"
        continue;
    if "currentNode has already been handled"
        continue;
    else if "currentNode is something that should be handled" {
        // do something with pos currentNode.X and currentNode.Y,
        // then add its neighbour nodes to the stack to iterate,
        // but first check whether each has already been visited
        if (!visited.Contains(pos(x-1, y))) {
            visited[pos(x-1, y)] = true;
            stack.Push(pos(x-1, y));
        }
        if (!visited.Contains(pos(x+1, y)))
            ...
        if (!visited.Contains(pos(x, y+1)))
            ...
        if (!visited.Contains(pos(x, y-1)))
            ...
    }
}
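Here is the same idea as a runnable C sketch (a small character grid stands in for the "very large structures"; all names are mine): the visited table is consulted before pushing, so no cell enters the stack twice and the loop must terminate.

#include <stdio.h>
#include <stdbool.h>

#define W 8
#define H 8

typedef struct { int x, y; } Pos;

void flood_fill(char grid[H][W], int x, int y, char fill) {
    char target = grid[y][x];
    if (target == fill)
        return;
    bool visited[H][W] = {{false}};
    Pos stack[W * H];   /* each cell is pushed at most once */
    int top = 0;
    stack[top++] = (Pos){x, y};
    visited[y][x] = true;
    while (top > 0) {
        Pos p = stack[--top];
        if (grid[p.y][p.x] != target)
            continue;   /* not part of the region */
        grid[p.y][p.x] = fill;
        Pos nbrs[4] = {{p.x - 1, p.y}, {p.x + 1, p.y},
                       {p.x, p.y - 1}, {p.x, p.y + 1}};
        for (int i = 0; i < 4; i++) {
            Pos q = nbrs[i];
            if (q.x >= 0 && q.x < W && q.y >= 0 && q.y < H
                    && !visited[q.y][q.x]) {
                visited[q.y][q.x] = true;   /* mark before pushing */
                stack[top++] = q;
            }
        }
    }
}

int main(void) {
    char grid[H][W];
    for (int r = 0; r < H; r++)
        for (int c = 0; c < W; c++)
            grid[r][c] = '.';
    flood_fill(grid, 3, 3, '#');   /* fills the whole connected '.' region */
    printf("%c\n", grid[0][0]);    /* now '#' */
    return 0;
}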

Related

find nth element from the last in a list using recursion

We can solve this problem in, say, C, using a static variable, as in the snippet below (like the function found on that page).
static int i = 0;
if (head == NULL)
    return;
getNthFromLast(head->next, n);
if (++i == n) {
    THIS IS THE NTH ELEM FROM LAST
    DO STUFF WITH IT.
}
I'm trying to see if I can solve this using tail call recursion, and NO static/global variables.
I'm trying to learn Haskell and was wondering how to implement this in a purely functional way, without using Haskell's length and !!, something like x !! ((length x) - K).
So I started by asking how we can do it in C, with recursion and NO static/global variables, just to get some idea.
Any suggestions/pointers would be appreciated.
Thanks.
The linked page explains how to solve the problem with a two-finger solution; it's a bit surprising that they don't simply write the recursive version, which would be simple and clear. (In my experience, you don't succeed at interviews by providing tricky and obscure code when there is a simple and clear version which is equally efficient. But I suppose there are interviewers who value obscure code.)
So, the two-finger solution is based on the observation that if we have two pointers into the list (the two fingers), and they are always n elements apart because we always advance them in tandem, then when the leading finger reaches the end of the list, the trailing finger will point at the element we want. That's an easy tail recursion:
Node* tandem_advance(Node* leading, Node* trailing) {
    return leading ? tandem_advance(leading->next, trailing->next)
                   : trailing;
}
For the initial case, we need the leading finger to be N elements from the beginning of the list. Another simple tail recursion:
Node* advance_n(Node* head, int n) {
    return n ? advance_n(head->next, n - 1)
             : head;
}
Then we just need to put the two together:
Node* nth_from_end(Node* head, int n) {
    return tandem_advance(advance_n(head, n + 1), head);
}
(We initially advance by n+1 so that the 0th node from the end will be the last node. I didn't check the desired semantics; it might be that n would be correct instead.)
In Haskell, the two-finger solution seems to be the obvious way. This version will go wrong in various ways if the requested element isn't present. I'll leave it as an exercise for you to fix that (hint: write versions of drop and last that return Maybe values, and then string the computations together with >>=). Note that this takes the last element of the list to be 0th from the end.
nthFromLast :: Int -> [a] -> a
nthFromLast n xs = last $ zipWith const xs (drop n xs)
If you want to do some of the recursion by hand, which No_signal indicates gives better performance,
-- The a and b types are different to make it clear
-- which list we get the value from.
lzc :: [a] -> [b] -> a
lzc [] _ = error "Empty list"
lzc (x : _xs) [] = x
lzc (_x : xs) (_y : ys) = lzc xs ys
nthFromLast n xs = lzc xs (drop n xs)
We don't bother writing out drop by hand because the rather simple-minded version in the library is about the best possible. Unlike either the first solution in this answer or the "reverse, then index" approach, the implementation using lzc only needs to allocate a constant amount of memory.
I assume your code is missing
getNthFromLast(list *node_ptr, int n) {
right at the top. (!!)
Your recursive version keeps track of node_ptrs in its call stack frames, so it is essentially non-tail-recursive. Moreover, it continues to unwind the stack (go back up the stack of call frames), incrementing i and still checking its equality to n, even after it has found its goal nth node from the last; so it is not efficient.
Better is an iterative version, indeed encodable as tail-recursive, that does things on the way forward and so can stop immediately after reaching its target. For that, we open up the n-length gap from the start, not after the end is reached. Instead of counting backwards as your recursive version does, we count forward. This is the same two-pointers approach already mentioned here.
In pseudocode,
end = head;
repeat n: (end = end->next);
return tailrec(head, end)->payload;

where

tailrec(p, q) {
    if (NULL == q): return p;
    else: return tailrec(p->next, q->next);
}
This is 1-based, assuming n <= length(head). Haskell code is already given in another answer.
This technique is known as tail recursion modulo cons, or here, modulo payload-access.
nthFromLast lst n = reverse lst !! n
Because Haskell is lazy, this should be sufficiently efficient
If you don't want to use !!, you'll have to redefine it yourself, but this is silly.
The typical iterative strategy uses two pointers and runs one to n - 1 before starting to move the other.
With recursion, we can instead use the stack to count back up from the end of the list by adding a third argument. And to keep the usage clean, we can make a static helper function (in this sense static means only visible within the compilation unit, a totally different concept from a static variable with function scope).
static node *nth_last_helper(node* curr, unsigned int n, unsigned int *f) {
    node *t;
    if (!curr) {
        *f = 1;
        return NULL;
    }
    t = nth_last_helper(curr->next, n, f);
    if (n == (*f)++) return curr;
    return t;
}

node* nth_last(node* curr, unsigned int n) {
    unsigned int finder = 0;
    return nth_last_helper(curr, n, &finder);
}
Alternatively we could actually do the counting without the extra parameter, but I think this is less clear.
static node *nth_last_helper(node* curr, unsigned int *n) {
    node *t;
    if (!curr) return NULL;
    t = nth_last_helper(curr->next, n);
    if (t) return t;
    if (1 == (*n)--) return curr;
    return NULL;
}

node* nth_last(node* curr, unsigned int n) {
    return nth_last_helper(curr, &n);
}
Note that I have used unsigned integers to avoid choosing semantics for the "negative nth last" value in a list.
However, neither of these are tail recursive. To achieve that, you can more directly convert the iterative solution into a recursive one, as in rici's answer.

What is the difference between recursion and tail-recursion? [duplicate]

Whilst starting to learn lisp, I've come across the term tail-recursive. What does it mean exactly?
Consider a simple function that adds the first N natural numbers. (e.g. sum(5) = 0 + 1 + 2 + 3 + 4 + 5 = 15).
Here is a simple JavaScript implementation that uses recursion:
function recsum(x) {
    if (x === 0) {
        return 0;
    } else {
        return x + recsum(x - 1);
    }
}
If you called recsum(5), this is what the JavaScript interpreter would evaluate:
recsum(5)
5 + recsum(4)
5 + (4 + recsum(3))
5 + (4 + (3 + recsum(2)))
5 + (4 + (3 + (2 + recsum(1))))
5 + (4 + (3 + (2 + (1 + recsum(0)))))
5 + (4 + (3 + (2 + (1 + 0))))
5 + (4 + (3 + (2 + 1)))
5 + (4 + (3 + 3))
5 + (4 + 6)
5 + 10
15
Note how every recursive call has to complete before the JavaScript interpreter begins to actually do the work of calculating the sum.
Here's a tail-recursive version of the same function:
function tailrecsum(x, running_total = 0) {
    if (x === 0) {
        return running_total;
    } else {
        return tailrecsum(x - 1, running_total + x);
    }
}
Here's the sequence of events that would occur if you called tailrecsum(5), (which would effectively be tailrecsum(5, 0), because of the default second argument).
tailrecsum(5, 0)
tailrecsum(4, 5)
tailrecsum(3, 9)
tailrecsum(2, 12)
tailrecsum(1, 14)
tailrecsum(0, 15)
15
In the tail-recursive case, with each evaluation of the recursive call, the running_total is updated.
Note: The original answer used examples from Python. These have been changed to JavaScript, since Python interpreters don't support tail call optimization. However, while tail call optimization is part of the ECMAScript 2015 spec, most JavaScript interpreters don't support it.
In traditional recursion, the typical model is that you perform your recursive calls first, and then you take the return value of the recursive call and calculate the result. In this manner, you don't get the result of your calculation until you have returned from every recursive call.
In tail recursion, you perform your calculations first, and then you execute the recursive call, passing the results of your current step to the next recursive step. This results in the last statement being in the form of (return (recursive-function params)). Basically, the return value of any given recursive step is the same as the return value of the next recursive call.
The consequence of this is that once you are ready to perform your next recursive step, you don't need the current stack frame any more. This allows for some optimization. In fact, with an appropriately written compiler, you should never have a stack overflow (snicker) with a tail recursive call: simply reuse the current stack frame for the next recursive step. I'm pretty sure Lisp does this.
An important point is that tail recursion is essentially equivalent to looping. It's not just a matter of compiler optimization, but a fundamental fact about expressiveness. This goes both ways: you can take any loop of the form
while(E) { S }; return Q
where E and Q are expressions and S is a sequence of statements, and turn it into a tail recursive function
f() = if E then { S; return f() } else { return Q }
Of course, E, S, and Q have to be defined to compute some interesting value over some variables. For example, the looping function
sum(n) {
    int i = 1, k = 0;
    while (i <= n) {
        k += i;
        ++i;
    }
    return k;
}
is equivalent to the tail-recursive function(s)
sum_aux(n, i, k) {
    if (i <= n) {
        return sum_aux(n, i+1, k+i);
    } else {
        return k;
    }
}

sum(n) {
    return sum_aux(n, 1, 0);
}
(This "wrapping" of the tail-recursive function with a function with fewer parameters is a common functional idiom.)
This excerpt from the book Programming in Lua shows how to make a proper tail recursion (in Lua, but should apply to Lisp too) and why it's better.
A tail call [tail recursion] is a kind of goto dressed
as a call. A tail call happens when a
function calls another as its last
action, so it has nothing else to do.
For instance, in the following code,
the call to g is a tail call:
function f (x)
return g(x)
end
After f calls g, it has nothing else
to do. In such situations, the program
does not need to return to the calling
function when the called function
ends. Therefore, after the tail call,
the program does not need to keep any
information about the calling function
in the stack. ...
Because a proper tail call uses no
stack space, there is no limit on the
number of "nested" tail calls that a
program can make. For instance, we can
call the following function with any
number as argument; it will never
overflow the stack:
function foo (n)
if n > 0 then return foo(n - 1) end
end
... As I said earlier, a tail call is a
kind of goto. As such, a quite useful
application of proper tail calls in
Lua is for programming state machines.
Such applications can represent each
state by a function; to change state
is to go to (or to call) a specific
function. As an example, let us
consider a simple maze game. The maze
has several rooms, each with up to
four doors: north, south, east, and
west. At each step, the user enters a
movement direction. If there is a door
in that direction, the user goes to
the corresponding room; otherwise, the
program prints a warning. The goal is
to go from an initial room to a final
room.
This game is a typical state machine,
where the current room is the state.
We can implement such maze with one
function for each room. We use tail
calls to move from one room to
another. A small maze with four rooms
could look like this:
function room1 ()
    local move = io.read()
    if move == "south" then return room3()
    elseif move == "east" then return room2()
    else print("invalid move")
        return room1() -- stay in the same room
    end
end

function room2 ()
    local move = io.read()
    if move == "south" then return room4()
    elseif move == "west" then return room1()
    else print("invalid move")
        return room2()
    end
end

function room3 ()
    local move = io.read()
    if move == "north" then return room1()
    elseif move == "east" then return room4()
    else print("invalid move")
        return room3()
    end
end

function room4 ()
    print("congratulations!")
end
So you see, when you make a recursive call like:

function x(n)
    if n == 0 then return 0 end
    n = n - 2
    return x(n) + 1
end

this is not tail recursive, because you still have things to do (add 1) in that function after the recursive call is made. If you input a very high number, it will probably cause a stack overflow.
Using regular recursion, each recursive call pushes another entry onto the call stack. When the recursion is completed, the app then has to pop each entry off all the way back down.
With tail recursion, depending on the language, the compiler may be able to collapse the stack down to one entry, so you save stack space... a large recursive query can otherwise actually cause a stack overflow.
Basically, tail recursions are able to be optimized into iteration.
The jargon file has this to say about the definition of tail recursion:
tail recursion /n./
If you aren't sick of it already, see tail recursion.
Instead of explaining it with words, here's an example. This is a Scheme version of the factorial function:
(define (factorial x)
  (if (= x 0) 1
      (* x (factorial (- x 1)))))
Here is a version of factorial that is tail-recursive:
(define factorial
  (letrec ((fact (lambda (x accum)
                   (if (= x 0) accum
                       (fact (- x 1) (* accum x))))))
    (lambda (x)
      (fact x 1))))
You will notice in the first version that the recursive call to fact is fed into the multiplication expression, and therefore the state has to be saved on the stack when making the recursive call. In the tail-recursive version there is no other S-expression waiting for the value of the recursive call, and since there is no further work to do, the state doesn't have to be saved on the stack. As a rule, Scheme tail-recursive functions use constant stack space.
Tail recursion refers to the recursive call being the last logic instruction in the recursive algorithm.
Typically in recursion, you have a base-case which is what stops the recursive calls and begins popping the call stack. To use a classic example, though more C-ish than Lisp, the factorial function illustrates tail recursion. The recursive call occurs after checking the base-case condition.
factorial(x, fac=1) {
    if (x == 1)
        return fac;
    else
        return factorial(x-1, x*fac);
}
The initial call to factorial would be factorial(n) where fac=1 (default value) and n is the number for which the factorial is to be calculated.
It means that rather than needing to push the instruction pointer on the stack, you can simply jump to the top of a recursive function and continue execution. This allows for functions to recurse indefinitely without overflowing the stack.
I wrote a blog post on the subject, which has graphical examples of what the stack frames look like.
The best way for me to understand tail call recursion is as a special case of recursion where the last call (or the tail call) is the function itself.
Comparing the examples provided in Python:
def recsum(x):
    if x == 1:
        return x
    else:
        return x + recsum(x - 1)  # <- RECURSION

def tailrecsum(x, running_total=0):
    if x == 0:
        return running_total
    else:
        return tailrecsum(x - 1, running_total + x)  # <- TAIL RECURSION
As you can see in the general recursive version, the final call in the code block is x + recsum(x - 1). So after calling the recsum method, there is another operation, x + ...
However, in the tail recursive version, the final call (or the tail call) in the code block is tailrecsum(x - 1, running_total + x), which means the last call is made to the method itself with no operation after it.
This point is important because tail recursion, as seen here, does not make the memory grow: when the underlying VM sees a function calling itself in a tail position (the last expression to be evaluated in a function), it eliminates the current stack frame. This is known as Tail Call Optimization (TCO).
EDIT
NB: Do bear in mind that the example above is written in Python, whose runtime does not support TCO. This is just an example to explain the point. TCO is supported in languages like Scheme, Haskell, etc.
Here is a quick code snippet comparing two functions. The first is traditional recursion for finding the factorial of a given number. The second uses tail recursion.
Very simple and intuitive to understand.
An easy way to tell if a recursive function is tail recursive is to look at what it returns in the base case: rather than a literal like 1 or true, it will more than likely return some variant of one of the method parameters.
Another way to tell is if the recursive call is free of any addition, arithmetic, modification, etc., meaning it's nothing but a pure recursive call.
public static int factorial(int mynumber) {
    if (mynumber == 1) {
        return 1;
    } else {
        return mynumber * factorial(--mynumber);
    }
}

public static int tail_factorial(int mynumber, int sofar) {
    if (mynumber == 1) {
        return sofar;
    } else {
        return tail_factorial(--mynumber, sofar * mynumber);
    }
}
A recursive function is a function that calls itself.
It allows programmers to write efficient programs using a minimal amount of code.
The downside is that it can cause infinite loops and other unexpected results if not written properly.
I will explain both the simple recursive function and the tail recursive function.
In order to write a simple recursive function:
The first point to consider is when to decide on coming out of the loop, which is the if condition.
The second is what processing to do when the function calls itself.
From the given example:
public static int fact(int n){
    if(n <= 1)
        return 1;
    else
        return n * fact(n-1);
}
From the above example,

if(n <= 1)
    return 1;

is the deciding factor for when to exit the loop, while

else
    return n * fact(n-1);

is the actual processing to be done.
Let me break the task down step by step for easy understanding.
Let us see what happens internally if I run fact(4).
Substituting n=4:

public static int fact(4){
    if(4 <= 1)
        return 1;
    else
        return 4 * fact(4-1);
}

The if condition fails, so it goes to the else branch
and returns 4 * fact(3).
In stack memory, we have 4 * fact(3).
Substituting n=3:

public static int fact(3){
    if(3 <= 1)
        return 1;
    else
        return 3 * fact(3-1);
}

The if condition fails, so it goes to the else branch
and returns 3 * fact(2).
Remember, we called 4 * fact(3), and
the output of fact(3) is 3 * fact(2).
So far the stack has 4 * fact(3) = 4 * 3 * fact(2).
In stack memory, we have 4 * 3 * fact(2).
Substituting n=2:

public static int fact(2){
    if(2 <= 1)
        return 1;
    else
        return 2 * fact(2-1);
}

The if condition fails, so it goes to the else branch
and returns 2 * fact(1).
Remember, we called 4 * 3 * fact(2), and
the output of fact(2) is 2 * fact(1).
So far the stack has 4 * 3 * fact(2) = 4 * 3 * 2 * fact(1).
In stack memory, we have 4 * 3 * 2 * fact(1).
Substituting n=1:

public static int fact(1){
    if(1 <= 1)
        return 1;
    else
        return 1 * fact(1-1);
}

The if condition is true,
so it returns 1.
Remember, we called 4 * 3 * 2 * fact(1), and
the output of fact(1) is 1.
So far the stack has 4 * 3 * 2 * fact(1) = 4 * 3 * 2 * 1.
Finally, the result of fact(4) = 4 * 3 * 2 * 1 = 24.
The tail recursive version would be:

public static int fact(x, running_total=1) {
    if (x == 1) {
        return running_total;
    } else {
        return fact(x-1, running_total*x);
    }
}
Substituting n=4:

public static int fact(4, running_total=1) {
    if (x == 1) {
        return running_total;
    } else {
        return fact(4-1, running_total*4);
    }
}

The if condition fails, so it goes to the else branch
and returns fact(3, 4).
In stack memory, we have fact(3, 4).
Substituting n=3:

public static int fact(3, running_total=4) {
    if (x == 1) {
        return running_total;
    } else {
        return fact(3-1, 4*3);
    }
}

The if condition fails, so it goes to the else branch
and returns fact(2, 12).
In stack memory, we have fact(2, 12).
Substituting n=2:

public static int fact(2, running_total=12) {
    if (x == 1) {
        return running_total;
    } else {
        return fact(2-1, 12*2);
    }
}

The if condition fails, so it goes to the else branch
and returns fact(1, 24).
In stack memory, we have fact(1, 24).
Substituting n=1:

public static int fact(1, running_total=24) {
    if (x == 1) {
        return running_total;
    } else {
        return fact(1-1, 24*1);
    }
}

The if condition is true,
so it returns running_total.
The output is running_total = 24.
Finally, the result of fact(4, 1) = 24.
In Java, here's a possible tail recursive implementation of the Fibonacci function:

public int tailRecursive(final int n) {
    if (n <= 2)
        return 1;
    return tailRecursiveAux(n, 3, 1, 1);
}

private int tailRecursiveAux(int n, int iter, int prev, int curr) {
    if (iter == n)
        return prev + curr;
    return tailRecursiveAux(n, iter + 1, curr, prev + curr);
}
Contrast this with the standard recursive implementation:
public int recursive(final int n) {
    if (n <= 2)
        return 1;
    return recursive(n - 1) + recursive(n - 2);
}
I'm not a Lisp programmer, but I think this will help.
Basically it's a style of programming such that the recursive call is the last thing you do.
Here is a Common Lisp example that does factorials using tail-recursion. Due to the stack-less nature, one could perform insanely large factorial computations ...
(defun ! (n &optional (product 1))
  (if (zerop n) product
      (! (1- n) (* product n))))
And then for fun you could try (format nil "~R" (! 25))
A tail recursive function is a recursive function where the last operation it does before returning is make the recursive function call. That is, the return value of the recursive function call is immediately returned. For example, your code would look like this:
def recursiveFunction(some_params):
    # some code here
    return recursiveFunction(some_args)
    # no code after the return statement
Compilers and interpreters that implement tail call optimization or tail call elimination can optimize recursive code to prevent stack overflows. If your compiler or interpreter doesn't implement tail call optimization (such as the CPython interpreter) there is no additional benefit to writing your code this way.
For example, this is a standard recursive factorial function in Python:
def factorial(number):
    if number == 1:
        # BASE CASE
        return 1
    else:
        # RECURSIVE CASE
        # Note that `number *` happens *after* the recursive call.
        # This means that this is *not* tail call recursion.
        return number * factorial(number - 1)
And this is a tail call recursive version of the factorial function:
def factorial(number, accumulator=1):
    if number == 0:
        # BASE CASE
        return accumulator
    else:
        # RECURSIVE CASE
        # There's no code after the recursive call.
        # This is tail call recursion:
        return factorial(number - 1, number * accumulator)

print(factorial(5))
(Note that even though this is Python code, the CPython interpreter doesn't do tail call optimization, so arranging your code like this confers no runtime benefit.)
You may have to make your code a bit more unreadable to make use of tail call optimization, as shown in the factorial example. (For example, the base case is now a bit unintuitive, and the accumulator parameter is effectively used as a sort of global variable.)
But the benefit of tail call optimization is that it prevents stack overflow errors. (I'll note that you can get this same benefit by using an iterative algorithm instead of a recursive one.)
Stack overflows are caused when the call stack has had too many frame objects pushed onto. A frame object is pushed onto the call stack when a function is called, and popped off the call stack when the function returns. Frame objects contain info such as local variables and what line of code to return to when the function returns.
If your recursive function makes too many recursive calls without returning, the call stack can exceed its frame object limit. (The number varies by platform; in Python it is 1000 frame objects by default.) This causes a stack overflow error. (Hey, that's where the name of this website comes from!)
However, if the last thing your recursive function does is make the recursive call and return its return value, then there's no reason the current frame object needs to stay on the call stack. After all, if there's no code after the recursive function call, there's no reason to hang on to the current frame object's local variables. So we can get rid of the current frame object immediately rather than keep it on the call stack. The end result is that your call stack doesn't grow in size and thus cannot stack overflow.
A compiler or interpreter must have tail call optimization as a feature to be able to recognize when it can be applied. Even then, you may have to rearrange the code in your recursive function to make use of tail call optimization, and it's up to you whether this potential decrease in readability is worth the optimization.
In short, a tail recursion has the recursive call as the last statement in the function, so that it doesn't have to wait for the recursive call.
So this is a tail recursion, i.e. N(x - 1, p * x) is the last statement in the function, and the compiler is clever enough to figure out that it can be optimised into a for-loop (factorial). The second parameter p carries the intermediate product value.

function N(x, p) {
    return x == 1 ? p : N(x - 1, p * x);
}

This is the non-tail-recursive way of writing the above factorial function (although some C++ compilers may be able to optimise it anyway):

function N(x) {
    return x == 1 ? 1 : x * N(x - 1);
}

and neither is this:

function F(x) {
    if (x == 1) return 0;
    if (x == 2) return 1;
    return F(x - 1) + F(x - 2);
}
I did write a long post titled "Understanding Tail Recursion – Visual Studio C++ – Assembly View"
A tail recursion is a recursive function where the function calls
itself at the end ("tail") of the function in which no computation is
done after the return of recursive call. Many compilers optimize to
change a recursive call to a tail recursive or an iterative call.
Consider the problem of computing factorial of a number.
A straightforward approach would be:
factorial(n):
    if n == 0 then 1
    else n * factorial(n-1)
Suppose you call factorial(4). The recursion tree would be:
factorial(4)
 /    \
4    factorial(3)
      /    \
     3    factorial(2)
           /    \
          2    factorial(1)
                /    \
               1    factorial(0)
                        |
                        1
The maximum recursion depth in the above case is O(n).
However, consider the following example:
factAux(m, n):
    if n == 0 then m;
    else factAux(m*n, n-1);

factTail(n):
    return factAux(1, n);
Recursion tree for factTail(4) would be:
factTail(4)
|
factAux(1,4)
|
factAux(4,3)
|
factAux(12,2)
|
factAux(24,1)
|
factAux(24,0)
|
24
Here also, maximum recursion depth is O(n) but none of the calls adds any extra variable to the stack. Hence the compiler can do away with a stack.
Here is a Perl 5 version of the tailrecsum function mentioned earlier.

sub tail_rec_sum($;$){
    my( $x, $running_total ) = (@_, 0);

    return $running_total unless $x;

    @_ = ($x - 1, $running_total + $x);
    goto &tail_rec_sum;   # throw away current stack frame
}
This is an excerpt from Structure and Interpretation of Computer Programs about tail recursion.
In contrasting iteration and recursion, we must be careful not to
confuse the notion of a recursive process with the notion of a
recursive procedure. When we describe a procedure as recursive, we are
referring to the syntactic fact that the procedure definition refers
(either directly or indirectly) to the procedure itself. But when we
describe a process as following a pattern that is, say, linearly
recursive, we are speaking about how the process evolves, not about
the syntax of how a procedure is written. It may seem disturbing that
we refer to a recursive procedure such as fact-iter as generating an
iterative process. However, the process really is iterative: Its state
is captured completely by its three state variables, and an
interpreter need keep track of only three variables in order to
execute the process.
One reason that the distinction between process and procedure may be
confusing is that most implementations of common languages (including Ada, Pascal, and
C) are designed in such a way that the interpretation of any recursive
procedure consumes an amount of memory that grows with the number of
procedure calls, even when the process described is, in principle,
iterative. As a consequence, these languages can describe iterative
processes only by resorting to special-purpose “looping constructs”
such as do, repeat, until, for, and while. The implementation of
Scheme does not share this defect. It
will execute an iterative process in constant space, even if the
iterative process is described by a recursive procedure. An
implementation with this property is called tail-recursive. With a
tail-recursive implementation, iteration can be expressed using the
ordinary procedure call mechanism, so that special iteration
constructs are useful only as syntactic sugar.
Tail recursion is the life you are living right now. You constantly recycle the same stack frame, over and over, because there's no reason or means to return to a "previous" frame. The past is over and done with so it can be discarded. You get one frame, forever moving into the future, until your process inevitably dies.
The analogy breaks down when you consider some processes might utilize additional frames but are still considered tail-recursive if the stack does not grow infinitely.
Tail recursion is pretty fast compared to normal recursion. It is fast because the ancestor calls' output is not written onto the stack to keep track of it, whereas in normal recursion all the ancestor calls' output is written onto the stack.
To understand some of the core differences between tail-call recursion and non-tail-call recursion we can explore the .NET implementations of these techniques.
Here is an article with some examples in C#, F#, and C++\CLI: Adventures in Tail Recursion in C#, F#, and C++\CLI.
C# does not optimize for tail-call recursion whereas F# does.
The differences of principle involve loops vs. Lambda calculus. C# is designed with loops in mind whereas F# is built from the principles of Lambda calculus. For a very good (and free) book on the principles of Lambda calculus, see Structure and Interpretation of Computer Programs, by Abelson, Sussman, and Sussman.
Regarding tail calls in F#, for a very good introductory article, see Detailed Introduction to Tail Calls in F#. Finally, here is an article that covers the difference between non-tail recursion and tail-call recursion (in F#): Tail-recursion vs. non-tail recursion in F sharp.
If you want to read about some of the design differences of tail-call recursion between C# and F#, see Generating Tail-Call Opcode in C# and F#.
If you care enough to want to know what conditions prevent the C# compiler from performing tail-call optimizations, see this article: JIT CLR tail-call conditions.
Recursion means a function calling itself. For example:
(define (un-ended name)
  (un-ended 'me)
  (print "How can I get here?"))
Tail recursion means a recursive call that concludes the function:
(define (un-ended name)
  (print "hello")
  (un-ended 'me))
See, the last thing the un-ended function (procedure, in Scheme jargon) does is call itself. Another (more useful) example is:
(define (map lst op)
  (define (helper done left)
    (if (null? left)
        done
        (helper (cons (op (car left)) done)
                (cdr left))))
  (reverse (helper '() lst)))
In the helper procedure, the LAST thing it does, if left is not null, is to call itself (AFTER consing something onto done and taking the cdr of left). This is basically how you map a list.
Tail recursion has the great advantage that the interpreter (or compiler, depending on the language and vendor) can optimize it, transforming it into something equivalent to a while loop. As a matter of fact, in the Scheme tradition, most "for" and "while" looping is done in a tail-recursive manner (there are no for and while loops, as far as I know).
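To see what that optimization buys you, here is a rough C picture (the cell type and cons allocator are my assumptions, not from the answer) of the loop a tail-call-optimizing compiler can turn helper into - the two arguments of the recursive call, done and left, simply become loop variables:
#include <stdlib.h>

typedef struct cell { int car; struct cell *cdr; } cell;

static cell *cons(int car, cell *cdr)
{
    cell *c = malloc(sizeof *c);
    c->car = car;
    c->cdr = cdr;
    return c;
}

cell *map_helper(cell *left, int (*op)(int))
{
    cell *done = NULL;
    while (left != NULL) {                /* (if (null? left) done ...) */
        done = cons(op(left->car), done); /* build the result, reversed */
        left = left->cdr;                 /* step to (cdr left)         */
    }
    return done;                          /* caller reverses, as above  */
}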
A tail recursive function is a recursive function in which the recursive call is the last thing executed in the function.
With a regular recursive function, every recursive invocation adds another frame to our call stack, so normal recursion uses O(n) space. Tail recursion brings the space complexity down from O(n) to O(1).
Tail call optimization means that it is possible to call a function from another function without growing the call stack.
We should write tail recursion in recursive solutions, but certain language engines do not actually support it. Tail calls have been in the ECMAScript specification since ES6, but hardly any of the engines that run JavaScript have implemented them, so you won't achieve O(1) in JS; as of January 1, 2020, Safari is the only browser that supports tail call optimization.
Haskell, for example, does optimize tail calls (the JVM, and therefore Java, does not guarantee this).
Regular Recursive Factorial
function Factorial(x) {
    // Base case: x <= 1
    if (x <= 1) {
        return 1;
    } else {
        // x is waiting for the return value of Factorial(x - 1):
        // the recursive call is NOT the last thing we do, because
        // after it returns we still have to multiply.
        return x * Factorial(x - 1);
    }
}
We have 4 calls in our call stack:
Factorial(4); // waiting in the memory for Factorial(3)
4 * Factorial(3); // waiting in the memory for Factorial(2)
4 * (3 * Factorial(2)); // waiting in the memory for Factorial(1)
4 * (3 * (2 * Factorial(1)));
4 * (3 * (2 * 1));
We are making 4 Factorial() calls, so the space is O(n). This might cause a stack overflow.
Tail Recursive Factorial
function tailFactorial(x, totalSoFar = 1) {
    // Base case: x === 0. Every recursion must have a base case,
    // otherwise it will never stop.
    if (x === 0) {
        return totalSoFar;
    } else {
        // Nothing is waiting for tailFactorial() to complete: we return
        // another call to tailFactorial() directly, doing no additional
        // computation with what we get back from the recursive call.
        return tailFactorial(x - 1, totalSoFar * x);
    }
}
We don't need to remember anything after we make our recursive call.
There are two basic kinds of recursion: head recursion and tail recursion.
In head recursion, a function makes its recursive call and then
performs some more calculations, maybe using the result of the
recursive call, for example.
In a tail recursive function, all calculations happen first and
the recursive call is the last thing that happens.
Taken from this super awesome post.
Please consider reading it.
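A tiny C illustration of the two shapes (my own example, not from the cited post) - the observable difference is whether the work happens before or after the recursive call:
#include <stdio.h>

void headPrint(int n)        /* head recursion: prints 1 2 3 */
{
    if (n == 0)
        return;
    headPrint(n - 1);        /* recurse first...             */
    printf("%d ", n);        /* ...then do the work          */
}

void tailPrint(int n)        /* tail recursion: prints 3 2 1 */
{
    if (n == 0)
        return;
    printf("%d ", n);        /* do the work first...         */
    tailPrint(n - 1);        /* ...the call is the last act  */
}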
A function is tail recursive if each recursive case consists only of a call to the function itself, possibly with different arguments. Or, tail recursion is recursion with no pending work. Note that this is a programming-language independent concept.
Consider the function defined as:
g(a, b, n) = a * b^n
A possible tail-recursive formulation is:
g(a, b, n) | n is zero = a
           | n is odd  = g(a*b, b, n-1)
           | otherwise = g(a, b*b, n/2)
If you examine each RHS of g(...) that involves a recursive case, you'll find that the whole body of the RHS is a call to g(...), and only that. This definition is tail recursive.
For comparison, a non-tail-recursive formulation might be:
g'(a, b, n) = a * f(b, n)
f(b, n) | n is zero = 1
        | n is odd  = f(b, n-1) * b
        | otherwise = f(b, n/2) ^ 2
Each recursive case in f(...) has some pending work that needs to happen after the recursive call.
Note that when we went from g' to g, we made essential use of associativity
(and commutativity) of multiplication. This is not an accident, and most cases where you will need to transform recursion to tail-recursion will make use of such properties: if we want to eagerly do some work rather than leave it pending, we have to use something like associativity to prove that the answer will be the same.
Tail-recursive calls can be implemented with a backwards jump, as opposed to using a stack the way normal recursive calls do. Detecting a tail call and emitting a backwards jump is usually straightforward; however, it is often hard to rearrange the arguments so that the backwards jump is possible. Since this optimization is not free, language implementations can choose not to implement it, or to require opt-in by marking recursive calls with a 'tailcall' instruction and/or by choosing a higher optimization setting.
Some languages (e.g. Scheme) do, however, require all implementations to optimize tail-recursive functions, maybe even all calls in tail position.
Backwards jumps are usually abstracted as a (while) loop in most imperative languages, and tail-recursion, when optimized to a backwards jump, is isomorphic to looping.
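As a concrete illustration of that isomorphism, here is g from above hand-compiled into a C loop (my own sketch, not from the answer): each recursive case becomes an assignment to the parameters followed by a backwards jump to the top of the while.
long g(long a, long b, unsigned long n)
{
    while (n != 0) {     /* the backwards-jump target       */
        if (n % 2 == 1) {
            a *= b;      /* n is odd:  g(a*b, b, n-1)       */
            n -= 1;
        } else {
            b *= b;      /* n is even: g(a, b*b, n/2)       */
            n /= 2;
        }
    }
    return a;            /* base case: n is zero, return a  */
}
/* g(3, 2, 10) == 3072, i.e. 3 * 2^10 */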
This question has a lot of great answers... but I cannot help but chime in with an alternative take on how to define "tail recursion", or at least "proper tail recursion." Namely: should one look at it as a property of a particular expression in a program? Or should one look at it as a property of an implementation of a programming language?
For more on the latter view, there is a classic paper by Will Clinger, "Proper Tail Recursion and Space Efficiency" (PLDI 1998), that defined "proper tail recursion" as a property of a programming language implementation. The definition is constructed to allow one to ignore implementation details (such as whether the call stack is actually represented via the runtime stack or via a heap-allocated linked list of frames).
To accomplish this, it uses asymptotic analysis: not of program execution time as one usually sees, but rather of program space usage. This way, the space usage of a heap-allocated linked list vs. a runtime call stack ends up being asymptotically equivalent, so one gets to ignore that programming language implementation detail (a detail which certainly matters quite a bit in practice, but can muddy the waters when one attempts to determine whether a given implementation satisfies the requirement to be "properly tail recursive").
The paper is worth careful study for a number of reasons:
It gives an inductive definition of the tail expressions and tail calls of a program. (Such a definition, and why such calls are important, seems to be the subject of most of the other answers given here.)
Here are those definitions, just to provide a flavor of the text:
Definition 1 The tail expressions of a program written in Core Scheme are defined inductively as follows.
The body of a lambda expression is a tail expression
If (if E0 E1 E2) is a tail expression, then both E1 and E2 are tail expressions.
Nothing else is a tail expression.
Definition 2 A tail call is a tail expression that is a procedure call.
(A tail recursive call - or, as the paper says, a "self-tail call" - is the special case of a tail call where the procedure invoked is the procedure itself.)
It provides formal definitions for six different "machines" for evaluating Core Scheme, where each machine has the same observable behavior except for the asymptotic space complexity class that each is in.
For example, after giving definitions for machines with, respectively, (1) stack-based memory management, (2) garbage collection but no tail calls, and (3) garbage collection and tail calls, the paper continues onward with even more advanced storage management strategies, such as (4) "evlis tail recursion", where the environment does not need to be preserved across the evaluation of the last sub-expression argument in a tail call, (5) reducing the environment of a closure to just the free variables of that closure, and (6) the so-called "safe-for-space" semantics defined by Appel and Shao.
In order to prove that the machines actually belong to six distinct space complexity classes, the paper, for each pair of machines under comparison, provides concrete examples of programs that will expose asymptotic space blowup on one machine but not the other.
(Reading over my answer now, I'm not sure I've managed to capture the crucial points of the Clinger paper. But, alas, I cannot devote more time to developing this answer right now.)
Many people have already explained recursion here. I would like to cite a couple of thoughts on the advantages recursion brings, from the book “Concurrency in .NET, Modern patterns of concurrent and parallel programming” by Riccardo Terrell:
“Functional recursion is the natural way to iterate in FP because it
avoids mutation of state. During each iteration, a new value is passed
into the loop constructor instead of being updated (mutated). In
addition, a recursive function can be composed, making your program
more modular, as well as introducing opportunities to exploit
parallelization.”
Here also are some interesting notes from the same book about tail recursion:
Tail-call recursion is a technique that converts a regular recursive
function into an optimized version that can handle large inputs
without any risks and side effects.
NOTE The primary reason for a tail call as an optimization is to
improve data locality, memory usage, and cache usage. By doing a tail
call, the callee uses the same stack space as the caller. This reduces
memory pressure. It marginally improves the cache because the same
memory is reused for subsequent callers and can stay in the cache,
rather than evicting an older cache line to make room for a new cache
line.

How to find the height of a binary tree without using recursion and level order traversal?

I came up with preorder traversal for finding the height of a tree.
int preHeight(node *n)
{
    int max = 0, c = 0;
    while (1)
    {
        while (n)
        {
            push(n);
            c++;
            if (c > max)
                max = c;
            n = n->left;
        }
        if (isStackEmpty())
            return max;
        n = pop();
        if (n->right) // Problem point
            c--;
        n = n->right;
    }
}
I get the height correct, but I'm not sure if my method is correct. What I do is increment my counter c down to the leftmost node; then, as I move up, I decrement it in case I need to move right, and repeat the entire exercise. Is that correct?
Consider the following tree -
o
 \
  o
   \
    o
You will reach the end of the leftmost branch (just the root) with c==1, then pop the root and decrement c because it has a right child, so you descend into the right branch with c==0. Every level of that branch repeats the same pattern, so c never gets past 1 and you report a height of 1 for a tree of height 3. The conditional decrement is wrong because a pop always means you went a level up, regardless of other children - but note that simply decrementing on every pop is not enough either: a popped node is no longer on the stack while its right subtree is visited, so c collapses into the current stack size and undercounts the depth of right branches (the same tree would still report 1).
The reliable fix is to keep the value of c per node instead of reconstructing it with decrements - you could push it into a separate stack alongside the node pointers (or push node/depth pairs), or you could convert the code to be recursive, with c (and the current node) as local variables (basically the same thing, except that the compiler maintains the stack for you).
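Here is a C sketch of the depth-stack fix (the node type is assumed to match the question's; the fixed-size stack is only for illustration): each node is pushed together with its own depth, so the counter never has to be reconstructed.
typedef struct { node *n; int depth; } entry;

int treeHeight(node *root)
{
    entry stack[1024];   /* assumed deep enough for the example */
    int top = 0, max = 0;
    if (root)
        stack[top++] = (entry){ root, 1 };
    while (top > 0)
    {
        entry e = stack[--top];          /* pop a node with its depth */
        if (e.depth > max)
            max = e.depth;
        if (e.n->right)                  /* children sit one level deeper */
            stack[top++] = (entry){ e.n->right, e.depth + 1 };
        if (e.n->left)
            stack[top++] = (entry){ e.n->left, e.depth + 1 };
    }
    return max;
}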

How can I get a boolean out of a reiteration

I made this binary search using reiteration; however, when I get the answer (a boolean), I seem to stumble on my way out of the reiteration and can't get the correct answer out of the function.
Can anybody help? Please comment on the code.
// binary search
bool
search(int value, int array[], int n)
{
    // the array has more than 1 item
    if (n > 1)
    {
        int m = (n / 2);
        // compare the middle point to the value
        if (value == array[m])
            return true;
        // if the value given is lower than the middle point of the array
        if (value < array[m])
        {
            // creating a new array with the lower half of the original one
            int *new_array = malloc(m * sizeof(int));
            memcpy(new_array, array, m * sizeof(int));
            // recalling the function
            search(value, new_array, m);
        }
        // if the value given is greater than the middle point of the array
        else
        {
            // creating a new array with the upper half of the original one
            int *new_array = malloc(m * sizeof(int));
            memcpy(new_array, array + (m + 1), m * sizeof(int));
            // recalling the function
            search(value, new_array, m);
        }
    }
    else if (n == 1)
    {
        // comparing the one-item array with the value
        if (array[0] == value)
            return true;
        else
            return false;
    }
    if (true)
        return true;
    else
        return false;
}
You need to return the value of recursive searches.
return search (value, new_array, m);
Otherwise, when you call search, you are just throwing away the answer.
You should return search(...); rather than only invoking the search() function - otherwise the return value is not bubbled up.
In addition, note that this algorithm is O(n) (linear in the size of the array), leaks memory, and is very inefficient, since you copy half of the array in each step. It probably makes the algorithm slower than a naive linear search for an element.
A good binary search does not need to copy half of the array - it just "looks" at half of it. That can be achieved by passing array + m as the array (for the upper half); for the lower half, decreasing n is enough.
As mentioned by amint, copying the array completely defeats the purpose of doing a binary search. Second, I believe you mean recursion, not reiteration.
Things to think about: Instead of copying the array, think about how you could achieve the same result by passing in a set of indexes to the array (like beginning and end).
As to your actual question, how to return the boolean value through the recursion. The thing about returning values out of a recursive function is that each iteration has to pass along the value. You can think of it like a chain of responsibility delegation. The first call is like the head of the company. He doesn't do all of the work, so he delegates it to his assistant but he does one piece of the work first. Then the assistant has an assistant etc. Turtles all the way down ;)
In order to get a value back though, each person in the chain has to give it back to the person before them. Going back to programming, this means that every time you recursively call search, you need to return that value.
Lastly, once you have those things in order, you need to get your base case better defined. I assume that's what you're trying to do with
if (true)
return true;
else
return false;
However, as pointed out by H2CO3, this doesn't make much sense. In fact, once the recursive calls are returned properly, your previous if statement, if (n == 1) ..., makes sure that the code after it is never executed.
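Putting those suggestions together - return the recursive result, pass a smaller view of the array instead of copying it, and give the empty range an explicit base case - a corrected version might look like this (one sketch among several possible):
#include <stdbool.h>

bool
search(int value, int array[], int n)
{
    // base case: an empty range cannot contain the value
    if (n <= 0)
        return false;
    int m = n / 2;
    if (value == array[m])
        return true;
    if (value < array[m])
        // lower half: the first m items
        return search(value, array, m);
    else
        // upper half: everything after the middle point
        return search(value, array + m + 1, n - m - 1);
}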

find element in the middle of a stack

I was asked this question in an interview. The problem: I am given a stack and have to find the element at the middle position of the stack. The "top" index is not available (so you can't just pop() top/2 times and return the answer). Assume that you have reached the bottom of the stack when pop() returns -1. Don't use any additional data structure.
Eg:
stack    index
-----
 2       nth element (top)
 3
 99
 .
 1       n/2 th element
 .
-1       bottom of the stack (0th index)
Answer: 1 (I didn't mean the median - find the element at the middle position.)
Is recursion the only way?
Thanks,
psy
Walk through the stack, calculate the depth and on the way back return the appropriate element.
int middle(stack* s, int n, int* depth) {
    if (stack_empty(s)) {
        *depth = n;               // bottom reached: record the total depth
        return 0;                 // return something, doesn't matter..
    }
    int val = stack_pop(s);
    int res = middle(s, n + 1, depth);  // recurse down to the bottom
    stack_push(s, val);           // restore the stack while unwinding
    if (n == *depth / 2)          // this frame holds the middle element
        return val;
    return res;
}
int depth;
int mid = middle(&stack, 0, &depth);
Note: yes, recursion is the only way. Not knowing the depth of the stack means you have to store those values somewhere.
Recursion is never the only way ;)
However, recursion provides you with an implicit additional stack (i.e. function parameters and local variables), and it does appear that you need some additional storage for the traversed elements; so, given that constraint, recursion may indeed be the only way.
"... Don't use any additional data structure. ..."
Then the task is unsolvable, because you need somewhere to store the popped-out data. Recursion needs a stack too, which is also a data structure; it doesn't make sense to prohibit any data structure yet allow recursion.
Here is one solution: Take two pointers, advance one of them two steps at a time (fast), the other one only one step at a time (slow). If the fast one reaches the bottom return the slow pointer which points to the middle index. No recursion required.
int *slow = stack;
int *fast = stack;
while (1) {
    if (STACK_BOTTOM(fast)) return slow;
    fast--;
    if (STACK_BOTTOM(fast)) return slow;
    slow--;
    fast--;
}
Recursion seems to be the only way. If you try to use the fast and slow pointer concept while popping, you will need to store the popped values somewhere, and that violates the requirement of no additional data structure.
This question is tagged with c, so for the C programming language I agree that recursion is the only way. However, if first-class anonymous functions are supported, you can solve it without recursion. Some pseudocode (using Haskell's lambda syntax):
n = 0
f = \x -> 0 # constant function that always returns 0
while (not stack_empty) do
x = pop
n = n+1
f = \a -> if (a == n) then x else f(a)
middle = f(n/2) # middle of stack
# stack is empty, rebuild it up to the middle if required
for x in (n .. n/2) do push(f(x))
Please note: during the while loop, there's no (recursive) call of f. f(a) in the else branch is just used to construct a new(!) function, which is called f again.
Assuming the stack has 3 elements 10, 20, 30 (from bottom to top), this basically constructs the lambda
(\a -> if a==1
then 30
else (\b -> if b==2
then 20
else (\c -> if c==3
then 10
else (\d -> 0)(c)
)
(b)
)
(a)
)
or, written a little more readably:
f(x) = if x==1 then 30 else (if x==2 then 20 else (if x==3 then 10 else 0))

Resources