Recursion (computer science)
'''[[Recursion]] in computer science''' is a way of thinking about and solving problems. It is, in fact, one of the central ideas of computer science. <ref>{{cite book
| last = Epp
| first = Susanna
| title = Discrete Mathematics with Applications
| year=1995
| edition=2nd
| page=427
}}</ref> Solving a problem using recursion means the solution depends on solutions to smaller instances of the same problem. <ref>{{cite book
| last = Graham
| first = Ronald
| coauthors = Donald Knuth, Oren Patashnik
| title = Concrete Mathematics
| year=1990
| pages=Chapter 1: Recurrent Problems
| url=http://www-cs-faculty.stanford.edu/~knuth/gkp.html
}}</ref>
<blockquote>"The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions." <ref>{{cite book
| last = Wirth
| first = Niklaus
| title = Algorithms + Data Structures = Programs
| pages = 126
| publisher=Prentice-Hall
| year =1976
}}</ref>
</blockquote>
Most high-level computer programming languages support recursion by allowing a function to call itself within the program text. [[Imperative languages|Imperative languages]] define looping constructs like “while” and “for” loops that are used to perform repetitive actions. Some [[Functional languages|functional programming languages]] do not define any looping constructs but rely solely on recursion to repeatedly call code. [[Computability theory (computer science)|Computability theory]] has shown that these recursive-only languages are mathematically equivalent to the imperative languages, meaning they can solve the same kinds of problems even without the typical control structures like “while” and “for”.<br />[[Image:recursiveTree.JPG|thumb|Tree created using the [[Logo (programming language)|Logo programming language]] and relying heavily on recursion.]]
==Recursive algorithms==
A common method of simplification is to divide a problem into sub-problems of the same type. As a [[computer programming]] technique, this is called [[divide and conquer algorithm|divide and conquer]], and it is key to the design of many important algorithms, as well as being a fundamental part of [[dynamic programming]].
Virtually all [[programming language]]s in use today allow the direct specification of recursive functions and procedures. When such a function is called, the computer (for most languages on most stack-based architectures) or the language implementation keeps track of the various instances of the function (on many architectures, by using a [[call stack]], although other methods may be used). Conversely, every recursive function can be transformed into an iterative function by using a [[stack (data structure)|stack]].
Any function that can be evaluated by a computer can be expressed in terms of recursive functions without the use of [[iteration]],{{Fact|date=May 2008}} in [[continuation-passing style]]; and conversely any recursive function can be expressed in terms of iteration.{{Fact|date=May 2008}}
To take a very literal example: if an unknown word is encountered while reading a book, the reader can make a note of the current page number, put the note on a stack (which is empty so far), and look the word up. While reading on that subject, the reader may find yet another unknown word; its page number is also written down and placed on top of the stack. At some point a passage is read that requires no further explanation. The reader then takes the topmost note from the stack, returns to that page number, and continues reading from there. This is repeated, removing one note at a time, until the reader finally returns to the original book. This is a recursive approach.
Some languages designed for [[logic programming]] and [[functional programming]] provide recursion as the only means of repetition directly available to the programmer. Such languages generally make [[tail recursion]] as efficient as iteration, letting programmers express other repetition structures (such as [[Scheme (programming language)|Scheme]]'s <code>map</code> and <code>for</code>) in terms of recursion.
Recursion is deeply embedded in the [[theory of computation]], with the theoretical equivalence of [[μ-recursive function]]s and [[Turing machine]]s at the foundation of ideas about the universality of the modern computer.
==Recursive programming==
Creating a recursive procedure essentially requires defining a "base case" and then defining rules to break more complex cases down into the base case. Key to a recursive procedure is that with each recursive call the problem must shrink, so that the base case is eventually reached.
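A minimal sketch of this pattern (not one of the article's examples; the name sumTo is illustrative) is a function that sums the integers from 1 to n, with n = 0 as the base case:
<source lang="c">
// A minimal sketch: sum of the integers 1..n.
// Base case: n == 0. Each call reduces the problem from n to n - 1.
int sumTo(int n)
{
    if (n == 0)
        return 0;                 // base case
    else
        return n + sumTo(n - 1);  // reduce toward the base case
}
</source>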
Some authors classify recursion as either "generative" or "structural". The distinction is made based on where the procedure gets the data that it works on. If the data comes from a data structure like a list, then the procedure is "structurally recursive"; otherwise, it is "generatively recursive".<ref>
{{cite book
| last = Felleisen
| first = Matthias
| authorlink =
| coauthors = Robert Bruce Findler, Matthew Flatt, Shriram Krishnamurthi
| title = How to Design Programs: An Introduction to Computing and Programming
| publisher = MIT Press
| date = 2001
| location = Cambridge, MASS
| pages = Part V "Generative Recursion"
| url = http://www.htdp.org/2003-09-26/Book/curriculum-Z-H-31.html
| doi =
| id =
| isbn = }}
</ref>
<blockquote>
Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HTDP (How To Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: [[greatest common divisor|gcd]], [[quicksort]], [[binary search]], [[mergesort]], [[Newton's method]], [[fractal]]s, and [[adaptive integration]].<ref>
{{Citation
| last = Felleisen
| first = Matthias
| contribution = Developing Interactive Web Programs
| year = 2002
| title = Advanced Functional Programming: 4th International School
| editor-last = Jeuring
| editor-first = Johan
| volume =
| pages = 108
| place = Oxford, UK
| publisher = Springer
| id = }}.</ref>
</blockquote>
=== Examples of recursively defined procedures (generative recursion) ===
==== Factorial ====
A classic example of a recursive procedure is the function used to calculate the [[factorial]] of an [[integer]].
Function definition:
<math> fact(n) =
\begin{cases}
1 & \mbox{if } n = 0 \\
n \times fact(n-1) & \mbox{if } n > 0 \\
\end{cases}
</math>
A [[recurrence relation]] is an equation that relates later terms in the sequence to earlier terms<ref>{{cite book
| last = Epp
| first = Susanna
| title = Discrete Mathematics with Applications
| page = 424
| publisher=Brooks-Cole Publishing Company
| year =1995
}}</ref>.<br />
Recurrence relation for factorial:<br />
b<sub>n</sub> = n * b<sub>n-1</sub> <br />
b<sub>0</sub> = 1
{| class="wikitable"
|-
! Computing the recurrence relation for n = 4:
|-
|
b<sub>4</sub> = 4 * b<sub>3</sub><br />
= 4 * 3 * b<sub>2</sub><br />
= 4 * 3 * 2 * b<sub>1</sub><br />
= 4 * 3 * 2 * 1 * b<sub>0</sub><br />
= 4 * 3 * 2 * 1 * 1<br />
= 24
|}
<br />
Example Implementations:
{| class="wikitable"
|-
! [[Scheme (programming language)]]:
! [[C (programming language)]]:
! [[Pascal (programming language)]]:
|-
|<source lang="scheme">
;; Input: Integer n such that n >= 1
(define (fact n)
(if (= n 1)
1
(* n (fact (- n 1)))))
</source>
|<source lang="c">
//INPUT: n is an Integer such that n >= 1
int fact(int n)
{
if (n == 1)
return 1;
else
return n * fact(n - 1);
}
</source>
|<source lang="pascal">
{INPUT: x is an Integer such that x >= 1}
function Factorial(x: integer): integer;
begin
if x = 1 then
Factorial := 1
else
Factorial := x * Factorial(x-1);
end;
</source>
|}
These factorial procedures can also be described in [[C (programming language)]] and [[Pascal (programming language)]] without using recursion; those versions make use of the typical looping constructs found in imperative programming languages. [[Scheme (programming language)]], however, is a functional programming language and does not define any looping constructs; it relies solely upon recursion to perform all looping. Because Scheme implementations optimize tail calls, a recursive procedure can be defined that carries out the factorial computation as an iterative process, meaning that it uses constant space but linear time.
{| class="wikitable"
|-
! [[Scheme (programming language)]]:
! [[C (programming language)]]:
! [[Pascal (programming language)]]:
|-
|<source lang="Scheme">
;; Input: Integer n such that n >= 1
(define (fact n)
(fact-iter 1 n))
(define (fact-iter prd n)
(if (= n 1)
prd
(fact-iter (* prd n) (- n 1))))
</source>
|<source lang="C">
//INPUT: n is an Integer such that n > 0
int fact(int n)
{
int prd = 1;
while(n >= 1)
{
prd *= n;
n--;
}
return prd;
}
</source>
|<source lang="pascal">
{INPUT: x is an Integer such that x >= 1}
function Factorial(x: integer): integer;
var
i, tmp: integer;
begin
tmp := 1;
for i := 2 to x do
tmp := tmp * i;
Factorial := tmp
end;
</source>
|}
==== Fibonacci ====
Another well-known recursive sequence is the [[Fibonacci number]]s. The first few elements of this sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
Function definition:
<math> fib(n) =
\begin{cases}
0 & \mbox{if } n = 0 \\
1 & \mbox{if } n = 1 \\
fib(n-1) + fib(n-2) & \mbox{if } n \ge 2 \\
\end{cases}
</math>
[[Recurrence relation]] for Fibonacci:<br />
b<sub>n</sub> = b<sub>n-1</sub> + b<sub>n-2</sub><br />
b<sub>1</sub> = 1, b<sub>0</sub> = 0
{| class="wikitable"
|-
! Computing the recurrence relation for n = 4:
|-
|
b<sub>4</sub> = b<sub>3</sub> + b<sub>2</sub><br />
= b<sub>2</sub> + b<sub>1</sub> + b<sub>1</sub> + b<sub>0</sub><br />
= b<sub>1</sub> + b<sub>0</sub> + 1 + 1 + 0<br />
= 1 + 0 + 1 + 1 + 0<br />
= 3
|}
Example Implementations:
{| class="wikitable"
|-
! [[Scheme (programming language)]]:
! [[C (programming language)]]:
! [[Pascal (programming language)]]:
|-
|<source lang="scheme">
;; n is an integer such that n >= 0
(define (fib n)
(cond ((= n 0) 0)
((= n 1) 1)
(else
(+ (fib (- n 1))
(fib (- n 2))))))
</source>
|<source lang="c">
//INPUT: n is an integer such that n >= 0
int fib(int n)
{
if (n == 0)
return 0;
else if (n == 1)
return 1;
else
return fib(n-1) + fib(n-2);
}
</source>
|<source lang="pascal">
{INPUT: x is an Integer such that x >= 0}
function Fib(x: integer): integer;
begin
if x = 0 then
Fib := 0
else if x = 1 then
Fib := 1
else
Fib := Fib(x-1) + Fib(x-2)
end;
</source>
|}
These implementations are notably inefficient because each call makes two further recursive calls. This version of Fibonacci is a typical example of "tree recursion": its running time grows exponentially while its space requirements grow linearly.<ref>{{cite book
| last = Abelson
| first = Harold
| coauthors = Gerald Jay Sussman
| title = Structure and Interpretation of Computer Programs
| year=1996
| pages=Section 1.2.2
| url=http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.2
}}</ref>
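For comparison, an iterative version can compute the same values in linear time and constant space by keeping only the two most recent elements of the sequence. The following is a minimal sketch in C (not one of the article's original examples; the name fib_iter is used to avoid clashing with the recursive fib above):
<source lang="c">
//INPUT: n is an integer such that n >= 0
int fib_iter(int n)
{
    int prev = 0, curr = 1;  // fib(0) and fib(1)
    int i, next;
    for (i = 0; i < n; i++)
    {
        next = prev + curr;  // advance the pair one step along the sequence
        prev = curr;
        curr = next;
    }
    return prev;             // after n steps, prev holds fib(n)
}
</source>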
==== Greatest common divisor ====
Another famous recursive function is the [[Euclidean algorithm]], used to compute the [[greatest common divisor]] of two integers.
Function definition:
<math> gcd(x,y) =
\begin{cases}
x & \mbox{if } y = 0 \\
gcd(y, remainder(x,y)) & \mbox{if } x \ge y \mbox{ and } y > 0 \\
\end{cases}
</math><br />
Recurrence relation for greatest common divisor, where "x % y" expresses the remainder of x / y:<br />
gcd(x, y) = gcd(y, x % y)<br />
gcd(x, 0) = x
{| class="wikitable"
|-
! Computing the recurrence relation for x = 27 and y = 9:
|-
|
gcd(27, 9) = gcd(9, 27 % 9)<br />
= gcd(9, 0)<br />
= 9
|-
! Computing the recurrence relation for x = 259 and y = 111:
|-
|
gcd(259, 111) = gcd(111, 259 % 111)<br />
= gcd(111, 37)<br />
= gcd(37, 111 % 37)<br />
= gcd(37, 0)<br />
= 37
|}
<br />
Example Implementations:
{| class="wikitable"
|-
! [[Scheme (programming language)]]:
! [[C (programming language)]]:
|-
|<source lang="scheme">
;; Input: Integers x, y such that x >= y and y > 0
(define (gcd x y)
(if (= y 0)
x
(gcd y (remainder x y))))
</source>
|<source lang="c">
int gcd(int x, int y)
{
if (y == 0)
return x;
else
return gcd(y, x % y);
}
</source>
|}
Below is the same algorithm using an iterative approach. Note that the Scheme implementation above already generates an iterative process: it does not accumulate a chain of deferred operations; rather, its state is maintained entirely in the variables x and y. Its "number of steps grows as the logarithm of the numbers involved."<ref>{{cite book
| last = Abelson
| first = Harold
| coauthors = Gerald Jay Sussman
| title = Structure and Interpretation of Computer Programs
| year=1996
| pages=Section 1.2.5
| url=http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.5
}}</ref>
{| class="wikitable"
|-
! [[C (programming language)]]:
|-
|<source lang="c">
int gcd(int x, int y)
{
int r;
while (y != 0) {
r = x % y;
x = y;
y = r;
}
return x;
}
</source>
|}
The iterative algorithm requires a temporary variable, and even with knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
==== Towers of Hanoi ====
{{main|Towers of Hanoi}}
For a full discussion of this problem's description, history and solution see the main article or one of the many references.<ref>{{cite book
| last = Graham
| first = Ronald
| coauthors = Donald Knuth, Oren Patashnik
| title = Concrete Mathematics
| year=1990
| pages=Chapter 1, Section 1.1: The Tower of Hanoi
| url=http://www-cs-faculty.stanford.edu/~knuth/gkp.html
}}</ref> <ref>{{cite book
| last = Epp
| first = Susanna
| title = Discrete Mathematics with Applications
| year=1995
| edition=2nd
| pages=427-430: The Tower of Hanoi
}}</ref> Simply put, the problem is this: given three pegs, one of which holds a stack of N disks of increasing size, determine the minimum (optimal) number of steps it takes to move all the disks from their initial position to another peg without ever placing a larger disk on top of a smaller one.
Function definition:
<math> hanoi(n) =
\begin{cases}
1 & \mbox{if } n = 1 \\
2 * hanoi(n-1) + 1 & \mbox{if } n > 1\\
\end{cases}
</math><br />
Recurrence relation for hanoi:<br />
h<sub>n</sub> = 2*h<sub>n-1</sub>+1<br />
h<sub>1</sub> = 1
{| class="wikitable"
|-
! Computing the recurrence relation for n = 4:
|-
|
hanoi(4) = 2*hanoi(3) + 1<br />
= 2*(2*hanoi(2) + 1) + 1<br />
= 2*(2*(2*hanoi(1) + 1) + 1) + 1<br />
= 2*(2*(2*1 + 1) + 1) + 1<br />
= 2*(2*(3) + 1) + 1<br />
= 2*(7) + 1<br />
= 15
|}
<br />
Example Implementations:
{| class="wikitable"
|-
! [[Scheme (programming language)]]:
! [[C (programming language)]]:
|-
|<source lang="scheme">
;; Input: Integer n such that n >= 1
(define (hanoi n)
(if (= n 1)
1
(+ (* 2 (hanoi (- n 1)))
1)))
</source>
|<source lang="c">
/* Input: Integer n such that n >= 1 */
int hanoi(int n)
{
if (n == 1)
return 1;
else
return 2*hanoi(n-1) + 1;
}
</source>
|}
<br />
Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula. <ref>{{cite book
| last = Epp
| first = Susanna
| title = Discrete Mathematics with Applications
| year=1995
| edition=2nd
| pages=447-448: An Explicit Formula for the Tower of Hanoi Sequence
}}</ref>
{| class="wikitable"
|-
! An explicit formula for Towers of Hanoi:
|-
|
h<sub>1</sub> = 1 = 2<sup>1</sup> - 1<br />
h<sub>2</sub> = 3 = 2<sup>2</sup> - 1<br />
h<sub>3</sub> = 7 = 2<sup>3</sup> - 1<br />
h<sub>4</sub> = 15 = 2<sup>4</sup> - 1<br />
h<sub>5</sub> = 31 = 2<sup>5</sup> - 1<br />
h<sub>6</sub> = 63 = 2<sup>6</sup> - 1<br />
h<sub>7</sub> = 127 = 2<sup>7</sup> - 1<br />
In general:<br />
h<sub>n</sub> = 2<sup>n</sup> - 1, for all n >= 1
|}
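The agreement between the recurrence and the closed form can be checked directly. The following is a minimal sketch (not part of the original examples) that prints the first few values computed both ways:
<source lang="c">
#include <stdio.h>

/* Recursive definition of the Tower of Hanoi sequence, as above. */
int hanoi(int n)
{
    if (n == 1)
        return 1;
    else
        return 2*hanoi(n-1) + 1;
}

int main(void)
{
    int n;
    /* Compare the recurrence with the explicit formula 2^n - 1. */
    for (n = 1; n <= 7; n++)
        printf("hanoi(%d) = %d, 2^%d - 1 = %d\n", n, hanoi(n), n, (1 << n) - 1);
    return 0;
}
</source>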
==== Binary search ====
The [[binary search]] algorithm is a method of searching an ordered array for a single element by cutting the array in half with each pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.
Recursion is used in this algorithm because with each pass a smaller sub-array is considered, formed by cutting the old one in half. The binary search procedure is then called recursively on this new (and smaller) range. Typically the range is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.
Example Implementation of Binary Search:
<source lang="C">
/*
Call binary_search with proper initial conditions.
INPUT:
data is a array of integers SORTED in ASCENDING order,
toFind is the integer to search for,
count is the total number of elements in the array
OUTPUT:
result of binary_search
*/
int search(int *data, int toFind, int count)
{
// Start = 0 (beginning index)
// End = count - 1 (top index)
return binary_search(data, toFind, 0, count-1);
}
/*
Binary Search Algorithm.
INTPUT:
data is a array of integers SORTED in ASCENDING order,
toFind is the integer to search for,
start is the minimum array index,
end is the maximum array index
OUTPUT:
position of the integer toFind within array data,
-1 if not found
*/
int binary_search(int *data, int toFind, int start, int end)
{
//Get the midpoint.
int mid = start + (end - start)/2; //Integer division
//Stop condition.
if (start > end)
return -1;
else if (data[mid] == toFind) //Found?
return mid;
else if (data[mid] > toFind) //Data is greater than toFind, search lower half
return binary_search(data, toFind, start, mid-1);
else //Data is less than toFind, search upper half
return binary_search(data, toFind, mid+1, end);
}
</source>
=== Recursive data structures (structural recursion) ===
An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to an arbitrary size in response to runtime requirements; in contrast, a static array's size must be set at compile time.
<blockquote>
"Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms." <ref>{{cite book
| last = Wirth
| first = Niklaus
| title = Algorithms + Data Structures = Programs
| page = 127
| publisher=Prentice-Hall
| year =1976
}}</ref>
</blockquote>
The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively.
<blockquote>
As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value. <ref>
{{Citation
| last = Felleisen
| first = Matthias
| contribution = Developing Interactive Web Programs
| year = 2002
| title = Advanced Functional Programming: 4th International School
| editor-last = Jeuring
| editor-first = Johan
| volume =
| pages = 108
| place = Oxford, UK
| publisher = Springer
| id = }}.</ref>
</blockquote>
==== [[Linked list]]s ====
Below is a simple definition of a linked list node. Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to a struct node.
<source lang="C">
struct node
{
int n; // some data
struct node *next; // pointer to another struct node
};
// LIST is simply a synonym for struct node * (aka syntactic sugar).
typedef struct node *LIST;
</source>
Procedures that operate on the LIST data structure can be implemented naturally as recursive procedures, because the data structure they operate on (LIST) is defined recursively. The printList procedure defined below walks down the list until the list is empty (NULL), printing the data element (an integer) of each node along the way. In the C implementation, the list remains unchanged by the printList procedure.
<source lang="C">
void printList(LIST lst)
{
if (!isEmpty(lst)) // base case
{
printf("%d ", lst->n); // print integer followed by a space
printList(lst->next); // recursive call
}
}
</source>
==== [[Binary tree]]s ====
Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself (recursively). There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).
<source lang="C">
struct node
{
int n; // some data
struct node *left; // pointer to the left subtree
struct node *right; // point to the right subtree
};
// TREE is simply a synonym for struct node * (aka syntactic sugar).
typedef struct node *TREE;
</source>
Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations typically require two recursive calls; for a similar pattern, see the Fibonacci function and explanation above.
<source lang="C">
void printTree(TREE t) {
if (!isEmpty(t)) { // base case
printTree(t->left); // go left
printf("%d ", t->n); // print the integer followed by a space
printTree(t->right); // go right
}
}
</source>
The above example illustrates an [[Tree traversal| in-order traversal]] of the binary tree. A [[Binary search tree]] is a special case of the binary tree where the data elements of each node are in order.
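As a further illustration of structural recursion, searching a binary search tree follows the same pattern: the ordering of the data determines which of the two subtrees to recur on, so only one recursive call is made per step. This is a sketch (not one of the article's original examples) using the TREE type defined above; the name searchTree is hypothetical:
<source lang="c">
// Returns a pointer to the node containing the value v, or NULL if v is absent.
// Assumes the tree satisfies the binary-search-tree ordering property.
struct node *searchTree(TREE t, int v)
{
    if (t == NULL)          // base case: empty (sub)tree
        return NULL;
    else if (v == t->n)     // found
        return t;
    else if (v < t->n)      // smaller values are in the left subtree
        return searchTree(t->left, v);
    else                    // larger values are in the right subtree
        return searchTree(t->right, v);
}
</source>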
===Recursion versus iteration===
In the "factorial" example the iterative implementation is likely to be slightly faster in practice than the recursive one. This is almost definite for the Euclidean Algorithm implementation. This result is typical, because iterative functions do not pay the "function-call overhead" as many times as recursive functions, and that overhead is relatively high in many languages. (Note that an even faster implementation for the factorial function on small integers is to use a [[lookup table]].)
There are other types of problems whose solutions are inherently recursive, because they need to keep track of prior state. One example is [[tree traversal]]; others include the [[Ackermann function]] and [[divide-and-conquer algorithm]]s such as [[Quicksort]]. All of these algorithms can be implemented iteratively with the help of a [[stack (data structure)|stack]], but the need for the stack arguably nullifies the advantages of the iterative solution.
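For instance (a sketch, not one of the article's examples), the in-order traversal shown earlier can be performed without recursion by managing the stack explicitly; the fixed MAX_DEPTH bound is an assumption made to keep the sketch short:
<source lang="c">
#include <stdio.h>

#define MAX_DEPTH 100   // assumed bound on tree height for this sketch

// In-order traversal of the TREE type defined above, using an explicit
// stack of pending nodes instead of recursion.
void printTreeIterative(TREE t)
{
    struct node *stack[MAX_DEPTH];
    int top = 0;                 // number of nodes currently on the stack

    while (t != NULL || top > 0)
    {
        while (t != NULL)        // descend left, remembering each node
        {
            stack[top++] = t;
            t = t->left;
        }
        t = stack[--top];        // visit the most recently remembered node
        printf("%d ", t->n);
        t = t->right;            // then traverse its right subtree
    }
}
</source>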
Another possible reason for choosing an iterative rather than a recursive algorithm is that in today's programming languages, the stack space available to a thread is often much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. However, see the caveat below regarding the special case of [[tail recursion]].
==Tail-recursive functions==
{{main|Tail recursion}}
Tail-recursive functions are functions that end in a recursive call which builds up no deferred operations. For example, the gcd function (shown again below) is tail-recursive; the factorial function (also shown again below), by contrast, is "augmenting recursive" because it builds up deferred operations that must be performed even after the final recursive call completes. With a compiler that automatically optimizes tail-recursive calls, a tail-recursive function such as gcd will execute using constant space. The process it generates is therefore iterative, equivalent to using imperative-language control structures like "for" and "while" loops.
{| class="wikitable"
|-
! [[Tail recursion]]:
! Augmenting recursion:
|-
|<source lang="c">
//INPUT: Integers x, y such that x >= y and y > 0
int gcd(int x, int y)
{
if (y == 0)
return x;
else
return gcd(y, x % y);
}
</source>
|<source lang="c">
//INPUT: n is an Integer such that n >= 1
int fact(int n)
{
if (n == 1)
return 1;
else
return n * fact(n - 1);
}
</source>
|}
The significance of tail recursion is that when making a tail-recursive call, the caller's return position need not be saved on the [[call stack]]; when the recursive call returns, it branches directly to the previously saved return position. Therefore, on compilers that support tail-recursion optimization, tail recursion saves both space and time.
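For example, the factorial function can be rewritten with an accumulator argument so that the recursive call is in tail position; a compiler that performs this optimization can then run it in constant stack space, much like the Scheme fact-iter procedure shown earlier. This is a sketch rather than one of the article's original examples, and the helper name fact_acc is hypothetical:
<source lang="c">
//INPUT: n is an Integer such that n >= 1
int fact_acc(int n, int acc)
{
    if (n == 1)
        return acc;
    else
        return fact_acc(n - 1, acc * n);  // tail call: nothing remains to be done afterwards
}

int fact_tail(int n)
{
    return fact_acc(n, 1);   // the accumulator starts at 1
}
</source>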
==Order of function calling==
The placement of the recursive call relative to other statements changes how a function executes, as this example in the [[C (programming language)|C]] language shows:
===Function 1===
<source lang="c">
void recursiveFunction(int num) {
if (num < 5) {
printf("%d\n", num);
recursiveFunction(num + 1);
}
}
</source>
[[Image:RecursiveFunction1 execution.png]]
===Function 2 with swapped lines===
<source lang="c">
void recursiveFunction(int num) {
if (num < 5) {
recursiveFunction(num + 1);
printf("%d\n", num);
}
}
</source>
[[Image:RecursiveFunction2 execution.png]]
==Direct and indirect recursion==
Direct recursion occurs when a function calls itself. Indirect recursion occurs when (for example) function A calls function B, function B calls function C, and function C in turn calls function A. Long chains and branches are possible; see [[Recursive descent parser]] for an example, and the sketch below for a minimal illustration.
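A minimal sketch of indirect (mutual) recursion, not taken from the article: two hypothetical functions, isEven and isOdd, that call each other to classify a non-negative integer:
<source lang="c">
int isOdd(int n);            // forward declaration so isEven can call it

//INPUT: n is an integer such that n >= 0
int isEven(int n)
{
    if (n == 0)
        return 1;            // 0 is even
    else
        return isOdd(n - 1); // n is even exactly when n-1 is odd
}

int isOdd(int n)
{
    if (n == 0)
        return 0;            // 0 is not odd
    else
        return isEven(n - 1);
}
</source>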
==See also==
*[[Recursion]]
*[[Mutual recursion]]
*[[Anonymous recursion]]
*[[μ-recursive function]]
*[[Primitive recursive function]]
*[[Functional programming]]
*[[Kleene-Rosser paradox]]
*[[McCarthy 91 function]]
*[[Ackermann function]]
*[[Sierpiński curve]]
==Notes and References==
{{reflist}}
==External links==
* [http://mitpress.mit.edu/sicp/full-text/book/book.html Harold Abelson and Gerald Sussman: "Structure and Interpretation Of Computer Programs"]
* [http://www-128.ibm.com/developerworks/linux/library/l-recurs.html IBM DeveloperWorks: "Mastering Recursive Programming"]
* [http://www.cs.cmu.edu/~dst/LispBook/ David S. Touretzky: "Common Lisp: A Gentle Introduction to Symbolic Computation"]
* [http://www.htdp.org/2003-09-26/Book/ Matthias Felleisen: "How To Design Programs: An Introduction to Computing and Programming"]
* [http://www.go4expert.com/forums/showthread.php?t=10396 Details About Recursion and its Type]
[[Category:Theoretical computer science]]
[[Category:Recursion theory]]
[[Category:Articles with example Scheme code]]
[[Category:Articles with example C code]]
[[Category:Articles with example Pascal code]]
[[Category:Control flow]]
[[Category:Programming idioms]]
[[de:Rekursive Programmierung]]
[[fr:Algorithme récursif]]
[[pt:Recursividade (ciência da computação)]]
[[zh:遞歸 (計算機科學)]]