Primitive Recursion and the Disappearing Guests

Bevin Maultsby
NC State University

Hilbert’s Hotel is a well-known thought experiment which challenges our understanding of countable infinity. The classic story focuses on how a hotel with no vacancies can accommodate new guests. I want to turn the problem around and ask: how might we gradually empty out such a hotel—without asking anyone to leave? As we consider this question, we will introduce ideas from computability theory, a field concerned with evaluating functions using algorithmic procedures.

Imagine Hilbert's Hotel has a countably infinite number of rooms numbered \( \{1, 2, 3, \dots\} \), and all are occupied. And yet, we welcome additional guests. The usual move is to apply a function to each guest’s room number to send them to a new one. I like to imagine that the higher the room number, the better the upgrade—otherwise, there might be complaints!

For example, if one new guest arrives, we shift every guest from Room \( n \) to Room \( n+1, \) freeing Room 1. We could iterate this process every time a finite collection of new guests shows up. If a countably infinite number of guests check in, we double the room number for each current guest to free up the odd-numbered rooms.

If an infinite fleet of buses \( \{B_1, B_2, B_3, \ldots \} \) arrives, each carrying infinitely many passengers \( \{ P_{1}, P_{2}, P_{3}, \dots\}, \) we need a new approach. First, the current guests from Rooms \( \{1,2,3,\ldots\}\) move to Rooms \( \{5^1,5^2,5^3,\ldots\}, \) freeing all rooms which are not powers of 5. Then each passenger \( P_{j} \) from bus \( B_{i} \) checks in to Room \( 2^{i} \cdot 3^{j}. \) Everyone gets a room.
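As a quick sanity check, here is a sketch (the function names are my own) confirming that the two groups land in disjoint rooms:

```python
def room_for_current_guest(n):
    # The guest in Room n moves to Room 5^n.
    return 5 ** n

def room_for_passenger(i, j):
    # Passenger P_j from bus B_i checks in to Room 2^i * 3^j.
    return 2 ** i * 3 ** j

# Unique factorization keeps everyone apart: a power of 5 is odd, while
# 2^i * 3^j (with i >= 1) is even, and the exponents (i, j) pin down
# each passenger uniquely.
current = {room_for_current_guest(n) for n in range(1, 50)}
passengers = {room_for_passenger(i, j) for i in range(1, 50) for j in range(1, 50)}
print(len(current & passengers))  # 0: no clashes among these guests
```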

Black-and-white photo of the German mathematician Hilbert wearing a light-colored hat with a dark band.
David Hilbert in 1912.

Opening up a new hotel

A new Hilbert's Hotel has just been built with a countably infinite number of empty rooms. After the ribbon-cutting, the first day is disappointing: only one guest arrives. On Day 2, another guest arrives. On Day \(N,\) we have \(N\) total visitors in the hotel.

How would you place these guests? Your first inclination might be to assign rooms chronologically by placing new arrivals in the next available rooms: on Day \( N,\) the occupied room numbers are \(\{1,2,\ldots,N\}.\) On the other hand, familiarity with this hotel chain may inspire you to place each guest in an even-numbered room, leaving the odd-numbered rooms free.

New idea: let’s arrange the guests so that eventually every room is free. On Day 1, we place the first guest in Room 1. On Day 2, we upgrade the guest from Room 1 to Room 2 and place the new guest in Room 3. On Day 3, the three guests occupy Rooms 3-5, and on Day 4, Rooms 4-7, see the illustration below.
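A minimal simulation of this scheme (a sketch of my own) makes the daily assignments explicit:

```python
def occupied_rooms(day):
    # Day 1: the first guest takes Room 1.  Each later day, every guest
    # moves up one room and the newcomer takes the room after the last.
    rooms = [1]
    for _ in range(2, day + 1):
        rooms = [r + 1 for r in rooms]
        rooms.append(rooms[-1] + 1)
    return rooms

for day in range(1, 6):
    print(day, occupied_rooms(day))
```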

What happens as \(N \to \infty?\) With the first approach, each room is eventually occupied. With the second, all the even-numbered rooms are eventually occupied. However, with the third method, something surprising happens.

As \(N \to \infty\), the set of occupied rooms becomes...

A grid of empty (white) and filled (red) doors. At each step, the number of red doors increases and shifts to the right.
The red doors are the occupied rooms on Days 1-5. Image by the author with free door graphic from Canva.

...\(\emptyset. \) Where did everyone go?

Exercise 1: Compute these intersections: \( \bigcap_{n=1}^\infty [n,\infty) \) and \( \bigcap_{n=1}^\infty (0,\frac{1}{n}). \) These exercises illustrate the danger of trying to "induct to infinity."

Closing down a full hotel

Suppose instead that we are trying to shut the hotel down. We would like to clear out every room over time, but asking guests to leave seems rude. Instead, every day we assign them new rooms using a monotone increasing function \( f: \mathbb{N} \to \mathbb{N}. \) Each guest moves to their next room assignment by evaluating \( f \) on their current room number. (To respect our guests' privacy, we will also use injective functions, though that is not necessary for the general discussion.)

Each room map should be a computable function. A function \( f: \mathbb{N} \times \mathbb{N} \times \cdots \times \mathbb{N} \to \mathbb{N} \) is computable if there exists a finite set of instructions—essentially an algorithm or computer program—that produces an output for any input in a finite number of steps. More formally, this notion underpins the Church-Turing thesis: a function is computable by an algorithm if and only if it is computable by a Turing machine. (For a look at computability and Turing machines, I recommend the Feature Columns Alan Turing and the Countability of Computable Numbers by Adam A. Smith and People and Computers Compared by Joe Malkevitch.)

If such a function is defined on only some inputs (that is, for some guests the algorithm never halts with a new room number), we call it a partial function. However, all the functions we consider below are total: they are defined for every input.

We often use recursion to define computable functions by specifying an initial output and then describing how to compute the function at \( n+1 \) based on its value at \( n. \) In keeping with computability theory convention, we include \( 0 \in \mathbb{N}; \) this number is the seed for our recursions (it is not necessary to imagine a Room 0 in the hotel).

Because our functions are total and computable, we can iterate them repeatedly to see how guests move over time:
\[
f(n),\quad f\left(f(n)\right),\quad f\left(f\left(f(n)\right)\right),\quad \dots
\]
We denote by \( f^{(k)}(n) \) the \(k\)th iteration of \(f\) applied to \(n\), with the convention that \( f^{(0)}(n) = n \) and \( f^{(k+1)}(n) = f(f^{(k)}(n)) .\) At each step \( k, \) the set of all room numbers currently occupied by guests is \( \left\{ f^{(k)}(n) : n \in \mathbb{N} \right\}. \)

Room \( m \) is freed at step \( k \) if for all \( n \in \mathbb{N},\) \( f^{(k)}(n) > m. \) In other words, once every guest has moved beyond room \( m, \) we consider that room permanently vacated.

Successor Function:
This function is exactly what we do to Hilbert's Hotel to free Room 1. Written recursively, the successor function (usually named \(S\)) is
\begin{align*}
S(0) &= 1 \\
S(n+1) &= S(n) + 1
\end{align*}
Repeated applications produce \( S^{(k)}(n) = n + k. \) To determine when Room \( m \) is freed, we look for the smallest \( k \) such that
\[ n + k > m \quad \text{for all } n \in \mathbb{N}. \]
This condition is satisfied whenever \( k \geq m, \) so Room \( m \) is freed on Day \( m. \) This map has a linear growth rate.

General addition can be built on top of this function and, consequently, multiplication (discussed in the next section). Using addition to define addition would be circular, so we treat the successor function as a basic building block from which the other recursive functions are built.

Doubling Function:
Let \( D(n) = 2n, \) the room doubling algorithm. This function is computable because multiplication can be interpreted as repeated addition and defined recursively as follows:
\begin{align*}
D(0) &= 0 \\
D(n+1) &= D(n) + 2.
\end{align*}
This time, repeatedly upgrading the guest in Room 1 produces $$ 1 \longmapsto 2\longmapsto 4\longmapsto 8\longmapsto 16 \longmapsto \cdots $$ A bit faster (this map has an exponential growth rate), but our hotel is slow to empty. Room \( m \) is freed on Day \( \lfloor \log_2 m \rfloor + 1. \)
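Both freeing-day computations can be checked with a short simulation (a sketch of my own): because each room map is monotone increasing, the guest who started in Room 1 always holds the lowest occupied room, so we need only iterate on that one guest.

```python
def day_room_is_freed(f, m):
    # For a monotone increasing room map f, Room m is freed on the first
    # day k for which f applied k times to 1 exceeds m.
    room, day = 1, 0
    while room <= m:
        room, day = f(room), day + 1
    return day

successor = lambda n: n + 1
double = lambda n: 2 * n
print([day_room_is_freed(successor, m) for m in range(1, 9)])  # Day m
print([day_room_is_freed(double, m) for m in range(1, 9)])     # about log2(m) days
```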

Tower Growth:
Given a fixed base \( a \in \mathbb{Z}^+, \) a recursive formulation for exponentiation is:
\begin{align*}
\exp(a, 0) &= 1 \\
\exp(a, n+1) &= a \cdot \exp(a, n).
\end{align*}
With exponentiation we build a function which grows much faster than exponential.
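Rendered as a minimal Python sketch, the recursion computes powers by repeated multiplication:

```python
def exp(a, n):
    # exp(a, 0) = 1; exp(a, n+1) = a * exp(a, n): repeated multiplication.
    if n == 0:
        return 1
    return a * exp(a, n - 1)

print([exp(2, n) for n in range(6)])  # [1, 2, 4, 8, 16, 32]
```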

Exercise 2: Our next room map is
\begin{align*}
T(0, n) &= n \\
T(k+1, n) &= \exp(2, T(k, n))
\end{align*}
Check your understanding of recursion by rewriting this function with a closed-form expression.

The leaning tower of Pisa next to iterated version of the function.
Breaking down the computation of \(A(4,n).\) Image by the author with free tower graphic from Canva.

Solution to Exercise 2: As the above image demonstrates, you can keep calling \( T \) to derive the closed form, or you can build up the first few levels to see the pattern:
\begin{align*}
T(0, n) &= n \\
T(1, n) &= \exp(2, T(0, n)) = \exp(2, n) = 2^n \\
T(2, n) &= \exp(2, T(1, n)) = \exp(2, 2^n) = 2^{2^n} \\
T(3, n) &= \exp(2, T(2, n)) = \exp(2, 2^{2^n}) = 2^{2^{2^n}}.
\end{align*}
In general,
\[ T(k, n) = \underbrace{2^{2^{\cdots^{2^n}}}}_{k \text{ copies of } 2}. \]
This exponential stacking is called a power tower or tetration. Let's see how tetration relocates the guest in Room 1: $$ 1 \longmapsto 2 \longmapsto 4 \longmapsto 16 \longmapsto 65536 \longmapsto 2^{65536} ~ (\text{19,729 digits}) \longmapsto \cdots $$

The number of steps \( k \) required to free Room \( m \) satisfies \( k > \log^* m, \) where \( \log^* m \) is the iterated base-2 logarithm. Tetration grows at a superexponential rate.

Yet, for each input, the amount of recursion is determined in advance: to compute \( T(k+1, n), \) we perform a single exponentiation using the value of \( T(k, n). \) Running tetration as a recursive computer program would require exactly \( k+1 \) calls on \(T.\) This ability to know how many recursions are needed for an input is our first glimpse at primitive recursion.
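A direct transcription of \(T\) (a sketch; Python's exact integers let us check the first few values of the guest's itinerary above) makes the fixed recursion depth visible:

```python
def T(k, n):
    # T(0, n) = n; T(k+1, n) = 2 ** T(k, n): a power tower of k twos
    # topped with n.  The recursion depth is fixed in advance: k calls.
    if k == 0:
        return n
    return 2 ** T(k - 1, n)

print([T(k, 1) for k in range(5)])  # [1, 2, 4, 16, 65536]
```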

Primitive Recursion

Our successor function \(S\) is a building block function. We add two more function types to the building block library: the zero function \( \displaystyle Z(n) = 0, \) and projection functions \( \displaystyle P_i^j(n_1, \dots, n_j) = n_i. \)

With these basic functions, we build new computable functions (like \(D\) and \(T\)). Letting \( \vec{n}_j = (n_1,\ldots,n_j), \) we use two formal operations:

  1. Composition \(\circ ~ \): Given functions \( g(\vec{n}_m), \) \( h_1(\vec{n}_j), \, \ldots, \) \( h_m(\vec{n}_j)\) define \( f:\mathbb{N}^j \to \mathbb{N} \) by $$ f(\vec{n}_j) = g\left(h_1(\vec{n}_j), \dots, h_m(\vec{n}_j)\right). $$
  2. Primitive Recursion: Given a basis function \( h(\vec{n}_j) \) and iterator \( g(m,\vec{n}_j,\ell), \) define \( f:\mathbb{N}^{j+1} \to \mathbb{N} \) by
    \begin{align*}
    f(0, \vec{n}_j) &= h(\vec{n}_j) & \text{Base case} \\
    f(m+1, \vec{n}_j) &= g(m,\vec{n}_j,f(m,\vec{n}_j)) & \text{Recursion}
    \end{align*}

For practice with the notation, here are two more example functions:

\begin{align*}
\operatorname{add}(0, n) &= h(n) = P_1^1(n) = n \\
\operatorname{add}(m+1, n) &= g(m,n,\operatorname{add}(m, n)) = S\circ P_3^3(m,n,\operatorname{add}(m, n)), \\
\operatorname{mult}(0, n) &= h(n) = Z\circ P_1^1(n) = 0 \\
\operatorname{mult}(m+1, n) &= g(m, n, \operatorname{mult}(m, n)) = \operatorname{add}(n, \operatorname{mult}(m, n)).
\end{align*}

You can see how these functions fit into our function family tree below. The set of all computable total functions built this way is called the class of primitive recursive functions. From a computational point of view, we code primitive recursions using only "for loops." Namely, the number of recursive calls is predetermined by the input. As a result, we can predict when a primitive recursive function will halt.
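In code, the schema really is just a for loop. Here is a minimal sketch (the helper name `primitive_recursion` is my own) that rebuilds \( \operatorname{add} \) and \( \operatorname{mult} \):

```python
def primitive_recursion(h, g):
    # Build f(m, n...) from basis h and iterator g with a single for loop;
    # the loop body runs exactly m times, so the program always halts.
    def f(m, *n):
        value = h(*n)
        for i in range(m):
            value = g(i, *n, value)
        return value
    return f

S = lambda n: n + 1                       # successor, the basic building block
add = primitive_recursion(lambda n: n,    # add(0, n) = n
                          lambda i, n, acc: S(acc))
mult = primitive_recursion(lambda n: 0,   # mult(0, n) = 0
                           lambda i, n, acc: add(n, acc))

print(add(3, 4), mult(3, 4))  # 7 12
```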

A tree diagram indicating the relationships between different functions.
A map of the primitive recursive functions seen so far. Image by the author.

A natural question arises: which computable functions can be built this way, and which cannot?

Exercise 3: When did we see \( p(i,j) = 2^i \cdot 3^j? \) Can you add another layer to that story and solve it with primitive recursive functions?

Exercise 4: Show that positive constant functions \( n\mapsto C, \) the factorial function \( n \mapsto n!, \) and the predecessor function \( \operatorname{pred}(n) = \max\{0,n-1\} \) are all primitive recursive.

The Ackermann function

In the 1920s, as David Hilbert was working to formalize mathematics, it was unknown whether primitive recursive functions could account for all total computable functions. In 1928, a student of Hilbert named Wilhelm Ackermann (a future high school teacher) constructed this total recursive function:
$$ A(m,n) =
\begin{cases}
n+1 & \text{if } m = 0, \\
A(m-1,1) & \text{if } m > 0 \text{ and } n = 0, \\
A(m-1, A(m,n-1)) & \text{if } m > 0 \text{ and } n > 0.
\end{cases} $$
(This form is credited to Rózsa Péter and Raphael Robinson.)
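The definition translates directly into a short program (a sketch; the memoization decorator is my own addition to make small values feasible):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    # The Péter-Robinson form given above.  Memoization softens the blow,
    # but the values themselves still explode as m grows.
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(1, 1), A(2, 2), A(3, 3))  # 3 7 61
```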

Black-and-white photo of the German mathematician Wilhelm Ackermann wearing a striped tie.
Wilhelm Ackermann in 1935.

To get a sense for how Ackermann's function behaves, evaluate as much as you can in the following table:
$$ \begin{array}{c||c|c|c|c|c}
A(m,n) & n=0 & n=1 & n=2 & n=3 & n=4 \\ \hline \hline
m=0 & 1 & 2 & 3 & 4 & 5 \\ \hline
m=1 & A(0,1) & A(0,A(1,0)) & A(0,A(1,1)) & A(0,A(1,2)) & A(0,A(1,3)) \\ \hline
m=2 & A(1,1) & A(1,A(2,0)) & A(1,A(2,1)) & A(1,A(2,2)) & A(1,A(2,3)) \\ \hline
m=3 & A(2,1) & A(2,A(3,0)) & A(2,A(3,1)) & A(2,A(3,2)) & A(2,A(3,3)) \\ \hline
m=4 & A(3,1) & A(3,A(4,0)) & A(3,A(4,1)) & A(3,A(4,2)) & A(3,A(4,3)) \\ \hline
\end{array} $$

You probably felt exhausted by the \( m = 2 \) row. The recursive calls for the Ackermann function cascade from one row to the next, so can \(A\) be computed with only for loops? For a hint, consider this chart showing how many times the function \(A\) is called for some inputs:

$$ \begin{array}{c||c|c|c|c|c|c}
\text{Calls} & n=0 & n=1 & n=2 & n=3 & n=4 & n=5 \\ \hline \hline
m=0 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline
m=1 & 2 & 4 & 6 & 8 & 10 & 12 \\ \hline
m=2 & 5 & 14 & 27 & 44 & 65 & 90 \\ \hline
m=3 & 15 & 106 & 541 & 2,432 & 10,307 & 42,438 \\ \hline
\end{array} $$
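The table can be reproduced by instrumenting the recursion (my own bookkeeping; I count the outermost call as well):

```python
def A(m, n):
    # Péter-Robinson Ackermann function, evaluated naively.
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def calls(m, n):
    # Total number of invocations of A when A(m, n) is evaluated naively,
    # counting the outermost call.
    if m == 0:
        return 1
    if n == 0:
        return 1 + calls(m - 1, 1)
    return 1 + calls(m, n - 1) + calls(m - 1, A(m, n - 1))

print([calls(2, n) for n in range(6)])  # [5, 14, 27, 44, 65, 90]
print([calls(3, n) for n in range(4)])  # [15, 106, 541, 2432]
```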

I was unable to compute these numbers for \( m = 4 \) on my computer (sorry if you tried by hand!), but fortunately you can examine \( A(4,2) \) with WolframAlpha. Eagle-eyed readers might recognize the number of digits from earlier.

This gives us a plausibility argument that \(A\) is not primitive recursive. Returning to our hotel, we measured how many steps it took for all guests to move beyond a given room number \( m. \) For the successor function, this took \( m \) steps. For the doubling function, it took about \( \log_2 m. \) For tetration, it took about \( \log^* m \) steps.

For the Ackermann function, the growth is extreme. If we move our first guest using the diagonal terms \( A(n,n), \) their room numbers are $$ 1 \longmapsto A(1,1) = 3 \longmapsto A(3,3) = 61 \longmapsto A(61,61) = \text{massive} \longmapsto \ldots $$ This is technically computable, but it eclipses the superexponential growth of tetration. Just two iterations for the second guest, \( 2 \longmapsto A(2,2) = 7 \longmapsto A(7,7), \) are hopeless:

\begin{align*}
A(7,7) &= A(6, A(7,6)) \\
&= A(6, A(6, A(7,5))) \\
&= A(6, A(6, A(6, A(7,4)))) \\
&= A(6, A(6, A(6, A(6, A(7,3))))) \\
&= A(6, A(6, A(6, A(6, A(6, A(7,2)))))) \\
&= A(6, A(6, A(6, A(6, A(6, A(6, A(7,1))))))) \\
&= A(6, A(6, A(6, A(6, A(6, A(6, A(6, A(7,0)))))))) \\
&= A(6, A(6, A(6, A(6, A(6, A(6, A(6, A(6, 1)))))))) \\
&~ \text{ I give up.}
\end{align*}

We will not prove the Ackermann function is not primitive recursive (Hilbert suspected this, and Ackermann proved it), but here is a key observation:

\begin{align*}
A(0,n) &= S(n) = n+1 & A(0,0) = 1 \\
A(1,n) &= \operatorname{add}(n,2) = n+2 & A(1,1) = 3 \\
A(2,n) &= \operatorname{add}(D(n),3) = 2n+3 & A(2,2) = 7 \\
A(3,n) &= \operatorname{pred}^3\circ \exp(2,\operatorname{add}(n,3)) = 2^{n+3}-3 & A(3,3) = 61 \\
A(4,n) &= \operatorname{pred}^3 \circ T(\operatorname{add}(n,3),1) = \underbrace{2^{2^{\cdots^{2}}}}_{n+3 \text{ copies}} - 3 & A(4,4) = 2^{2^{2^{65536}}} - 3
\end{align*}

Rows \( m = 0 \) and \( m = 1 \) build addition, \( m = 2 \) doubling, \( m = 3 \) exponentiation, and \( m = 4 \) tetration. Each increase in \( m \) introduces additional complexity.
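These identities are easy to spot-check numerically (a sketch; the naive recursion handles the small rows fine):

```python
def A(m, n):
    # Péter-Robinson form of the Ackermann function (naive recursion).
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

# Spot-check the closed forms row by row.
assert all(A(1, n) == n + 2 for n in range(10))
assert all(A(2, n) == 2 * n + 3 for n in range(10))
assert all(A(3, n) == 2 ** (n + 3) - 3 for n in range(6))
print("closed forms for m = 1, 2, 3 agree")
```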

It turns out that for any primitive recursive function \( f(n), \) there exists some threshold \( N \in \mathbb{N} \) so that if \( n \geq N, \) then \( A(n,n) > f(n). \) That is, the diagonal Ackermann function eventually dominates every function in the list of primitive recursive functions and therefore cannot belong to the list itself. This is a diagonalization argument similar in spirit to Cantor's famous proof of the uncountability of \( (0,1). \)

It is now time for us to leave Hilbert’s Hotel. I hope our visit gave you some new twists on this classic to think about and a feel for how recursion can lead to explosive growth.

Resources

Ackermann, Wilhelm. Zum Hilbertschen Aufbau der reellen Zahlen. Mathematische Annalen, vol. 99, no. 1, 1928, pp. 118-133.

Cutland, Nigel. Computability: An Introduction to Recursive Function Theory. Cambridge University Press, 1980.

Péter, Rózsa. Konstruktion nichtrekursiver Funktionen. Mathematische Annalen, vol. 111, no. 1, 1935, pp. 42-60.

Péter, Rózsa. Recursive Functions. Academic Press, 1967.

Robinson, Raphael M. Recursion and Double Recursion. Bulletin of the American Mathematical Society, vol. 54, no. 10, 1948, pp. 987-993.

Tourlakis, George J. Computability. Reston Pub. Co., 1984.
