## This month’s topics:

### Math, discovered or invented?

The perennial question is brought up again by Robert J. Marks II on the *Evolution News* website, July 9, 2023. The evidence, Marks tells us, points towards discovery. I believe most mathematicians think that way too. The way we speak about “searching for a solution” to a problem implies that the solution exists, out there somewhere, and that we just have to find it. The evidence Marks refers to is the occurrence of simultaneous discoveries. “Many mathematical breakthroughs are sometimes independently reported by two or more mathematicians at roughly the same time.” He gives several examples, the most famous being the discovery/invention of differential and integral calculus by Newton and Leibniz in the 17th century and the discovery/invention of non-Euclidean geometry by Gauss, Bolyai and Lobachevsky in the early 19th.

Here is another example that I think should be better known. In 1758 Euler’s publication of his observation that the numbers $S$ of vertices, $A$ of edges and $H$ of faces of a convex polyhedral solid must satisfy $S-A+H=2$ (Propositio IV) included, as he described it, the completely equivalent statement: in such a solid, the sum of all the face angles is equal to $(4S-8)$ right angles, i.e. $2\pi S - 4\pi$ (Propositio IX). In his introduction he remarked how surprising it was that in all the years geometry had been studied, these most basic elements of solid geometry had remained undiscovered. But there he was wrong. Back in 1621 Descartes had formulated “Propositio IX” in almost the same terms. This work was never published and in fact was lost, but not before Leibniz had transcribed it (around 1675) from the papers Descartes left behind. It still remained unknown until it was discovered among Leibniz’s papers towards the middle of the 19th century. More details here.
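Both propositions are easy to verify on the familiar regular solids; a few lines of Python (the data tuples below are my own tabulation, not Euler’s) make the arithmetic explicit:

```python
# Verify Euler's two equivalent statements on three convex solids.
# Each entry: (vertices S, edges A, faces H, face-angle sum in right angles).
# Angle sums: tetrahedron 4 faces * 3 angles * (2/3 right angle) = 8;
# cube 6 * 4 * 1 = 24; octahedron 8 * 3 * (2/3) = 16.
solids = {
    "tetrahedron": (4, 6, 4, 8),
    "cube": (8, 12, 6, 24),
    "octahedron": (6, 12, 8, 16),
}

for name, (S, A, H, angle_sum) in solids.items():
    assert S - A + H == 2          # Propositio IV
    assert angle_sum == 4 * S - 8  # Propositio IX
    print(name, "checks out")
```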

That two mathematicians, over a hundred years separated in time, should write down the same theorem in almost exactly the same words seems to me strong evidence that mathematical truth exists independently of us, waiting to be discovered.

### Million-dollar disproof.

Usually prizes are offered for proofs, but now Dwango founder Nobuo Kawakami is offering a potential $1 million for a paper that finds a flaw in one. Manon Bischoff reports the story in *Scientific American*, July 28, 2023.

Kawakami’s offer targets Shinichi Mochizuki’s notoriously inscrutable 2012 proof of the $abc$ conjecture. Mochizuki is a distinguished mathematician at Kyoto University and the $abc$ conjecture is hugely important in number theory; nevertheless, its status is still not completely settled (an Editor’s Note appended to Bischoff’s article states that “a majority of number theorists believe the conjecture remains unproved”).

The conjecture, dating from 1985, is due to Joseph Oesterlé and David Masser. As Bischoff observes, “The special thing about the conjecture is that it combines the additive and multiplicative properties of natural numbers.” Here is a plain-language approximation, adapted from *The Conversation*:

- Suppose $a, b$ and $c$ are relatively prime (no common factors) whole numbers with $a+b=c$, and write them as products of primes. If $a$ and $b$ consist of large powers of small primes, then $c$ can be expected to be made up of small powers of large primes.
- The authors include the following suggestive example: $a = 2^{10} = 1024$, $b=5^8 = 390625$, $c = 391649 = 457\times 857$, the product of two large primes.
- Readers can use $4 + 25 = 29$, $16 + 25 = 41$, etc. to experience this somewhat spooky phenomenon. The actual statement of the conjecture (available conveniently here) involves giving an exact mathematical formulation for “small,” “large” and “can be expected to be.”
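Readers who want to run such experiments can do so in a few lines of Python. In this sketch the helper names `factorize` and `radical` are my own; the radical $\mathrm{rad}(abc)$, the product of the distinct primes dividing $abc$, is the quantity the formal conjecture actually compares against $c$:

```python
def factorize(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def radical(n):
    """Product of the distinct primes dividing n (the 'rad' of the conjecture)."""
    r = 1
    for p in factorize(n):
        r *= p
    return r

# Coprime pairs (a, b); the conjecture concerns how often c = a + b
# can exceed (a power of) rad(a*b*c).
for a, b in [(1, 8), (4, 25), (16, 25), (1024, 390625)]:
    c = a + b
    # (1, 8, 9) is a classic "high quality" triple: c = 9 exceeds rad(abc) = 6.
    print(a, b, c, factorize(c), radical(a * b * c))
```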

The reason for the $abc$ conjecture’s importance is its central position in number theory. Its proof would cast new light, for example, on Fermat’s Last Theorem (formulated in 1637 and proved in 1995). So, understanding the 500 pages of Mochizuki’s argument (based, as Bischoff informs us, on another 500 pages of his previous work) became critical for the mathematical community. But the best and the smartest, including Peter Scholze (2018 Fields medalist), were not able to plow through and check that it worked. Scholze and his colleague Jakob Stix went to Japan to confer with Mochizuki but were not convinced (they published *Why $abc$ is still a conjecture* in 2018). A revised version of Mochizuki’s manuscript was formally published in 2021, but Scholze maintains in his *Zentralblatt* review of the manuscript that the argument is still incomplete. This impasse is the context for Kawakami’s prize offer.

### The ninth Dedekind number.

Posted on the *Popular Mechanics* website July 13, 2023 was “After a 32-Year Search, Mathematicians Have Found the Elusive D(9) Number,” by Jackie Appel. The calculation was announced in two essentially simultaneous publications: one by Christian Jäkel (Dresden), the other from a Paderborn University and KU Leuven team led by Lennart Van Hirtum. (Neither paper has been peer-reviewed, but the two groups’ having arrived at the same 42-digit number by different methods makes the result very plausible.)

What is this number? Formally, the $n$-th Dedekind number is the number of distinct monotone Boolean functions of $n$ variables.

A *Boolean function* of $n$ variables takes as input a string of $n$ binary digits (each one 0 or 1) and outputs a binary digit. A Boolean function is *monotonic* if changing an input 0 to a 1 can never change the output from a 1 to a 0. For example, if a Boolean function has $f(0,1) = 1$, then $f(1,1) = 1$ also. This matches the usual idea of a monotonic function if we accept that changing an input 0 to a 1 changes the input $n$-tuple to a “larger” one, and consequently forbids a smaller output. For one variable there are four possible functions:

$$f(0)=0, f(1)=0;~~~g(0)=0, g(1)=1;~~~h(0)=1, h(1)=0;~~~i(0)=1, i(1)=1.$$

The only non-monotonic one is $h$, so $D(1)=3$.
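For small $n$ the Dedekind numbers can be checked directly. Here is a minimal brute-force count in Python (my own sketch for illustration, not the method used in either announced computation, which must cope with $2^{512}$ candidate functions for $n=9$):

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables by brute force.

    A function is represented by its tuple of output bits, one per
    input n-tuple. Monotonicity: if input x is coordinatewise <= input y,
    the output at x may not exceed the output at y.
    """
    inputs = list(product([0, 1], repeat=n))
    count = 0
    for outputs in product([0, 1], repeat=2 ** n):
        f = dict(zip(inputs, outputs))
        if all(f[x] <= f[y]
               for x in inputs
               for y in inputs
               if all(xi <= yi for xi, yi in zip(x, y))):
            count += 1
    return count

print([dedekind(n) for n in range(4)])  # [2, 3, 6, 20]
```

For one variable this confirms $D(1)=3$: of the four functions above, only $h$ is rejected.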

Appel gives us a more pictorial explanation, due to Van Hirtum, who talks of a game with an $n$-dimensional cube. [This is a mathematical figure of speech. A $0$-dimensional “cube” is a point; a $1$-dimensional cube is a line segment; a $2$-dimensional one is a square; a $3$-dimensional one is an ordinary cube; a model for an $n$-dimensional one is the set of points in $n$-space with coordinates between $0$ and $1$—its *corners* are exactly the set of all strings of $n$ binary digits.] In Van Hirtum’s explanation, the $n$-cube is balanced on its lowest corner (the corner corresponding to the $n$-tuple with all $0$s) in such a way that larger-$n$-tuple corners appear higher in the picture.

Then Van Hirtum visualizes a monotone Boolean function of $n$ variables as a coloring of the vertices of the $n$-cube white (for zero) or red (for 1) so that you never place a white corner above a red one. The $n$-th Dedekind number $D(n)$ is the number of different colorings obeying that rule.
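The coloring rule can be transcribed almost literally into code. This sketch (again my own, and hopeless beyond tiny $n$) says a corner $v$ sits at least as high as a corner $u$ when $u$’s $1$-bits are a subset of $v$’s, and counts the colorings in which every corner above a red corner is also red:

```python
from itertools import product

def cube_colorings(n):
    """Count red/white colorings of the n-cube's corners in which no
    white corner sits above a red one: if u's 1-bits are a subset of
    v's 1-bits (so v is at least as high), a red u forces a red v."""
    corners = list(product([0, 1], repeat=n))
    count = 0
    for reds in product([False, True], repeat=2 ** n):
        color = dict(zip(corners, reds))  # True = red, False = white
        ok = all(color[v]
                 for u in corners if color[u]
                 for v in corners
                 if all(ui <= vi for ui, vi in zip(u, v)))
        if ok:
            count += 1
    return count
```

As expected, this reproduces the same values ($3$, $6$, $20$, …), since a coloring obeying the rule is exactly a monotone Boolean function with red standing for output 1.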

It was a big job, even for state-of-the-art computers and algorithms. Jäkel reports that his computation took 5311 hours, while Van Hirtum mentions “three months real-time.” Given that the *number of digits* in the $n$th Dedekind number grows exponentially with $n$ (they are tabulated here), it may take us a very long time to get to $D(10)$. This calculation was also covered by Harry Baker for Livescience.com.