This month’s topics:
How old is the decimal point?
“Decimal fractional numeration and the decimal point in 15th-century Italy,” by Glen Van Brummelen (Trinity Western University), was posted online by Historia Mathematica on February 17, 2024, and picked up two days later by Jo Marchant in a Nature news item whose headline proclaimed “The decimal point is 150 years older than historians thought.”
Before Van Brummelen’s paper, historians thought the first appearance of the decimal point was in a table published by the German Jesuit astronomer Christopher Clavius in 1593. Now, thanks to Van Brummelen, we know it can be traced back to trigonometric tables written by Giovanni Bianchini, a Venetian mathematician, around 1440. Bianchini’s tables have been digitized by the Jagiellonian Library in Kraków; the decimal points Van Brummelen noticed are on the page numbered 108.
For some perspective, the “Treviso Arithmetic,” published in 1478 (the first known printed, dated arithmetic book) has no trace of this notation.
Below, you can see an excerpt from Bianchini’s table. The first column gives angles in degrees and minutes. The second gives corresponding values of the tangent function multiplied by 10,000 [my calculator gives $\tan$(68$^\circ$) = 2.47508]. The third column, where the decimal points occur, gives the increment corresponding to an additional minute of arc. [Since 21.2 = (25,387 – 24,751)/30, etc., this sets up linear interpolation between the whole-degree and half-degree values]. So for 68$^\circ$5′ the table would give 24,751 + 5 $\times$ 21.2 = 24,857. My calculator ($\times$ 10,000) gives 24,855.
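For readers who want to check the arithmetic, here is a minimal sketch in Python (the table entries 24,751 and 25,387 are the ones quoted above; the variable names are mine):

```python
import math

SCALE = 10_000                    # Bianchini tabulates tan multiplied by 10,000

tan_68deg = 24_751                # table entry at 68 degrees
tan_68deg_30min = 25_387          # table entry at 68 degrees 30 minutes

# Third column of the table: the increment for one additional minute of arc,
# i.e. the slope for linear interpolation across the half-degree interval.
increment = (tan_68deg_30min - tan_68deg) / 30          # 21.2

# The value a table user would compute for 68 degrees 5 minutes...
interpolated = tan_68deg + 5 * increment                # 24,857.0
# ...versus the value a modern calculator gives.
exact = math.tan(math.radians(68 + 5 / 60)) * SCALE     # about 24,855

print(round(interpolated), round(exact))                # 24857 24855
```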
Besides illustrating this early occurrence of the decimal point, Van Brummelen’s article points to the paradoxical situation of calculation in Europe in the early 15th century. While merchants and bankers were struggling with the transition from Roman numerals to Indo-Arabic notation, and choosing the best way to do long division, astronomers and mathematicians were computing trigonometric tables with four or five significant digits of accuracy.
The topology of telepathy.
A team in Japan has used concepts from graph topology to probe the way in which our brains synchronize with those of the people we are interacting with. The experiment, reported in “The topology of interpersonal neural network in weak social ties” (Scientific Reports, February 29, 2024), focused on the difference between interacting with a stranger and interacting with an acquaintance. The authors, Yuto Kurihara, Toru Takahashi and Rieko Osu of Waseda University, explain that each run of the experiment involved a pair of participants; in total there were 14 pairs of strangers and 13 pairs of acquaintances. Each participant wore headphones and a wireless 29-channel electroencephalogram (EEG) headset; the two faced away from each other, and each controlled a computer mouse which, when clicked, produced a tone audible to both of them. They were instructed to listen to a sequence of eight equally spaced tones produced by a metronome and then to continue the sequence in alternation (participant 1, then participant 2, then participant 1, and so on) up to 300 clicks (150 each), matching the metronome’s inter-tap interval (ITI) as closely as possible.
The experiment was run for each pair under four conditions, including slow tap (ITI = 0.5 s) and fast tap (ITI = 0.25 s). The EEG records were filtered into three bands: waves of low (theta), medium (alpha) and high (beta) frequency, which were analyzed separately. In each case, a 58 $\times$ 58 matrix of inter- and intra-brain synchronization was set up, with an entry for each pair of monitored EEG channels (29 + 29 = 58 channels in all), including pairs from the same participant. After processing to remove background effects, each matrix entry was reduced to 1 if the synchrony between the corresponding channels was significantly higher than background and to 0 otherwise. These data were used to construct a graph with one vertex for each of the 58 channels and a link for each pair corresponding to a 1 in the matrix.
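As a sketch of that final step (with random placeholder values standing in for the paper’s processed EEG measurements), here is how a binarized 58 $\times$ 58 synchrony matrix becomes a graph:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in for the processed synchrony matrix: entry (i, j) is 1 when
# channels i and j synchronized significantly above background, 0 otherwise.
# (Placeholder data; the real entries come from the EEG pipeline.)
upper = np.triu(rng.integers(0, 2, size=(58, 58)), k=1)
sync = upper + upper.T        # symmetric: synchrony is mutual; no self-links

# One vertex per channel (say 0-28 for participant 1, 29-57 for participant 2),
# one link per pair of channels whose matrix entry is 1.
G = nx.from_numpy_array(sync)
print(G.number_of_nodes(), G.number_of_edges())
```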
The team reports that for the “fast tap” condition, the theta-band graph shows a significant difference between stranger pairs and acquaintance pairs of subjects. The graph-theoretic criterion used for this distinction is local efficiency, first introduced in a Physical Review Letters paper in 2001. The local efficiency of a graph is the average over all vertices $v$ of the local efficiency at $v$, which is calculated as follows. Look at the neighbors of $v$; together with the links joining them, they form a subgraph $G_v$. For each pair of vertices in $G_v$, calculate one over the length of the shortest path joining them in $G_v$. (Notice that when it is easy to get from one vertex to another, there will be short paths, making this number high.) The local efficiency at $v$ is the average of this number over all pairs in $G_v$.
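Here is that definition transcribed directly into Python (a sketch, not the authors’ code; it uses the networkx library):

```python
from itertools import combinations
import networkx as nx

def local_efficiency_at(G, v):
    """Average of 1/d(a, b) over pairs of neighbors of v, with distances
    measured inside the subgraph G_v induced by those neighbors."""
    Gv = G.subgraph(G[v])              # neighbors of v, plus the links among them
    nodes = list(Gv)
    if len(nodes) < 2:
        return 0.0                     # fewer than two neighbors: nothing to average
    pairs = list(combinations(nodes, 2))
    total = 0.0
    for a, b in pairs:
        try:
            total += 1.0 / nx.shortest_path_length(Gv, a, b)
        except nx.NetworkXNoPath:
            pass                       # unreachable pair contributes 0
    return total / len(pairs)

def local_efficiency(G):
    """Graph-level local efficiency: the average over all vertices."""
    return sum(local_efficiency_at(G, v) for v in G) / G.number_of_nodes()

# Sanity check: in a complete graph every pair of neighbors is adjacent,
# so every shortest path has length 1 and the local efficiency is 1.
print(local_efficiency(nx.complete_graph(5)))   # 1.0
```

(networkx also ships a built-in local_efficiency function, which should agree with this computation.)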
In their Discussion section, the authors cite previous research showing a correlation between interpersonal interactions and activity at lower EEG frequencies. They further remark that interacting with a stranger, a more novel experience, requires more processing and more retention than interacting with an acquaintance. Why exactly local efficiency should be the criterion that exhibits this difference is not, as far as I can tell, addressed.
“Observer problems” in mathematics.
Edward Frenkel (University of California, Berkeley) posted “Maths, like quantum physics, has observer problems” on the Institute for Art and Ideas website on February 6, 2024. Frenkel starts by reminding us how, in the quantum world, different experimental setups can lead to seemingly contradictory results, with electrons sometimes looking like particles and at other times like waves.
How could something like this happen in mathematics? The answer is that the mathematics we do depends ultimately on the axioms we choose as a base for our logic, and that there is no way to check that we have a completely satisfactory set of axioms.
Frenkel focuses on set theory, the system that provides the language of mathematics. It has its axioms. The set used by most of us today is called ZFC (Zermelo–Fraenkel set theory with the Axiom of Choice), but a minority (Frenkel estimates 1%) reject one of its axioms: the axiom of infinity. The axiom of infinity states that the set $\{1, 2, 3, \dots\}$ of natural numbers exists. As Frenkel explains, this is much stronger than the statement that for every natural number there is a bigger one; there, you are always talking about some finite number. Accepting the natural numbers as a set means allowing a set with infinitely many elements, and some mathematicians, the “finitists,” find this disturbing and refuse to do it. (It is disturbing, if you stop to think about it.) And as Frenkel explains, Gödel’s Second Incompleteness Theorem means that mathematics cannot prove its own consistency; in particular, there is no way to establish from inside mathematics that the axiom of infinity does not compromise its soundness.
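For the record, in the usual ZFC formulation (with the von Neumann convention that the successor of $x$ is $x \cup \{x\}$), the axiom asserts that there is a set containing the empty set and closed under successor:

$$\exists I \, \bigl( \varnothing \in I \;\wedge\; \forall x \, ( x \in I \rightarrow x \cup \{x\} \in I ) \bigr).$$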
Why does this matter? It turns out that there are theorems about finite sets that cannot be proved without the axiom of infinity. One of them (see below) is a strengthened general version of the “Cocktail Party theorem” we discussed last November. In 1977 Jeff Paris and Leo Harrington proved that this statement is true, but that any proof must make essential use of the axiom of infinity. Using the language of quantum physics, in one “setup” the statement is provable while in the other it is not. Whether the “observer” is a finitist or not makes a difference.
The theorem in question is the strengthened finite Ramsey theorem. To start working towards the statement, let’s go back to the “Cocktail Party theorem” for a moment. It is based on the following special case of the (unstrengthened) finite Ramsey theorem: suppose $N$ points are joined two by two by lines colored either red or blue. Then if $N$ is large enough, any such configuration must include a red triangle or a blue triangle. (In this special case $N=6$ is large enough, and that’s the “Cocktail Party theorem”; see the sketch after this paragraph.) The finite Ramsey theorem is about sets, but it is convenient for us to continue to “visualize” it in terms of geometric objects, which here are $k$-simplexes. A $0$-simplex is a point, a $1$-simplex is a line segment, a $2$-simplex is a triangle, a $3$-simplex is a tetrahedron. In general a $k$-simplex consists of $k+1$ vertices and all the line segments, triangles, tetrahedra, etc. spanned by any pair, triple, 4-tuple, etc. of those points. Those sub-simplexes are called the faces of the $k$-simplex: $0$-faces (vertices), $1$-faces (edges), $2$-faces (triangular faces), etc.

For the general statement of the theorem we start with three numbers $n, m$ and $c$ with $m \geq n$, and an $N$-simplex we’ll call ${\bf K}$. We begin by coloring every $n$-face of ${\bf K}$ with one of $c$ colors. The finite Ramsey theorem states that if $N$ is large enough, then ${\bf K}$ has a face ${\bf Y}$ of dimension at least $m$, all of whose $n$-faces are of the same color. (In the context of the “Cocktail Party theorem,” $n=1$ and $m=c=2$: if ${\bf K}$ has at least $6$ points, then there is a $2$-face all of whose $1$-faces, interpreted as edges of the triangle, are the same color.)
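The Cocktail Party case is small enough to verify by brute force. The sketch below (in Python, with my own naming) checks every red/blue coloring of the lines joining 5 and then 6 points:

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each line (a, b), a < b, to 0 (red) or 1 (blue)."""
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_forced(n):
    """True if every red/blue coloring of the lines joining n points
    contains a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

print(every_coloring_forced(5))   # False: color a pentagon's sides red and
                                  # its diagonals blue; neither has a triangle
print(every_coloring_forced(6))   # True: six points always force one
```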
The finite Ramsey theorem can be proved without using the axiom of infinity; the strengthened version cannot. The strengthened version involves identifying the $N+1$ vertices of ${\bf K}$ with the integers $1, 2, \dots, N+1$, and it reads the same as above except that ${\bf Y}$ can be required to have the additional property that its dimension is greater than the smallest of the integers corresponding to its vertices.
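In the more usual set-theoretic language, where a face is identified with its set of vertex labels, the Paris–Harrington statement reads (conventions on the exact inequalities vary slightly from source to source):

$$\forall\, n, m, c \;\; \exists N : \text{every } c\text{-coloring of } [N]^n \text{ has a monochromatic } Y \subseteq [N] \text{ with } |Y| \geq m \text{ and } |Y| \geq \min(Y),$$

where $[N]^n$ denotes the collection of $n$-element subsets of $[N] = \{1, \dots, N\}$.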