Tony’s Take September 2023

This month’s topics:

Move Over, Euclid?

In this article headed “Move Over, Euclid” for New Scientist, Kate Kitagawa explains that the history of mathematics tends to concentrate on Europe at the expense of mathematical knowledge from other places. For example, the Pythagorean Theorem was known not just to the ancient Greeks but also in ancient Babylonia, Egypt, India and China. Liu Hui (3rd century CE) wrote a commentary on the Nine Chapters on the Mathematical Art that “includes the earliest written statement of the theorem that we know of,” writes Kitagawa. Unless, that is, we happen to know about Euclid (died c. 270 BCE) and his Elements, whose Book I, Proposition 47 states that “in right-angled triangles the square on the side opposite the right angle equals the sum of the squares on the sides containing the right angle” (from David E. Joyce’s online edition).

A diagram of Liu Hui's proof. There is a square of side length a, with a right triangle with side lengths a and b and hypotenuse c fitted into the top left corner. The hypotenuse forms one side of a second square. The parts of the second square that do not overlap with the first are cut into various shapes and shaded.
The diagram for Liu Hui’s proof is lost. This reconstruction, based on one by Joseph W. Dauben (in Victor Katz and Annette Imhausen’s Sourcebook), illustrates Liu Hui’s in-out method of swapping in (hatched) and out (solid) pieces of area.

Liu Hui’s proof is a non-obvious cut-and-paste argument quite alien to Euclid’s way of working; it illustrates how the same mathematical truth can be discovered in completely separate cultures, and so provides a valuable lesson to students of the history of mathematics. The chronology is really beside the point, but it should be kept straight.

I missed a related item last April. Two high-school students discovered a new trigonometric proof of that same Pythagorean Theorem. The news was covered in The Guardian and by Leila Sloman for Scientific American. (Note: Leila Sloman edits this column.) What made this noteworthy was that the identity $\sin^2\theta + \cos^2\theta=1$ for any angle $\theta$, a basic element of trigonometry, is itself an application of Pythagoras’s Theorem, so at one time it seemed that any such proof would have to be circular; the mathematician Elisha Loomis stated in his 1927 book The Pythagorean Proposition that no trigonometric proof could be valid. (In fact, a couple of valid trigonometric proofs have since been discovered, but this was a new one.) What proved Loomis wrong in this case was the Law of Sines (any triangle with sides of length $a, b, c$ and opposite angles $\alpha, \beta, \gamma$ must obey the identity $\frac{\sin\alpha}{a}=\frac{\sin\beta}{b}=\frac{\sin\gamma}{c}$), which is independent of the Pythagorean Theorem. The two students, Calcea Johnson and Ne’Kiya Jackson, used the Law of Sines, an intricate geometric construction (they call it the “Waffle Cone”) and a calculation involving infinite series to nail their result.
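
To see why circularity is the worry (this is the standard observation, not part of the students’ argument): in a right triangle with legs $a$ and $b$, hypotenuse $c$, and acute angle $\theta$ opposite the leg $a$, we have $\sin\theta = a/c$ and $\cos\theta = b/c$, so

$$\sin^2\theta + \cos^2\theta = \frac{a^2+b^2}{c^2},$$

and the identity $\sin^2\theta + \cos^2\theta = 1$ says exactly that $a^2+b^2=c^2$. A trigonometric proof of the Pythagorean Theorem therefore has to avoid relying, even implicitly, on this identity.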

Their proof has not been published as of this writing, but an ingenious YouTuber was able to hazard a reconstruction based on a diagram of theirs that appears briefly in a video clip embedded in the Guardian article.

A right triangle, cut into an infinite number of ever-shrinking right triangles.
The “Waffle Cone” construction that Calcea Johnson and Ne’Kiya Jackson devised for their proof that $a^2+b^2=c^2$. Image by Tony Phillips.

How natural is 1+1=2?

Maybe not as natural as we think. “Diverse mathematical knowledge among indigenous Amazonians”, published August 21, 2023 in PNAS, investigates the origins of mathematics in humans. The authors, a team of seven led by David O’Shaughnessy and Steven Piantadosi of UC Berkeley, worked with communities of the Tsimane’, a Bolivian indigenous people “for whom formal schooling is comparatively recent in history and variable in both extent and consistency.”

The article reports on several studies. The first is a meta-analysis of earlier work in which Tsimane’ children were asked to move a certain number ($N$) of counters. This test is used to group children into “knower levels” according to the highest $N$ they respond to correctly. Those who make it all the way to 8 are termed “full counters.”

Chart showing that the more years of education a child has, the more likely they are to be a "full counter".
This excerpt from Fig. S.2 in the article’s supplementary information gives the distribution of “knower levels” among Tsimane’ children aged 6-8. The article (cited above) PNAS 120 (35) e221599912 is published under a CC BY-NC-ND 4.0 DEED license (https://creativecommons.org/licenses/by-nc-nd/4.0/).

This study shows that, for children, accurate comprehension of the meaning of number-names depends critically on the number of years of schooling. For example, almost none of the 6-8-year-olds with less than one year of schooling were comfortable with cardinalities above 4. (For adults the picture was different: almost everyone with one or more years of education was a “full counter.”)

Another study, involving addition, shows a more complicated picture. As the authors tell us, many developmental theorists explain number learning as essentially modeled on the axiomatic system mathematicians have derived for the non-negative integers, starting from 0 and using the successor function $S(x)$ to define $1$ as $S(0)$, $2$ as $S(S(0))$ and so forth, so that the operation of “adding one” is a basic and natural thing. The authors tested this idea in a single, particularly remote Tsimane’ village, presenting addition questions to adult participants who had little or no formal education. Each test item was posed in two ways: formally (using mathematical language, as in school) and as a more practical word problem, for example one involving prices. They used a set of 12 augends (the $a$ in $a+b$) together with the addends (the $b$) 1 and 5.
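
To make the successor-function picture concrete, here is a minimal sketch (mine, not the paper’s) of how “adding one” works in that formal system; it only illustrates the idea the developmental theorists appeal to.

```python
# A toy version of the successor-based definition of the counting numbers
# described above: 0 is primitive, S(x) is "the next number," and addition
# is defined by repeatedly applying S. This is an illustration of the
# formal picture only, not anything from the PNAS study.

def S(x):
    """Successor function: S(x) is the number after x."""
    return x + 1

ZERO = 0
ONE = S(ZERO)        # 1 is defined as S(0)
TWO = S(S(ZERO))     # 2 is defined as S(S(0))

def add(a, b):
    """Peano-style addition: a + 0 = a, and a + S(b) = S(a + b)."""
    return a if b == 0 else S(add(a, b - 1))

print(add(ONE, ONE) == TWO)   # True: 1 + 1 = 2 in this formal sense
```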

The results of the addition study. Success rate on addend 1 problems is graphed in tan, addend 5 in blue. Multiples-of-five augends correspond to darker shading. Fig. 5 from the Open Access article PNAS 120 (35) e221599912. Published under CC BY-NC-ND 4.0 DEED license (https://creativecommons.org/licenses/by-nc-nd/4.0/).

The authors point out that $1+1$ was not obvious for everyone (although $+1$ sums overall got more accurate answers). They also point out how much better performance was when the augend was divisible by 5. The reason they give for this anomaly is commercial: in this village the base unit of sales and purchasing is most often 5 Bolivianos. At the end of the article, they refer to their work and to other investigations with other cultures as evidence supporting “the theory that number’s emergence was tied to more concrete, practical uses in specific cultural circumstances, rather than being predetermined by innate logic.”

Cortex neuron density distribution is lognormal.

Aitor Morales-Gregorio, Alexander van Meegen and Sacha J. van Albada (Jülich Research Centre, Jülich, Germany) investigated how neuron density varies among different mammals’ cerebral cortices. Their findings are reported in the August 15, 2023 issue of Cerebral Cortex. Sampling a natural parameter like neuron density in the cortex (or, for example, individual height in a human population) typically gives a range of values; the question is, how are those values distributed? That is: given an interval of possible values, what is the probability that the value for a given sample lies in that interval? For such natural parameters the probability distribution is usually described by a probability density function $f(x)$, a positive function defined over the range of possible values such that the probability of a measurement landing in the interval $[a,b]$ equals the area under the graph of the function over that interval. (So the total area under the graph has to be 1.)
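
In symbols (standard notation; writing $X$ for the measured quantity is my choice, not the paper’s):

$$P(a \le X \le b) = \int_a^b f(x)\,dx, \qquad \int_{-\infty}^{\infty} f(x)\,dx = 1.$$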

A probability density function. The area under its graph (green curve) between $a$ and $b$ gives the probability that the measurement value will lie between $a$ and $b$. Image by Tony Phillips.

Many measurements of interest in the natural world are normally distributed (their probability density function has the same shape as the blue graph, the “bell-shaped curve,” in the image below). In particular, the Central Limit Theorem implies that a quantity which arises as the sum, or average, of many small independent contributions will tend to be normally distributed. But this article’s authors found that this was not the case for neuron density in cortical samples from seven different mammalian species (humans, mice and five types of monkeys). The distribution turned out to be lognormal, with a density function graph like the red curve below. The authors describe these findings as “a new organizational principle” of brain structure (“cortical cytoarchitecture”).

The lognormal density function is asymmetric, with a fatter right tail than the normal density function.
The lognormal density function (red graph) gives the distribution when the logarithms of the variable are normally distributed: the area under the red curve between $a$ and $b$ is equal to the area under the graph of the normal density function (blue curve) between $\ln a$ and $\ln b$. Image by Tony Phillips.
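
For readers who want the formula (standard probability, not quoted from the paper): if $\ln X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, then $X$ has the lognormal density

$$f(x) = \frac{1}{x\,\sigma\sqrt{2\pi}}\, e^{-(\ln x - \mu)^2 / (2\sigma^2)}, \qquad x > 0,$$

which is exactly what the change of variables $y = \ln x$ in the area statement above produces.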

The authors analyzed data from studies published over the past 16 years covering one or more of the species in their survey. Experimental protocols varied; the easiest to describe was that for the baboon: “the cortex was subdivided into small regions of approximately equal size and shape … ” with 142 samples taken.

Results for the baboon.
The results for the baboon. (a) The histogram of the experimentally recorded neuron densities. (b) The histogram of the logarithms of those densities. The second distribution was identified at a high confidence level as normal, as would be expected if the first is lognormal. Image adapted from the Open Access article “Ubiquitous lognormal distribution of neuron densities in mammalian cerebral cortex” by the authors named above, Cerebral Cortex 33, 9439-9449. Published under a Creative Commons license.
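
As a schematic illustration of this kind of check (a sketch only, not the authors’ code, and with invented parameters rather than their data), one can generate lognormally distributed “densities,” take logarithms, and test the logs for normality:

```python
# Sketch: confirm that log-transformed lognormal samples look normal,
# mimicking the panel (a) / panel (b) comparison above.
# All parameters here are made up for illustration, not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 142 fake "neuron density" samples, lognormally distributed:
# the natural log of each sample is normal with the mean and sigma given here.
densities = rng.lognormal(mean=11.0, sigma=0.3, size=142)

log_densities = np.log(densities)

# Shapiro-Wilk test of normality on the log-transformed data.
# A large p-value means normality is not rejected, consistent with
# the original samples being lognormally distributed.
stat, p = stats.shapiro(log_densities)
print(f"Shapiro-Wilk on log densities: W = {stat:.3f}, p = {p:.3f}")
```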