Daniel Ellsberg and the Science of Extortion

I don't believe that responsible people should indulge in anything that can be even remotely considered ultimatums or threats. That is not the way to reach peaceful solutions.—President Eisenhower, July 8, 1959.


Bill Casselman
University of British Columbia

Daniel Ellsberg died just last summer, on June 16, 2023. It was an occasion for obituaries. He was famous because in the late 1960s and early 1970s he made copies of thousands of secret, classified papers (the "Pentagon Papers") documenting the catastrophic failure of the American prosecution of the war in Vietnam, and then released them to public scrutiny. This led eventually, by a remarkable and circuitous route, to the resignation of Richard Nixon as President of the United States. Even more remarkably, it did not lead to a criminal conviction of Ellsberg.

Daniel Ellsberg at a press conference in 1972

The obituaries didn't say much about Ellsberg's early career, but in fact it was quite interesting—in the twilight zone, you might say, where economics, mathematics, and even military strategy overlap. Many obituaries did say that he had been involved with game theory, the mathematical child of John von Neumann and Oskar Morgenstern. This was presumably because of his employment at the RAND Corporation, a research institute funded by the U.S. Air Force that was well known for its investigations into, and publications about, applications of game theory. But Ellsberg's involvement with the technical aspects of game theory was in fact extremely weak. Instead, like his sometime colleague Thomas Schelling, he was more interested in what the mathematics could suggest than in what it could compute. They were both interested in the general question: How do people in a conflict make decisions, of an economic nature or otherwise? In particular, what useful theory can one propose about how people deal with extortionate threats? At that time, the most prominent threats involved nuclear warfare, but there were others of great interest.

Schelling had published in 1956 An essay on bargaining, one of the most influential economics papers of the twentieth century, and went on to be awarded the 2005 Nobel Memorial Prize in Economic Sciences. Ellsberg's contributions to economics were never so significant, but his own research had much in common with Schelling's, and was largely independent of it. But whereas Schelling went on to a productive if more conventional academic career, Ellsberg pursued access to power.

It will be useful to keep in mind the following timeline:

  • 1952: Ellsberg graduates A. B. summa cum laude in economics from Harvard College. Nominated for a Junior Fellowship at Harvard.
  • 1952-53: Declines it for the moment, and instead attends Cambridge University for one year.
  • 1953-54: Declines again to pursue a Junior Fellowship, and instead enrolls as an officer candidate in the U.S. Marine Corps. Serves for three years, eventually becoming a company commander.
  • 1957: Leaves the Marines, takes up his Junior Fellowship. Attends a summer school course in the mathematics of probability at Stanford. Visits the RAND Corporation informally, and is invited to return officially as a consultant the following summer.
  • Summer 1958: Spends the summer at RAND, where he works on problems of national defense. Lectures on threats in economics, politics, and war as a kind of bargaining game.
  • 1958-59: Returns to Harvard, and in March 1959 gives six public lectures on the same topic at the Lowell Institute in Boston, titled The art of coercion. Two of these were repeated in a Harvard seminar run by Henry Kissinger, who was at that time on the Harvard faculty. It has been said that many of Ellsberg's ideas were later translated into action by Kissinger when he became Secretary of State—as extortionist, rather than as victim.
  • Summer 1959: Ellsberg moves to RAND, where he would work off and on for many years. (It was because of his position there that he had access to the papers that he leaked.)
  • 1961-64: Consultant to the Department of Defense under a contract with RAND.
  • 1962: Finishes writing up his Ph.D. thesis in economics at Harvard while at RAND; the underlying research seems to have been completed considerably earlier.
  • 1964: Goes to work as special assistant to John McNaughton, Assistant Secretary of Defense (International Security Affairs), who in turn worked under Robert McNamara (known by President Johnson, who inherited him from Kennedy, as 'the man with the Stacomb').

Almost everyone who encountered Ellsberg during this period seems to have been impressed by his intelligence. His writings from these years fall roughly into three categories: (a) his senior thesis, (b) the Lowell lectures and work at RAND, (c) his Ph.D. thesis. The first and last are concerned with a topic only a dedicated economist could love, but they played a role, if only implicit, in all his early work. Ellsberg's notes for the Lowell lectures are extant and available for public reading. Although unfortunately in an unfinished state, they are both amusing and instructive.

Game theory

As far as I can see, Ellsberg was familiar with only the rudiments of game theory, and I do not believe he was interested at all in its more mathematical aspects. Nonetheless, to understand his work, it will help to know those rudiments.

I'll demonstrate how things go with three examples. Each of these is a game with two players, and each player has two possible moves, making four outcomes in all. The moves are made simultaneously, so a player does not know what his opponent's move is when he makes his own. Each outcome has a numerical evaluation, which amounts to a payment made to (say) player #1 (whom I'll call "R" for "Row") by player #2 (whom I'll call "C" for "Column"). A negative payment is taken to be a positive payment to C. These payoffs are recorded in a $2 \times 2$ matrix.

In the first game, each player writes down either $H$ or $T$ on a slip of paper and places it face down. When both slips have been placed, they are turned over. There are four outcomes:

  • Both choose $H$. Player R is paid $\$1$ by C.
  • Both choose $T$. Player C is paid $\$1$ by R.
  • The choices are $HT$ or $TH$. No payments are made.

The payoff matrix is

$$\begin{array}{c|cc}
 & H & T \\
\hline
H & 1 & 0 \\
T & 0 & -1
\end{array}$$

If R calls $T$, she can do no better than $0$, and she might have to give a dollar away. If she calls $H$, she can do no worse than $0$, and might get $1$. She should therefore certainly call $H$. Likewise, C should call $T$. This situation is called a saddle point. (The analogy to saddle points in multivariable calculus becomes plain when considering matrix games with more than 2 strategies.) In the paper Theory of the reluctant duelist, based on his undergraduate thesis, Ellsberg complained about the lack of excitement in this reckoning, but it's hard to fault the logic.
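
This reckoning is easy to mechanize. Here is a minimal sketch (my own illustration, nothing from Ellsberg or RAND) that tests a payoff matrix for a saddle point by comparing R's maximin with C's minimax:

```python
# Payoff matrix for the first game: rows are R's calls (H, T),
# columns are C's calls (H, T); entries are payments from C to R.
payoff = [[1, 0],
          [0, -1]]

# R picks the row whose worst-case payoff to her is largest (her maximin) ...
maximin = max(min(row) for row in payoff)

# ... while C picks the column whose best payoff to R is smallest (C's minimax).
minimax = min(max(payoff[i][j] for i in range(2)) for j in range(2))

# When the two coincide, the game has a saddle point in pure strategies,
# and the common number is the value of the game.
print(maximin, minimax)  # 0 0 -- R should call H, C should call T
```

Running the same check on the second game's matrix below gives maximin $-1$ and minimax $+1$; the gap between the two is exactly the "no best play" situation that calls for a mixed strategy.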

The second game is like the first, except the payoff matrix is

$$\begin{array}{c|cc}
 & H & T \\
\hline
H & 1 & -1 \\
T & -1 & 1
\end{array}$$

Here there is no saddle point, and no single best move for either player. In fact, each of them might as well toss a coin, and call whatever comes up. This is in effect a randomized or mixed strategy, which is one of the principal contributions of Von Neumann to the theory of games. In real games, some degree of randomization has been common practice for a long time—for example in the familiar strategy of bluffing in games of poker, which must be done unpredictably in order to be effective. Von Neumann made possible, in principle, a quantitative analysis of this phenomenon, and in the book with Morgenstern illustrated this with a simplified analogue of poker.

Incorporating random choices of moves into an encounter was a major part of what Von Neumann and Morgenstern brought to economics. In general, if a player has $n$ possible moves, a pure strategy is one in which the player always chooses one of them, and a mixed strategy is an array of $n$ probabilities according to which the player chooses a move, say by consulting a random number generator. In this example, the array is $(1/2, 1/2)$ and the random number generator is a coin flip.
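
For the second game the claim is easy to check directly: if R calls $H$ with probability $p$, her expected payoff is $p - (1-p) = 2p - 1$ when C calls $H$ and $(1-p) - p = 1 - 2p$ when C calls $T$, so that the worst she can expect is

$$\min\,(2p - 1,\ 1 - 2p) \le 0, \qquad \text{with equality exactly when } p = \tfrac{1}{2}.$$

The coin flip is thus the only mixed strategy that protects R against every possible play by C; by symmetry the same is true for C, and the value of the game is $0$.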

The third game will be more complicated, but also a bit more related to practical applications. R and C are at war. R is planning to ship a valuable cargo by airplane from one place to another. She has two planes available to do this, one of them (say #1) better protected from attack but more costly than #2. Both planes will make the trip. C will attack just one of them, with the intention of doing as much damage as possible.

Precise numerical values: If plane #1 is attacked, the probability of destruction is $0.2$. For plane #2 this is $0.4$. The value of the cargo is $\$400$K, that of plane #1 is $\$100$K, that of plane #2 is $\$80$K.

The payoff matrix will display the damage done to R. But this is not straightforward to evaluate. An attack on a plane is not guaranteed to do any damage at all. Insurance companies deal with this sort of thing by using the expected value of damage. This is the average amount of damage that would be likely if a huge number of attacks were made. For example, if 1,000 attacks were made on plane #1, around 200 would be successful. If it were carrying the cargo, each of these would amount to a loss of $100 + 400 = 500$, in thousands of dollars. The total loss would be $200 \cdot 500 = 100{,}000$, and the average per attack would be $100$, that is, $\$100$K. This is also called the expected loss, and it's what goes in the payoff matrix. Its other entries are calculated similarly.

$$\begin{array}{c|cc}
 & \text{C attacks \#1} & \text{C attacks \#2} \\
\hline
\text{cargo on \#1} & 100 & 32 \\
\text{cargo on \#2} & 20 & 192
\end{array}$$
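
Spelled out, the four expected losses (in thousands of dollars) are

$$\begin{aligned}
\text{cargo on \#1, attack on \#1:}\quad & 0.2\,(400 + 100) = 100, \\
\text{cargo on \#1, attack on \#2:}\quad & 0.4 \cdot 80 = 32, \\
\text{cargo on \#2, attack on \#1:}\quad & 0.2 \cdot 100 = 20, \\
\text{cargo on \#2, attack on \#2:}\quad & 0.4\,(400 + 80) = 192.
\end{aligned}$$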

How can this be used to find the mixed strategy to be used by R? We can find expected values of expected values! The recipe for the mixed strategy can be found around p. 71 in J. D. Williams' light-hearted book The Compleat Strategyst. It is graphical in nature.

Suppose C attacks plane #2, while R plays a mixed strategy, putting the cargo on plane #2 with probability $x$ and on plane #1 with probability $1-x$. The expected damage is then a linear function of $x$, as in the diagram on the left below. The expected damage from either of C's two strategies can be seen in the middle. The dark path represents the maximal expected damage C can inflict for each value of $x$. R's best mixed strategy is the $x$ that minimizes this maximum; it is the horizontal coordinate $\sigma$ of the intersection of the two line segments. In this example, $\sigma = 17/60$.

(In the figures, from left to right: the expected damage as a linear function of $x$ when C attacks plane #2; the maximum of the two expected-damage lines, one of positive and one of negative slope; the choice of $\sigma$ at the intersection of the two lines.)

Choosing the mixed strategy associated to $\sigma$ guarantees R that her expected damage will be as small as possible, whatever C does.
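
Here, for the record, is the same computation done algebraically rather than graphically (a small sketch of my own, in exact rational arithmetic): the best $x$ is the one that equalizes the expected damage from C's two possible attacks.

```python
from fractions import Fraction

# Expected-damage matrix from the text, in thousands of dollars:
# rows are R's choices (cargo on plane #1, cargo on plane #2),
# columns are C's choices (attack plane #1, attack plane #2).
a, b = Fraction(100), Fraction(32)
c, d = Fraction(20), Fraction(192)

# R puts the cargo on plane #2 with probability x and on plane #1 with 1 - x.
# Her expected damage is (1-x)*a + x*c if C attacks #1,
# and (1-x)*b + x*d if C attacks #2.
# The minimax choice makes the two equal (the intersection in the diagram):
x = (a - b) / ((a - b) + (d - c))
value = (1 - x) * a + x * c

print(x)      # 17/60
print(value)  # 232/3, an expected loss of about $77K whichever plane C attacks
```

No other mixture of the two planes can guarantee R a smaller expected loss.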

Remark. In these examples, the payoff for one player is the negative of the payoff to the other. These games are said to be zero-sum. (Another commonly used term is strictly competitive.) Few real-life applications are so strict. In general, the payoff matrix should record payoffs to both players, and they need not even be in the same currency. Nor, in real life, is it plausible that payoffs can be evaluated so precisely.

The last example is reminiscent of some of the more interesting RAND reports, for example the various notes (co)authored by the well-known statistician David Blackwell. RAND has posted hundreds of such reports. A number of other mathematicians who became prominent in more conventional mathematical pursuits also worked for a while at RAND, for example Israel Herstein, John Milnor, and John Nash.

RAND

RAND was founded in the late 1940s in the hope that it would let the Air Force call upon scientific advice about how to conduct warfare in the age of nuclear weapons, continuing a successful collaboration between scientists and the military during the Second World War. RAND was responsible for a lot of research in game theory, including some well known textbooks. As I count it, 26 people—all men, I am afraid—worked some time at RAND and were later awarded the Nobel Prize in Economics. Among them was John Nash, who has an independent reputation in purer mathematics. Henry Kissinger was also associated with RAND, but has an independent reputation of a different sort. So was Herman Kahn, on whom Doctor Strangelove is said to have been modeled.

Considering his later activities and the reputation of RAND, it might seem strange that Ellsberg went there. He himself says in some autobiographical comments:

In 1959, I became a strategic analyst at the RAND Corporation under the delusion—acquired as a summer consultant at RAND the previous year, and shared by all my colleagues and most of those who had access to Top Secret intelligence estimates—that a “missile gap” favoring the Soviets made the problem of deterring a Soviet surprise attack the overriding challenge to U.S. and world security. ... I had been drawn to the RAND Corporation because it was in the forefront of the emerging field of “decision theory,” the focus of my academic interests. Once there, I chose to apply my own work on individual decision-making under uncertainty to the most fraught, and possibly final, such decision in human history: the choice by the President of the U.S. or the Soviet premier—or, as I discovered, conceivably by one of their many subordinates—of whether to initiate all-out nuclear war.

Neither Ellsberg nor Schelling was much interested in technical aspects of game theory. What did interest them was that game theory suggested how to break up problems of making decisions into simpler factors. Both found payoff matrices and elementary probabilistic computations useful in explaining such problems. And together they managed to formulate a clear policy for how to be a more successful blackmailer! (Take a look at Schelling's An essay on bargaining and the last few pages of Ellsberg's fourth Lowell lecture.)

The economist's notion of utility

The main contribution of Von Neumann and Morgenstern to economics was the idea of seeing economic interactions as analogues of games. But there was a second, if more obscure, contribution to the theory of utility. This was the topic of both of Ellsberg's theses, undergraduate and graduate.

This topic can be introduced by the question, "What kind of entries are allowed in a payoff matrix?" Sure, the entries could be in monetary terms, but that is not all that common. Nations fight over land, corporations fight over access to markets, people fight over honour and status. Classically, economists dealt with this by pointing out that decisions often reduce to an expression of preference. This only allows one to order items in a sequence, as we shall see in a moment in a rough payoff matrix. But Von Neumann and Morgenstern pointed out that by making some very simple assumptions, one could assign to every object under consideration a numerical utility that allowed one, at least in principle, to deduce preferences. (Apples can be compared to oranges!) What was new in this was the introduction of probability and mathematical expectations in preferences. Given items $A$, $B$, and $C$, one could be asked if one preferred $A$ to a lottery ticket in which $B$ has probability $p$ of occurring and $C$ has probability $1-p$. Of course in the real world comparing apples and oranges can still be a difficult problem, but in recent years the theory of numerical utilities has been incorporated in the applications of game theory to computer networking. Take a look at the book Game theory for wireless engineers.
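
A standard illustration (mine, not Ellsberg's): suppose someone prefers $A$ to $B$ and $B$ to $C$, and normalize $u(A) = 1$, $u(C) = 0$. If she turns out to be indifferent between receiving $B$ for certain and a lottery ticket paying $A$ with probability $p$ and $C$ with probability $1 - p$, then the Von Neumann-Morgenstern axioms force

$$u(B) = p \cdot u(A) + (1 - p) \cdot u(C) = p,$$

and from then on her choices among any gambles involving $A$, $B$, and $C$ can, at least in principle, be predicted by comparing expected utilities.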

I bring up this subject because Ellsberg brings it up constantly in his Lowell lectures.

One curious feature of this theory is that utility cannot be directly interpreted as money. This has been known at least since the phenomenon of diminishing marginal utility was noticed: a fixed amount of money is of different utility to a rich person and a poor one.
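
Bernoulli's own way of quantifying this (in the paper listed at the end) was to take utility to be logarithmic in total wealth, so that, for example,

$$u(w) = \log w, \qquad u(1{,}100) - u(1{,}000) \approx 0.095, \qquad u(1{,}000{,}100) - u(1{,}000{,}000) \approx 0.0001.$$

The same $\$100$ is worth roughly a thousand times more, in units of utility, to the person with $\$1{,}000$ than to the person with $\$1{,}000{,}000$.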

The Lowell lectures

The following is a quote from Ellsberg's first Lowell lecture, The theory and practice of blackmail. It is based in turn on an article in the December 4, 1958 issue of the New York Herald Tribune.

What went through the mind of the bank teller in New York last December, as he read the note that a "little old lady" pushed through his window? "I have acid in a glass," the note said, "and if you don't give me what I want I'll splash it on you." He looked up, saw about ninety customers in the bank, a grey-haired lady in a brown coat before his window, and on his ledge a six-ounce water glass with a colorless liquid in it. He returned to the note, and read: "I have two men in here. I'll throw the acid in your face, and somebody will get shot. Put all the fives, tens, and twenties in this bag." He complied.

To be precise, the teller handed over a paper bag with $\$3,420$ inside. Just after leaving the bank, the woman dropped the paper bag onto the sidewalk. When somebody picked it up and tried to hand it to her, she ran away.

There are ways in which this blackmail attempt is both similar to and different from the two-person zero-sum games we saw earlier.

  • The blackmailer first presents her demand, along with a threat if it is not agreed to.
  • The teller has basically two options (but with variations): (i) he can comply with the demand or (ii) resist it.
  • The blackmailer then has her own choice of options, depending on what the teller does: (i) If the teller complies, she simply departs with the money. (ii) Otherwise, she can either carry out her threat or forget about it.

The most evident difference from the earlier games is that moves are not made simultaneously. Another is that payoffs to either participant are not so clear. Nonetheless, we can make up an imprecise payoff matrix:

$$\begin{array}{c|cc}
 & \text{resist} & \text{comply} \\
\hline
\text{accept} & 1 & 2 \\
\text{punish} & 3 & 2
\end{array}$$

The numbers here represent only relative damage to the victim: outcome $1$ is less damaging than outcome $2$, which in turn is less damaging than outcome $3$. Because we are looking at a case of extortion, the payoff matrix is determined: complying has to be more damaging than a successful resistance, and an unsuccessful resistance has to be the worst outcome of all.

But in order to really understand what's going on, we have to know more. When the bank teller receives the demand, he asks himself two questions: (1) How serious would the damage be if I refused and the threat were carried out? (2) If I refuse, how probable is it that the threat actually will be carried out?

To answer these, we have to assign real numerical evaluations of the consequences, the Von Neumann-Morgenstern utilities. Here is one possibility:

$$\begin{array}{c|cc}
 & \text{resist} & \text{comply} \\
\hline
\text{accept} & 0 & 20 \\
\text{punish} & 100 & 20
\end{array}$$

This matrix, too, is roughly determined—the $0$ and the $100$ just fix the scale of the payments, and the $20$ has to be somewhere in the interval between them. Otherwise we would not be considering an extortion. What is the significance of the $20$?

Ellsberg's main contribution to the subject of extortionate demands is this:

There exists a critical probability $\sigma$ with this property: if the probability that the threat will be carried out is greater than $\sigma$, then the victim should agree to the demand. Otherwise not. This probability can be computed from the Von Neumann-Morgenstern payoff matrix.

Ellsberg calls this the critical risk, but "risk" is a word used by economists (perhaps confusingly for the rest of us) to denote probability. Of course in real life it is impossible to measure all contributions so precisely that this probability can really be computed, but it is useful to have Ellsberg's analysis of what's going on.

How to compute the critical probability? Just as we computed mixed strategies. For the payoff matrix exhibited above, we make up the following diagram:

(In the diagram, the two lines intersect where the horizontal coordinate is $1/5$.)

So the critical probability in this case is $1/5$.
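
Here is the arithmetic behind the diagram, as I read it. Write $p$ for the probability that the threat is carried out if the teller resists. With the utilities above, resisting costs him $p \cdot 100 + (1 - p) \cdot 0 = 100p$ in expectation, while complying costs him $20$ for certain. The critical probability is the value of $p$ at which the two are equal:

$$100\,p = 20 \quad\Longrightarrow\quad \sigma = \frac{20}{100} = \frac{1}{5}.$$

More generally, with damages $a$ (resistance goes unpunished), $b$ (compliance), and $c$ (punished resistance), the same comparison gives $\sigma = (b - a)/(c - a)$.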

Ellsberg hints at this computation throughout his Lowell lectures, but details can be found in Lecture #4, which happens to be the most technical of all.

We can now understand the blackmailer's basic problem: she wants the victim to pay up without fuss. In practice, there are costs to her in carrying out the threat—for example, the penalties she risks become much more serious. So her goal is to make the victim believe that the probability of being punished for a failure to pay is higher than the critical probability. Much of the Lowell lectures is concerned with elaborating this thread. There is much discussion in particular of Hitler's talent for extortion, and of the use of (possibly simulated) madness to be convincing.

In our example, of course the bank teller paid up after a rather brief consideration. But you'll have to look for yourself at Theory and practice of blackmail to see the very satisfactory surprise ending.

Reading further

Ellsberg's early writings

  • The Ellsberg Project at the University of Massachusetts

    The Project Archivist is Jeremy Smith, whom I wish to thank for finding copies of a number of Ellsberg's papers for me.

  • Theory of the reluctant duelist, American Economic Review, December 1956, pp. 909-923. Reprinted in Bargaining: Formal Theories of Negotiation, edited by Oran R. Young and published by University of Illinois Press, 1975.
  • Classic & Current Notions of Measurable Utility, The Economic Journal, 1954, pp. 528-556

    The two above are extracted from his undergraduate thesis.

  • Risk, Ambiguity, & the Savage Axioms, The Quarterly Journal of Economics, November 1961, pp. 644-661.

    This is an extract from an early version of his Ph.D. thesis. It introduces what is now called the Ellsberg paradox, which hinges on how a person's attitude to uncertainty influences his decisions, and affects the validity of the Von Neumann-Morgenstern utility theory. This was Ellsberg's permanent contribution to economics.

  • The published version of the Harvard Ph.D. thesis:

    Risk, ambiguity, and decision

  • The Lowell Lectures

    There is not much mathematics in these, but his analysis of some historical events in the light of his own studies of conflict and threats is not to be missed.

    Lowell Lectures title page

    1. Theory and practice of blackmail
    2. The threat of violence

      The ending, and one page at the beginning, are missing.

    3. An analysis of conflict

      Also known as The crude analysis of strategic choices, a lecture presented at an economics conference.

    4. Power economics
    5. The political uses of madness
    6. The really intelligent detonator

      Page 21 is missing.

  • The doomsday machine.

    One of two large autobiographies. This one covers the threat of nuclear war, in particular the Cuban missile crisis, which was certainly a game of "chicken".

Secondary material

Game theory

  • John von Neumann and Oskar Morgenstern, Theory of games and economic behavior. Published by Princeton University Press in several editions.
  • J. D. Williams, The compleat strategyst, published in the RAND series by McGraw-Hill.

    A very elementary introduction to $2 \times 2$ game theory, and a bit more.

  • Duncan Luce and Howard Raiffa, Games and decisions, Wiley, 1957.

    The standard introduction in the 1950s.

  • Thomas Schelling, The strategy of conflict, Harvard University Press, 1963.

    A classic of economics, which undoubtedly influenced Schelling's Nobel Prize award. His work and Ellsberg's during these years overlapped to a considerable extent. Their periods of work at RAND also overlapped, and they certainly talked to each other. Nonetheless, both agreed that they came up with many ideas independently. However, in the long run Schelling applied himself more steadily to problems of conflict and negotiation, and it paid off.

  • The authorized history of RAND

Utility theory

  • Daniel Bernoulli, Exposition of a new theory on the measurement of risk, Econometrica 22 (1954), pp. 23-36.

    Translation from Latin. The origin of marginal utility.

  • Mark Dean, Lecture notes from courses in microeconomics at Columbia University.

    For economists, the notion of utility—allied to that mythical figure the Economic Man—is no joke.

  • Israel Herstein and John Milnor, An axiomatic approach to measurable utility, Econometrica, 1953.

    A very clean analysis of a version of the Von Neumann-Morgenstern axioms for utility.

