
Lattices, Plane and Not-So-Plane

Courtney Gibbons
Hamilton College

In January 2023, Bill Casselman wrote a great column that introduced readers to lattices, imagining them as vectors in the plane, and described some cryptographic applications. This got me thinking about a lovely little paper from the 1950s that summarized some of the then-new developments in semigroup theory involving a completely different algebraic object named – you guessed it! – a lattice. [1] And, to bring us full circle, the reason I was reading about lattice-ordered semigroups was because they offer some interesting possibilities for cryptology, some of which I'll sketch at the end.

For me, a lattice is a partially ordered set satisfying some extra conditions (namely, every pair of elements has a least upper bound and a greatest lower bound). For example, in the sketch below, some pairs of vegetables are related by "higher than" or "lower than", while others can't be compared. That's a partial order!

Four lettuce heads ordered in a lattice
A little lettuce-ordered math joke.

What about semigroups? I want to start with a little example to keep us company. (If you're more comfortable with formal definitions, you'll find one in the appendix.) Let $B = \{0,1\}$ and equip it with operations $+$ and $\cdot$ as below:

Logical "or" and logical "and" give rise to two operation tables.
Logical "or" and logical "and" give rise to these operation tables.

These operations may seem familiar if you have seen the truth tables for the logical "or" and the logical "and" – indeed, if you think of $0$ as false and $1$ as true, addition is "or" and multiplication is "and."

Put on your algebra hat for a moment and consider the following questions:

  1. Are these operations closed? That is, if you add two things in $B$, do you land back in $B$? Same question for multiplication!
  2. Are these operations associative? I.e., does it matter where you put parentheses when adding three or more things? Same question for multiplication!
  3. Are these operations commutative? I.e., does order matter?
  4. Is there an additive identity? I.e., is there an element $z$ for which $z + a = a + z= a$ for all $a \in B$?
  5. Is there a multiplicative identity? I.e., is there an element $e$ for which $e \cdot a = a \cdot e = a$ for all $a \in B$?

The answers to all of these questions are "yes" and in particular, the additive identity is $0$ and the multiplicative identity is $1$. Here are two questions for which the answer is "no":

  1. Since $1$ is not the additive identity, we can ask if $1$ has an additive inverse: is there an element $a \in B$ for which $a + 1 = 1 + a = 0$?
  2. Since $0$ is not the multiplicative identity, we might hope that it has a multiplicative inverse. Is there an element $a \in B$ for which $a \cdot 0 = 0 \cdot a = 1$?

Examining the addition table, you'll see that $1$ does not have an additive inverse. And the multiplication table shows that $0$ doesn't have a multiplicative inverse.
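
If you'd like a computer to keep us honest, here's a minimal Python sketch (the names `add` and `mul` are my own) that brute-forces every one of the questions above:

```python
from itertools import product

B = {0, 1}

def add(a, b):
    return a or b   # logical "or" plays the role of +

def mul(a, b):
    return a and b  # logical "and" plays the role of *

for op in (add, mul):
    # closed: combining two elements of B lands back in B
    assert all(op(a, b) in B for a, b in product(B, repeat=2))
    # associative: parenthesization doesn't matter
    assert all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(B, repeat=3))
    # commutative: order doesn't matter
    assert all(op(a, b) == op(b, a) for a, b in product(B, repeat=2))

# identities: 0 for addition, 1 for multiplication
assert all(add(0, a) == a and mul(1, a) == a for a in B)

# ...but 1 has no additive inverse, and 0 has no multiplicative inverse
assert not any(add(a, 1) == 0 for a in B)
assert not any(mul(a, 0) == 1 for a in B)
print("All answers check out!")
```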

Now that we’ve collected these properties, we can call $(B,+)$ what it is: an abelian semigroup with identity (or an abelian monoid). That is, $+$ is an associative, commutative binary operation on $B$ and $0$ is the identity. (Similarly, $(B,\cdot)$ is an abelian semigroup with identity.)

But that’s not the main attraction. Let’s consider matrices with entries from $B$ where matrix operations are defined with $+$ and $\cdot$ as above. For instance, matrix addition is a little different:

$$\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1+1 \\ 1+0 \\ 0+ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$

and the dot product looks a little different:

$$\begin{bmatrix} 1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 1\cdot 1 + 1 \cdot 1 = 1 + 1 = 1$$

and similarly, matrix multiplication looks different! When we get down to the entries of the resulting matrix, our operations are happening in $B$: multiplying by $1$ or by $0$ acts as expected, and so does adding $0$. But whenever we add $1$ and $1$, we get $1$ again.
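
If you'd like to experiment with these operations yourself, here's a short Python sketch (the helper names `badd` and `bmul` are my own) that implements matrix addition and multiplication over $B$ and reproduces the two computations above:

```python
def badd(A, B):
    """Entrywise Boolean sum: + is logical "or", so 1 + 1 = 1."""
    return [[x or y for x, y in zip(r, s)] for r, s in zip(A, B)]

def bmul(A, B):
    """Matrix product where + is "or" and * is "and"."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] = C[i][j] or (A[i][k] and B[k][j])
    return C

# the column-vector sum from above: (1,1,0) + (1,0,0) = (1,1,0)
print(badd([[1], [1], [0]], [[1], [0], [0]]))   # [[1], [1], [0]]

# the dot product from above: 1*1 + 1*1 = 1 + 1 = 1
print(bmul([[1, 1]], [[1], [1]]))               # [[1]]
```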

Now, the set $X$ of $n \times n$ matrices with entries from $B$ is also a semigroup with identity, but it’s no longer abelian (here's some homework: come up with an example to show that matrix multiplication need not commute).

And in [2], Luce collects some other matrix operations that we can do with these matrices, which he attributes to Birkhoff: intersection, union, complementation, and inclusion. For the purposes of this column, we’re interested in inclusion because it makes $X$ something called a lattice-ordered semigroup.

an image summarizing union, intersection, complementation, and inclusion as matrix operations in a semigroup of square Boolean matrices
Examples of the matrix operations collected by Luce in [2].
If you assert that $0 < 1$, then you can do entry-by-entry comparisons for two $n \times n$ matrices: we say $M$ is included in $N$ and write $M \leq N$ provided that $M_{i,j} \leq N_{i,j}$ for every pair of indices $i,j$. (If we know the matrices are not equal, we'll use $<$; otherwise we'll hedge our bets with $\leq$.)

We see, for example,
$ \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} < \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} < \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} < \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} < \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$.

But not every matrix is directly comparable: neither $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ nor $ \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}$ can be related to the other by $<$.
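
In code, inclusion is a one-line comparison. This little sketch (with a hypothetical helper `included`) checks one link of the chain above and confirms the incomparable pair:

```python
def included(A, B):
    """True when A <= B entrywise, i.e., A is included in B."""
    return all(x <= y for ra, rb in zip(A, B) for x, y in zip(ra, rb))

I = [[1, 0], [0, 1]]   # the identity matrix from the chain
U = [[1, 1], [0, 1]]   # the next matrix up the chain
M = [[1, 0], [1, 0]]   # the matrix incomparable to I

print(included(I, U))                     # True: I < U
print(included(I, M) or included(M, I))   # False: incomparable
```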

If you draw a picture of the scenario for $2 \times 2$ matrices, you get something like this:

The lattice of 2 by 2 Boolean matrices ordered by inclusion
A lattice of the elements of the semigroup; a line drawn upward from one element to another indicates strict inclusion ($<$).
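
If you'd rather let a computer draw the picture, here's a sketch that enumerates all $16$ of the $2 \times 2$ Boolean matrices and finds the covering relations, which are exactly the line segments in the diagram:

```python
from itertools import product

# all 16 two-by-two Boolean matrices, as nested tuples
mats = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]

def included(A, B):
    """True when A <= B entrywise."""
    return all(x <= y for ra, rb in zip(A, B) for x, y in zip(ra, rb))

# B covers A when A < B with no third matrix strictly in between
edges = [(A, B) for A in mats for B in mats
         if A != B and included(A, B)
         and not any(C != A and C != B and included(A, C) and included(C, B)
                     for C in mats)]

print(len(mats), "matrices,", len(edges), "line segments")   # 16 matrices, 32 line segments
```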

There are books aplenty about lattice-ordered (semi)groups, but one of the reasons I looked at them initially is because one of my research students showed me a paper [3] in which the authors suggest reimagining the Diffie-Hellman key exchange protocol using semigroup actions.

Example 1.
Traditional Diffie-Hellman key exchange works like this: Suppose this blog’s editor Ursula and I want to create a shared secret (or "key") to use as a password to access this blog post. We can do this using a combination of public and private information.

First, we agree on a positive integer that will serve as a base for exponentiation later. Let’s call the base $x$.

In private, Ursula picks a positive integer $a$ and computes $u = x^a$. Meanwhile, and also in private, I pick a positive integer $b$. Ursula sends me $u$, and I calculate $u^b$.

You may have already guessed that I send $c = x^b$ to Ursula ($c$ for Courtney!).

Now, our shared secret is $s = u^b = (x^a)^b = x^{ab} = x^{ba} = (x^b)^a = c^a$, thanks to the fact that exponents multiply and multiplication commutes. And if $x$, $a$, and $b$ are biggish, it's really hard for an eavesdropper to figure out what $s$ is, because it requires solving something called the discrete logarithm problem (DLP): what is $\log_x u$? If you weren't privy to Ursula's thought process, you have to crank this out using whatever algorithms are available to you for computing logarithms.
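
Here's the whole exchange in a few lines of Python, following this post's simplification of using ordinary integer powers (real "mod-$p$" Diffie-Hellman reduces everything modulo a large prime; the small numbers here are purely illustrative):

```python
x = 5             # the public base, agreed on in the open
a = 4             # Ursula's private exponent
b = 3             # Courtney's private exponent

u = x**a          # Ursula computes u and sends it to Courtney
c = x**b          # Courtney computes c and sends it to Ursula

# both parties land on the same secret: (x^a)^b = x^(ab) = (x^b)^a
assert u**b == c**a == x**(a * b)
print(u, c, u**b)   # 625 125 244140625
```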

Example 1.5.
Ursula and Courtney agree that $x = 7$. You observe $u = 117649$ and $c = 5764801$. One (very slow!) algorithm you might use is repeated division by $7$; after a few long divisions by hand, you discover that $a = 6$.

Now you have the same information Ursula does, so you can work out that $s = 5764801^6$. This turns out to be quite large, with 41 digits. [4]
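
You can even automate the eavesdropper's arithmetic. This sketch replays Example 1.5:

```python
x, u, c = 7, 117649, 5764801   # everything an eavesdropper observes

# the (very slow!) algorithm: count how many times 7 divides u
a = 0
while u > 1:
    u //= x
    a += 1

print(a)      # 6, Ursula's secret exponent
print(c**a)   # the shared secret s, all 41 digits of it
```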

Anyway, you can imagine that this algorithm does not scale well. Neither do the other classical algorithms for computing discrete logarithms (although there are also weaknesses in how Diffie-Hellman key exchange is implemented in practice! [5]). In fact, to tie back to Bill Casselman's column, experts at agencies like NIST recommend using elliptic curves instead of "mod-$p$" Diffie-Hellman.

But there’s another possible implementation that you can do with semigroups. The revised key exchange protocol with semigroups looks something like this:

Example 2.
Ursula and Courtney agree ahead of time on an element $x$ in a finite set $S$ (think column vectors) and a semigroup $(X,*)$ (think square matrices) that acts on $S$ (think matrix multiplication). The semigroup $X$ should be special: it should have an identity, and it shouldn't be a group – in fact, it shouldn't even embed in a group (meaning you can't easily "fix" the problem of inverses by inventing them when they don't exist).

In this situation, Ursula picks an element $a$ that commutes with every other element of $X$ and calculates $u = ax$. Ursula sends this new element to Courtney, who has chosen her own element $b$ that commutes with everything in $X$. Their shared secret is $b(ax) = (b*a)x = (a*b)x = a(bx)$ (which gives you an idea of how Ursula computes their shared secret). Notice that you need to pick elements that commute with every other element (i.e., elements in the center of $X$) to ensure that $b*a = a*b$; otherwise, you'll only end up with a shared secret if you accidentally pick elements that commute with each other.
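
Here's one toy way to act out Example 2 with our Boolean matrices. To guarantee commuting choices, I'll have Ursula and Courtney pick powers of a shared public matrix; powers of a fixed element always commute with each other, so this is a simplification of "pick from the center," and all of the names below are my own:

```python
def bmul(A, B):
    """Boolean matrix product: + is "or", * is "and"."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] = C[i][j] or (A[i][k] and B[k][j])
    return C

def bpow(M, k):
    """The k-th power of M under Boolean multiplication (k >= 1)."""
    P = M
    for _ in range(k - 1):
        P = bmul(P, M)
    return P

# public data: a square matrix M and a column vector x in S
M = [[1, 1, 0], [0, 0, 1], [1, 0, 0]]
x = [[1], [0], [0]]

a = bpow(M, 5)   # Ursula's secret element, chosen in private
b = bpow(M, 9)   # Courtney's secret element, chosen in private

u = bmul(a, x)   # Ursula sends u = ax
c = bmul(b, x)   # Courtney sends c = bx

# powers of M commute, so b(ax) = (b*a)x = (a*b)x = a(bx)
assert bmul(b, u) == bmul(a, c)
print(bmul(b, u))
```

With a matrix this small, the orbit of $x$ is tiny, so this particular run only demonstrates the bookkeeping, not the security.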

The security of this implementation depends on the difficulty of calculating orbits of elements under the semigroup action. To my knowledge, this hasn’t been studied too much – and that’s how I got interested in these Boolean matrices as examples of lattice-ordered semigroups!

Appendix

For the reader who finds definitions comforting, here are the definitions of a semigroup and a semigroup action:

A semigroup $(G,*)$ is a set $G$ with a binary, associative operation $*$. It may have an identity element for the operation $*$, in which case you can call it a monoid. If you have a monoid and every element also has $*$-inverses, you have a group. If every element in your semigroup/monoid/group commutes, you can tack on the adjective abelian. [Mandatory bad joke: what’s purple and commutes? An abelian grape! Even worse joke: What’s purple, commutes, and is worshipped by 11 people? A finitely venerated abelian grape!]

The center of a semigroup/monoid/group $G$ is the set of all elements that commute with all other elements (that is, it's the set $Z(G) = \{z \in G \, : \, z * a = a*z \, \forall \, a \in G\}$).

A semigroup action of $(G,*)$ on a set $S$ is a map $\odot: G \times S \to S$ where $g \odot (h \odot s) = (g*h) \odot s$ (and if $G$ has an identity element, $e$, we insist $e \odot s= s$).
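
As a sanity check, here's a brute-force verification that Boolean matrix multiplication satisfies the action axiom for all $2 \times 2$ matrices acting on column vectors:

```python
from itertools import product

def bmul(A, B):
    """Boolean matrix product (it doubles as the action on vectors)."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] = C[i][j] or (A[i][k] and B[k][j])
    return C

mats = [[[a, b], [c, d]] for a, b, c, d in product((0, 1), repeat=4)]
vecs = [[[a], [b]] for a, b in product((0, 1), repeat=2)]

# the axiom: g applied to (h applied to s) equals (g*h) applied to s
assert all(bmul(g, bmul(h, s)) == bmul(bmul(g, h), s)
           for g in mats for h in mats for s in vecs)

# and the identity matrix fixes every s, as a monoid action should
E = [[1, 0], [0, 1]]
assert all(bmul(E, s) == s for s in vecs)
print("Boolean matrix multiplication really is a (monoid) action!")
```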

References and End Notes

  1. With apologies to David A. Singer, who has one of the best punny titles you could hope for in a textbook.
  2. Duncan R. Luce, A Note on Boolean Matrix Theory, Proceedings of the American Mathematical Society, Vol. 3, 1952, pp. 382-388. This is a nice, short, accessible paper if you're looking for something fun to read and have an introduction to linear algebra under your belt.
  3. Gérard Maze, Chris Monico, and Joachim Rosenthal, Public Key Cryptography Based on Semigroup Actions, Advances in Mathematics of Communications, Vol. 1, No. 4, 2007, pp. 489-507.
  4. Indeed, $s = 36703368217294127796651824617432373264384$. If you have a kiddo in your life who likes to know how to “say” large numbers, you can impress them with this one: “36 duodecillion 703 undecillion 368 decillion 217 nonillion 294 octillion 127 septillion 796 sextillion 651 quintillion 824 quadrillion 617 trillion 432 billion 373 million 264 thousand 384” – and then use your new clout to convince them that place value is what matters, not names!
  5. David Adrian, Karthikeyan Bhargavan, Zakir Durumeric, Pierrick Gaudry, Matthew Green, J. Alex Halderman, Nadia Heninger, Drew Springall, Emmanuel Thomé, Luke Valenta, Benjamin VanderSloot, Eric Wustrow, Santiago Zanella-Béguelin, and Paul Zimmermann, Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice, 22nd ACM Conference on Computer and Communications Security (CCS ’15), Denver, CO, October 2015. Even if you aren’t an expert, this is an accessible read!
