Looking for structure comes naturally to human beings. Gazing over a landscape and identifying patterns; reading poems and feeling where the beat lies; staring at paintings and figuring out the stories being told. Structure both guides us and reassures us.

Structure lies at the core of mathematics. We look for abstract patterns and scaffolding in all sorts of mathematical phenomena. Counting is no exception in this hunt. When we are in elementary school, we learn rules for computing, for example, $3 + 10$, or $10 - 4$, or $1284 \times 12$. We learn that if we want to compute $(3+10)\times 4$, we can just compute $3 \times 4 + 10 \times 4$. Later, in maths at university, these kinds of properties are baked into the abstract definition of a ring. A ring is a set $R$ together with two operations: $+$ and $\times$, with special properties (which we’ll get to later) and two distinguished elements: $0$ and $1$, which are the identities of these operations. A simple example of a ring is the integers $\mathbb{Z}$, where $+$ and $\times$ are precisely the elementary school operations we’ve known for years.
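The distributive law just mentioned can be spot-checked by machine. A minimal Python sketch (the function name `distributes` is ours, purely for illustration):

```python
# Spot-check the distributive law (a + b) * c == a*c + b*c in Z.
def distributes(a, b, c):
    return (a + b) * c == a * c + b * c

print(distributes(3, 10, 4))   # True: the schoolbook example (3 + 10) * 4
print(all(distributes(a, b, c)
          for a in range(-20, 21)
          for b in range(-20, 21)
          for c in range(-20, 21)))  # True on every sampled triple
```

Of course, a finite check is no proof: the law holds throughout $\mathbb{Z}$ because it is one of the ring axioms.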

To the eyes of a certain type of mathematician, however, the integers are not the wholesome and friendly object that they might seem at first.

## Are we secretly linguists?

Model theorists, who live and work in the realms of mathematical logic, see mathematics as a language to be understood through the lens of mathematics itself. While this might seem strange at first, it is no stranger than linguists analysing English through the lens of English itself. The bread and butter of mathematical research is proofs, and proofs can be modelled mathematically in the same way that we model a flowing fluid, the flight of a bird, or the magnetic field emanating from my Christmas lights.

When we say that a theorem is true, the model theorists argue, what we are really doing is playing a game. We have some rules—which mathematicians call ‘axioms’—and we reason to deduce which moves are allowed. If you have ever played Dungeons & Dragons, you may have realised that the rulebooks don’t cover *all possibilities*. Some reasoning is left to the players’ deduction (often wreaking havoc on friendships and destroying years-long relationships, not unlike mathematical research). Starting from the rules that are written down (the *axioms*), those who approach the game can try and argue that certain moves (the *theorems*) are possible and allowed.

What does linguistics, and logic by extension, have to do with this? Any language needs an alphabet: a set of symbols with which we can write down our phrases and statements. If we want to talk about apples, we had better have the letters necessary to build the word *apple* in our alphabet. So, what do we want to talk about in mathematics?

Let’s consider rings again. We have already seen the ring of integers $\mathbb{Z}$, but one can also be a bit more creative, and come up with all sorts of rings. For example, $2 \times 2$ matrices with real entries form a ring $M_2(\mathbb{R})$, which is radically different from $\mathbb{Z}$: the multiplication of matrices is not commutative! For example,

\begin{align*}

\begin{pmatrix}

2 & 1 \\

1 & 1

\end{pmatrix}

\times

\begin{pmatrix}

1 & 2 \\

1 & 1

\end{pmatrix}

=

\begin{pmatrix}

3 & 5 \\

2 & 3

\end{pmatrix},

\\

\begin{pmatrix}

1 & 2 \\

1 & 1 \\

\end{pmatrix}

\times

\begin{pmatrix}

2 & 1 \\

1 & 1 \\

\end{pmatrix}

=

\begin{pmatrix}

4 & 3 \\

3 & 2 \\

\end{pmatrix}.

\end{align*}
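The two products above can be verified mechanically. A short Python sketch using plain nested lists (the helper `matmul` is ours):

```python
# Multiply two 2x2 matrices given as nested lists of rows.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]
B = [[1, 2], [1, 1]]
print(matmul(A, B))  # [[3, 5], [2, 3]]
print(matmul(B, A))  # [[4, 3], [3, 2]]  -- a different answer!
```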

If we want to understand rings like model theorists do, then, we need symbols to talk about their operations and identities: a symbol $+$ for addition, a symbol $\times$ for multiplication, a symbol $0$ and a symbol $1$. To express mathematical statements, we will need to expand our alphabet a little bit. Model theorists speak a language called *first-order logic*, where alphabets are assumed to also contain variables ($x,y,z,\dots$), connectives ($\land$, which represents *and*; $\lor$, which represents *or*; $\Rightarrow$, which represents *then*; $\neg$, which represents *not*), quantifiers ($\forall$, which represents *for all*; $\exists$, which represents *there exists*), and a symbol for equality, $=$.

We now have a rather large alphabet. Let’s try to express a mathematical fact. Suppose we want to state *multiplication $\times$ is a commutative operation*. What this means is that whenever I pick two elements $a$ and $b$, $a\times b$ is the same as $b\times a$. In first-order logic, it looks like this:

\[

\forall a \,\, \forall b \, (a\times b = b\times a).

\]

As another example, suppose we want to state that all polynomials of degree $2$ have a root. This means that no matter how we choose coefficients $a, b, c$ with $a \neq 0$ (so that the polynomial really has degree $2$), there is a root of the polynomial $ax^2+bx+c$. In first-order logic, this looks like this:

\[

\forall a \,\, \forall b \,\, \forall c \, (\neg(a = 0) \Rightarrow \exists x \, (ax^2+bx+c=0)).

\]
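To see why a sentence like this separates $\mathbb R$ from $\mathbb C$, consider the concrete instance $x^2+1$: it has no real root, but it has the complex root $i$. A quick Python sketch using the standard library `cmath` module:

```python
import cmath

# x^2 + 1 = 0 has no solution among the reals...
print(all(x * x + 1 >= 1 for x in [-2.0, -0.5, 0.0, 0.5, 2.0]))  # True: x^2 + 1 never drops below 1

# ...but it does among the complex numbers.
root = cmath.sqrt(-1)
print(root)                    # 1j
print(root * root + 1 == 0)    # True: i really is a root of x^2 + 1
```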

First-order logic, together with the language for rings that we have chosen above, allows us to express many interesting properties of rings. Certain properties will be true in certain rings, and not in others: for example, the operation $\times$ is going to be commutative in the integers $\mathbb Z$, but not in $M_2(\mathbb R)$. We can collect all the statements that are true in a certain ring $R$ in the *theory* of that ring, denoted $\operatorname{Th}(R)$. The sentence $\forall x \,\, \forall y \, (x\times y = y\times x)$ is an element of the set $\operatorname{Th}(\mathbb Z)$, but not of the set $\operatorname{Th}(M_2(\mathbb R))$.
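For a *finite* structure, a sentence like commutativity can even be decided by brute force, checking every pair of elements. A hedged Python sketch, testing $\forall x \,\, \forall y \, (x\times y = y\times x)$ in the finite ring $\mathbb{Z}/6\mathbb{Z}$ and in the $2\times 2$ matrices over $\mathbb{Z}/2\mathbb{Z}$ (all the names here are ours):

```python
from itertools import product

# Brute-force check: is multiplication commutative in Z/6Z?
n = 6
comm_z6 = all((x * y) % n == (y * x) % n for x in range(n) for y in range(n))
print(comm_z6)   # True

# Same check for 2x2 matrices over Z/2Z.
def matmul(A, B, m=2):
    """Multiply two 2x2 matrices (tuples of tuples), entries mod m."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % m
                       for j in range(2)) for i in range(2))

mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
comm_mat = all(matmul(A, B) == matmul(B, A) for A in mats for B in mats)
print(comm_mat)  # False
```

For infinite rings like $\mathbb Z$ no such exhaustive search is possible, which is part of why theories are subtle objects.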

While first-order statements can capture many of the important facts about a mathematical object, in this case a ring, they do not typically capture *everything*. We think of two rings $(R_1,+_1,\times_1)$ and $(R_2,+_2,\times_2)$ as being *the same* if they are isomorphic, in other words if there exists a bijection $f\colon R_1 \to R_2$ which behaves well with operations, ie

- $\forall x, y \in R_1$, $f(x+_1y) = f(x)+_2f(y)$,
- $\forall x,y \in R_1$, $f(x\times_1 y) = f(x)\times_2 f(y)$.

Sometimes rings with the same theory are isomorphic: this is the case if we look at the complex numbers $\mathbb{C}$ and their theory $\operatorname{Th}(\mathbb{C})$. If another ring has the same theory (and the same cardinality, since we need a bijection), then it is isomorphic to $\mathbb{C}$. In a way, the theory knows everything there is to know. This does not hold true for all theories across different kinds of mathematical object. For example, there are fields that share the same theory as $\mathbb R$, but are not isomorphic to it, since they admit infinite elements (namely, the so-called *Robinson hyperreals*).

## Robinson hyperreals

Hyperreal numbers are a rigorous formulation of infinitesimal numbers. The set of hyperreals is denoted $\mathbb{R}^*$; it extends the real numbers, as it includes both infinitesimals and infinite numbers. A number $\varepsilon \in \mathbb{R}^*$ is *infinitesimal* if it is smaller than every positive real number and larger than every negative real number. An *infinite* number is any element of $\mathbb{R}^*$ that is either greater than every real number or less than every real number. Robinson showed that the hyperreals could be rigorously defined using model theory, and they form the basis of non-standard analysis.

There are many reasons why $\operatorname{Th}(\mathbb{R})$ and $\operatorname{Th}(\mathbb{C})$ are very different in their behaviour, but there is a deep underlying one: in $\mathbb R$ we can define an ordering (ie, an element is non-negative if and only if it is a square), while in $\mathbb C$ we cannot. In this sense, $\mathbb C$ is less ‘complicated’, as it exhibits less structure and fewer patterns than $\mathbb R$.

The upshot is: the more complicated a ring $R$ is, the less its theory $\operatorname{Th}(R)$ will know about it. This leads us to a situation where rings with the same theory might not be isomorphic, especially if the rings are complicated, containing many patterns and structures.

## The hunt for forbidden patterns

Enter Shelah. Saharon Shelah was born in 1945, and graduated from Tel Aviv University with a maths degree in 1964. He was awarded his PhD at the Hebrew University of Jerusalem in 1969, for research focusing on stable theories.

The story of modern model theory begins in the 1960s, when Shelah proposed a method to map out theories based on how complicated they are, building a tentative map of the universe of mathematical theories.

In the case of rings, we think of $\mathbb{C}$ as being less complicated than $\mathbb{R}$, since the theory of $\mathbb{C}$ describes it very well (even up to isomorphism), whereas the theory of $\mathbb{R}$ does not.

Among all rings, some more complicated, some less, there is one that sits at the borders of this mapping, model theorists’ own *hic sunt leones*: the ring of integers $\mathbb Z$.

Long-time fans of true crime know that, whenever some deeply unsettling truth about somebody is unearthed, there will always be some neighbour commenting “Oh, but they were so kind… so polite.” No matter if the person in question has committed several gruesome murders, there is a high chance that those who knew them on a daily basis will say that they would have never expected it. While the community might disagree on which objects are easy and which ones aren’t, there is a structure that nobody would expect to create trouble: $\mathbb{Z}$, the ring of integers. After all, Kronecker wrote “God created the integers.” Model theorists, however, are able to see beyond the facade of this seemingly harmless mathematical structure. To understand why, we have to go back some 40 years before Shelah’s work.

The breaking point in the idyllic picture of the integers is a procedure known as Gödelisation. It was introduced, as the name suggests, by Kurt Gödel in an effort to prove his famous *incompleteness* theorems. If you think back to the game metaphor, Gödel wanted to argue that if a game was *complicated* enough, then there would always be a move that is neither allowed nor prohibited by the rules of the game.

A set of axioms is considered ‘effective’ if it can be listed by an algorithm. In the 1930s, Gödel proved that if an effective set of axioms was able to perform the arithmetic of the integers $\mathbb{Z}$, then it would not be able to prove or disprove all possible statements about $\mathbb{Z}$. By ‘able to perform the arithmetic’, we mean that it must be able to reconstruct within itself the operations $+$ and $\times$ of $\mathbb{Z}$. While his result is impressive in itself, our focus today is rather on his technique. To prove his theorems, he created a translation procedure that transformed first-order statements in the language of rings (like $2=2$, or $n+1 = m+1 \Rightarrow n = m$, or ‘every polynomial of degree $2$ has a root’) into (very big) natural numbers. For a given first-order statement, this big number is known as its *Gödel number*. For instance, the first-order statement $x = y \Rightarrow y = x$ is encoded with the natural number $120061121032061062032121061120$.
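The number quoted above comes from a particularly simple encoding: write each character’s ASCII code as three digits and concatenate the results. (Gödel’s original scheme used prime factorisations instead, but any injective encoding does the job.) A Python sketch:

```python
# Toy Godel numbering: write each character's ASCII code as
# three digits, then concatenate and read off one big integer.
def godel_number(statement):
    return int("".join(f"{ord(c):03d}" for c in statement))

print(godel_number("x=y => y=x"))
# 120061121032061062032121061120
```

Decoding is just as mechanical: chop the digits back into threes and read off the ASCII codes, which is what makes arithmetic on Gödel numbers meaningful.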

The encoding was done in a way such that whether a statement was provable or not (whether there was a proof of it or not) was a matter of the elementary arithmetic properties of its Gödel number. This means that the provability of a certain statement can be checked by verifying arithmetical identities—just like we used to do in elementary school. Think of questions like ‘is $5 \times 3 = 15$?’, or ‘is $1289$ divisible by $29$?’. This seems easy, perhaps; but try it with dozens and dozens of digits, and soon enough you’ll either grow tired or ask a computer to do it for you (or both).
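These are exactly the checks a computer excels at, and Python’s arbitrary-precision integers mean the same one-liners work no matter how many digits are involved:

```python
# Elementary-school checks, done by machine.
print(5 * 3 == 15)       # True
print(1289 % 29 == 0)    # False: 1289 is not divisible by 29

# The same operations work just as well on enormous numbers,
# since Python integers have arbitrary precision.
big = 7 ** 10000
print(big % 7 == 0)      # True
print(len(str(big)))     # how many digits 'big' has
```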

Gödelisation allows us to encode *entire mathematical objects* and theories inside the integers. Complicated theorems, emerging from all over mathematics, reduce to simply checking whether two integers divide each other—a procedure that a computer can do, given enough time. The mathematical fact that the polynomial $X^7+X^2+3$ has a root in $\mathbb R$ can be translated into an equality between (very big) integers, which we can compute in $\mathbb Z$.

This is a remarkable fact. Many mathematical statements can be checked by a computer performing basic arithmetic, albeit with numbers which have billions and billions of digits. It is also a massive problem for the theory of $\mathbb{Z}$. Gödel’s encoding means that the seemingly elementary operations of $\mathbb Z$ can understand patterns which might very well be infinitely complex, even beyond our imagination and comprehension. No matter how hard we try, $\mathbb{Z}$ will always be too complicated for us to understand with first-order logic.

There is a fine line between *complex enough to be interesting* and *too complex to be dealt with*. The integers sport an impossible, almost cosmic level of expressivity. To many mathematicians, $\mathbb Z$ is the simplest object we can think of; the integers are, after all, not too far from just putting one pebble after another. They provide the basis for many of our grandiose castles of ideas and theories, the foundations of many explorations into unknown mathematical lands. To model theorists, however, they reveal a different face. A twisted expression, a creepy smile. The source of all darkness, the original Pandora’s box. We don’t talk about them, hoping that in our cautious adventures we will never find ourselves alone with the ring of integers in a dimly lit alley.