Over two thousand years ago in ancient Greece, the philosopher Zeno posed a series of thought-provoking puzzles, later recorded and analysed by Aristotle, which are today known as Zeno’s paradoxes. The most famous example is a race, known as Achilles and the tortoise. The setup is as follows:
Achilles and a tortoise are having a race, but (in the spirit of fairness) the tortoise is given a headstart. Then, Zeno argued, Achilles will definitely lose: he can never overtake the tortoise, since by the time he reaches the point where the tortoise started, the tortoise will have moved on, so the tortoise must always hold a lead.
Since there are infinitely many such points to be crossed, Achilles should not be able to catch the tortoise in finite time.
This argument is obviously flawed, and to see why we can consider the point of view of the tortoise. Suppose the tortoise starts a distance $D$ ahead, and that Achilles and the tortoise run at constant speeds $v_A$ and $v_T$ respectively (with $v_A>v_T$). From the tortoise’s perspective, the problem is equivalent to Achilles heading towards a stationary tortoise at a speed equal to the difference between their speeds in the first version of the problem.
Since $\text{distance} = \text{speed}\times\text{time}$, we can say that after time $t$, Achilles has travelled a distance $v_At$ and the tortoise $v_Tt$. The distance between them is
\begin{equation*}
D-v_At+v_Tt = D - (v_A-v_T)t,
\end{equation*}
and so Achilles catches the tortoise—ie the distance between them is $0$—when the time, $t$, is equal to $D/(v_A-v_T)$.
There is another way to see this problem, one that better suits the purpose of this article and directly tackles the problem Zeno posed. To get to where the tortoise was at the start of the race, Achilles is going to travel the distance $D$ in time $t_1 = D/v_A$. By that time the tortoise will have travelled a distance equal to
\begin{equation*}
D_1 = v_T t_1,
\end{equation*}
which is the new distance between them.
Travelling this distance will take Achilles time
\begin{equation*}
t_2 = \frac{D_1}{v_A} = \left(\frac{v_T}{v_A}\right)t_1.
\end{equation*}
Then the tortoise will have travelled a distance
\begin{equation*}
D_2 = v_Tt_2 = \left(\frac{v_T^2}{v_A}\right)t_1
\end{equation*}
and Achilles will cover this distance after time $t_3 = D_2/v_A = (v_T/v_A)^2t_1$.
Repeating this process $k$ times we notice that the distance between Achilles and the tortoise is
\begin{align*}
D_k &= v_T\left(\frac{v_T}{v_A}\right)^{k-1}t_1 \\ &= \left(\frac{v_T}{v_A}\right)^kD.
\end{align*}
Summing up all these distances gives how far Achilles has to travel before catching the tortoise: if we call this $D_A$, it’s
\begin{equation*}
D_A =
\lim_{n\to\infty}\sum_{k=0}^{n}D_k =\sum_{k=0}^{\infty}D_k = D \sum_{k=0}^{\infty}\left(\frac{v_T}{v_A}\right)^k.
\end{equation*}
This is probably the simplest example of a convergent infinite sum. In particular, it belongs to a class of sums called geometric series, which are sums of the form
\begin{equation*}
\sum_{k=0}^{n}a^k.
\end{equation*}
If $|a| < 1$, the sum tends to $(1-a)^{-1}$ as $n$ tends to $\infty$ and diverges otherwise, meaning that it either goes to $\pm\infty$, or a limiting value just doesn't exist.
To see what ‘doesn’t exist’ means, consider for example what happens if $a=-1$: we get
\begin{equation*}
1 - 1 + 1 - 1 + 1 - 1 + \cdots + (-1)^n
\end{equation*}
and the sum oscillates between $0$ (if $n$ is odd) and $1$ (if $n$ is even).
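To see both behaviours numerically, here is a minimal Python sketch (the sample values of $a$ are arbitrary choices):

```python
# Partial sums of the geometric series sum_{k=0}^{n} a^k
def geometric_partial_sums(a, n):
    total, sums = 0.0, []
    for k in range(n + 1):
        total += a**k
        sums.append(total)
    return sums

print(geometric_partial_sums(0.5, 10))  # settles towards 1/(1 - 0.5) = 2
print(geometric_partial_sums(-1, 6))    # oscillates: 1, 0, 1, 0, ...
```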
In our case, $|a|=|v_T/v_A|<1$ so
\begin{equation*}
D_A = \frac{D}{1-v_T/v_A} = \frac{D v_A}{v_A-v_T},
\end{equation*}
which, when divided by the speed of Achilles, $v_A$, gives exactly the time we found before. So Achilles and the tortoise will meet after Achilles has crossed a distance $D_A$ in time $t_A = D_A/v_A$.
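As a concrete check, the short Python sketch below (with hypothetical speeds and headstart of my choosing) sums the distances $D_k$ term by term and compares with the closed-form distance and catch-up time:

```python
# Hypothetical speeds and headstart (illustrative values)
v_A, v_T, D = 10.0, 1.0, 100.0

# Sum the distances D_k = (v_T/v_A)^k * D term by term
D_A = sum((v_T / v_A)**k * D for k in range(200))

print(D_A)                         # ~111.111, total distance Achilles runs
print(D * v_A / (v_A - v_T))       # closed form gives the same value
print(D_A / v_A, D / (v_A - v_T))  # both give the meeting time ~11.111
```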
Thousands of years later, Leonhard Euler was thinking about evaluating the limit of
\begin{equation*}
\sum_{k=1}^{n}\frac{1}{k^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots + \frac{1}{n^2}
\end{equation*}
as $n\to\infty$. This is known as the Basel problem, after Euler’s hometown. The sum is convergent and equals $\pi^2/6$, as Euler eventually proved in 1734 (a quick numerical check appears after the quote below). He was one of the first people to study formal sums—which we will try to define shortly—and to concretely develop the related theory. In his 1760 work De seriebus divergentibus he says
Whenever an infinite series is obtained as the development of some closed
expression, it may be used in mathematical operations as the equivalent of
that expression, even for values of the variable for which the series diverges.
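As promised, here is a quick numerical check in Python (a minimal sketch; the cutoff $n=100\,000$ is arbitrary) showing the partial sums creeping up to $\pi^2/6$:

```python
import math

# Partial sum of the Basel series up to n = 100000
partial = sum(1 / k**2 for k in range(1, 100001))

print(partial)         # 1.64492..., still a little short
print(math.pi**2 / 6)  # 1.64493...
```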
So let’s think about series which diverge. One way a series can diverge is simply by its terms getting bigger. One such example is the sum
\begin{equation*}
\sum_{k=1}^{n}k = 1 + 2 + 3 + 4 + \cdots + n = \frac{n(n+1)}{2},
\end{equation*}
the limit of which when $n\to\infty$ is, of course, infinite.
But now let’s think about the harmonic series,
\begin{equation*}
\sum_{k=1}^{n}\frac{1}{k} = 1 + \frac12 + \frac13 + \frac14 + \cdots + \frac1n.
\end{equation*}
This time, although the terms themselves get smaller and smaller, the series still diverges as $n\to\infty$. But we can still describe the sum and its behaviour. It turns out that
\begin{equation*}
\sum_{k=1}^{n}\frac1k = \ln(n)+\gamma + O(1/n),
\end{equation*}
where $\gamma$ is the Euler–Mascheroni constant, approximately equal to 0.5772, and $O(1/n)$ means ‘something no greater than a constant times $1/n$’. We can compare the sum with its approximation directly.
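Here is a short Python sketch of that comparison (the sample values of $n$ are arbitrary choices):

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

for n in (10, 1000, 100000):
    h_n = sum(1 / k for k in range(1, n + 1))
    print(n, h_n, math.log(n) + gamma, h_n - (math.log(n) + gamma))
# The difference shrinks roughly like 1/(2n), consistent with the O(1/n) term.
```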
Historically, the development of the seemingly unconventional theory of divergent sums has been contentious: Abel, who at one point made contributions to the area, described them as shameful, calling them “an invention of the devil”. Later contributions include the work of Ramanujan and Hardy in the 20th century, about which more information can be found in the latter’s book, Divergent Series.
More recently, a video published on the YouTube channel Numberphile attempted to deduce the ‘equality’
\begin{equation*}
1+2+3+4+\cdots=-\frac{1}{12}.
\end{equation*}
This video sparked great controversy, and it illustrates one of the dangers of dealing with divergent sums. One culprit here is the Riemann zeta function, which is defined for $\operatorname{Re}(s)>1$ as
\begin{equation*}
\zeta(s)=\sum_{k=1}^{\infty}\frac{1}{k^s}.
\end{equation*}
When functions are only defined on certain domains, it is sometimes possible to ‘analytically continue’ them outside of these original domains. Doing so here and evaluating at $s=-1$ gives $\zeta(-1)=-1/12$. The other culprit is matrix summation—another method of assigning a value to a divergent sum. By sheer (though neat) coincidence, these methods, such as the Cesàro summation method used in the video, also give $-1/12$!
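If you want to check the continuation yourself, arbitrary-precision libraries such as mpmath implement the analytically continued $\zeta$; a minimal sketch, assuming mpmath is installed:

```python
# mpmath's zeta implements the analytic continuation
from mpmath import zeta

print(zeta(2))   # 1.64493..., agrees with the series (Re(s) > 1)
print(zeta(-1))  # -0.08333... = -1/12, where the series itself diverges
```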
The main problem is this: at this point we no longer have an actual sum in the traditional sense.
Instead, we have a divergent sum which is formal, and by that, we mean that it is a symbol that denotes the addition of some quantities, regardless of whether it is convergent or not: it simply has the form of a sum.
These sums are not just naive mathematical inventions; rather, they show up quite frequently in science and technology, and they can give us good approximations, since they often emerge from standard manipulations such as (as we’ll see) integration by parts.
Applications in physics can be found in the areas of quantum field theory and quantum electrodynamics. In fact, formal series derived from perturbation theory can give very accurate measurements of physical phenomena like the Stark effect and the Zeeman effect, which characterise changes in the spectral lines of atoms under the influence of an external electric and magnetic field respectively.
In 1952, Freeman Dyson gave an interesting physical explanation of the divergence of formal series in quantum electrodynamics: he contrasted the stability of the physical system with the spontaneous, explosive birth of particles that would occur in a hypothetical scenario where the corresponding series converges. Essentially he argues that divergence is, in some sense, inherent in these types of systems; otherwise we would have systems in pathological states. His paper from that year in Physical Review contains more information.
Euler’s motivation
Sometimes, assigning finite values (constants or functions) to formal sums in this way can be useful. The fact that the sums sometimes diverge does not make much difference in the end, if certain conditions are met.
An example that follows Euler’s line of thought as described earlier emerges when trying to find an explicit formula for the function
\begin{equation*}
\operatorname{Ei}(x):=\int_{-\infty}^{x}\frac{\mathrm{e}^t}{t}\, \mathrm{d} t,
\end{equation*}
for which repeated integration by parts yields
\begin{align*}
\operatorname{Ei}(x) = \int_{-\infty}^x\frac{\mathrm{e}^t}{t}\, \mathrm{d} t =& \left[\frac{\mathrm{e}^t}{t}\right]_{-\infty}^{x}-\int_{-\infty}^{x}\mathrm{e}^{t}\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{1}{t}\right) \mathrm{d} t \\
=& \left[\frac{\mathrm{e}^x}{x} - 0\right] + \int_{-\infty}^{x}\frac{\mathrm{e}^{t}}{t^2}\, \mathrm{d} t \\
= \cdots =& \frac{\mathrm{e}^{x}}{x}\sum_{k=0}^{n-1}\frac{k!}{x^{k}}+O\left(\frac{\mathrm{e}^x}{x^{n+1}}\right),
\end{align*}
where we’ve used the fact that $\mathrm{e}^t/t\to 0$ as $t\to-\infty$ on the second line. Dividing through by $\mathrm{e}^x$ then allows us to say
\begin{align*}
\mathrm{e}^{-x}\operatorname{Ei}(x) &= \sum_{k=0}^{n-1}\frac{k!}{x^{k+1}}+O\left(\frac{1}{x^{n+1}}\right)\\
&\sim\sum_{k=0}^{\infty}\frac{k!}{x^{k+1}}\text{ as }x\to\infty.
\end{align*}
Now swap $x$ for $-1/x$ in this equation:
\begin{align*}
\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)&\sim\sum_{k=0}^{\infty}k!(-x)^{k+1}\text{ as }x\to0\\
&=-x+x^2-2x^3+6x^4-\cdots.
\end{align*}
This series diverges for every fixed $x\neq0$ as we add more and more terms, but the partial (truncated) sums still approximate $\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)$ better and better as $x\to0$.
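Here is a sketch of this behaviour in Python, using mpmath’s exponential integral ei (the value of $x$ and the truncation orders are illustrative choices):

```python
from math import factorial
from mpmath import exp, ei

def truncation(x, n):
    # Partial sum of the formal series sum_{k>=0} k! (-x)^(k+1)
    return sum(factorial(k) * (-x)**(k + 1) for k in range(n + 1))

x = 0.1
exact = exp(1 / x) * ei(-1 / x)  # e^(1/x) Ei(-1/x)
for n in (1, 5, 10, 30):
    print(n, truncation(x, n), exact)
# Near n ~ 1/x the truncation error is smallest; for much larger n the
# factorials take over and the partial sums blow up.
```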
Euler noticed that $\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)$, in its original integral form, solves the equation
\begin{equation*}
x^2\frac{\mathrm{d}y}{\mathrm{d}x}+y=-x
\end{equation*}
(for $x\neq0$). Now here’s the thing: the formal sum
\begin{equation*}
\sum_{k=0}^{\infty}k!(-x)^{k+1},
\end{equation*}
to which $\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)$ is asymptotic as $x\to0$, also (formally) ‘solves’ the same equation for any $x$.
This solution is not unique, and in fact, adding any constant multiple of $\mathrm{e}^{1/x}$ to $\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)$ would still solve the equation; and the resulting solution would still be asymptotic to the same formal sum.
However, the coefficients of the powers of $x$ are unique. So there may be something in the formal sum that can give away an actual solution of the equation (which is often difficult to find via standard methods—unlike formal solutions like the one above, which are easier to compute), at least up to some class of solutions and under certain conditions. This does indeed seem to be the case, at least for certain classes of formal sums—those that attain ‘at most’ a factorial-over-power rate of divergence.
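This formal claim is easy to check with a computer algebra system; below is a sympy sketch (the truncation order $N$ is my choice) confirming that truncating the sum leaves only a single higher-order residual term in $x^2\,\mathrm{d}y/\mathrm{d}x+y=-x$:

```python
import sympy as sp

x = sp.symbols('x')
N = 6  # truncation order (arbitrary choice)

# Truncation of the formal sum  sum_{k>=0} k! (-x)^(k+1)
y = sum(sp.factorial(k) * (-x)**(k + 1) for k in range(N + 1))

residual = sp.expand(x**2 * sp.diff(y, x) + y + x)
print(residual)  # a single leftover term of order x**(N + 2)
```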
Solving a differential equation
To elaborate further, let’s consider one more example, the differential equation
\begin{equation}
-\frac{\mathrm{d}y}{\mathrm{d}x}+y = \frac{1}{x}, \quad \text{where } y(x)\to 0 \text{ as }x \to \infty.
\label{fs1}
\tag{*}
\end{equation}
Thinking about the boundary condition there, we could substitute in the (formal) sum of powers of $x$ which decay away as $x\to\infty$,
\begin{equation*}
y(x) = \sum_{k=0}^\infty a_k x^{-k-1} = \frac{a_0}{x} + \frac{a_1}{x^2} + \frac{a_2}{x^3} + \cdots.
\end{equation*}
Doing so, we get
\begin{align*}
-\frac{\mathrm{d}}{\mathrm{d}x}\left[\sum_{k=0}^{\infty}a_kx^{-k-1}\right]+\sum_{k=0}^{\infty}a_kx^{-k-1} &= \frac{1}{x}
\\ \implies \sum_{k=0}^{\infty}(k+1)a_kx^{-k-2}+\sum_{k=0}^{\infty}a_kx^{-k-1} &=
\frac{1}{x} \\
\implies a_0x^{-1}+\sum_{k=0}^{\infty}\big[(k+1)a_k+a_{k+1}\big] x^{-k-2} &=\frac{1}{x}.
\end{align*}
Then for our differential equation to be satisfied, the coefficients have to satisfy
\begin{equation*}
a_0=1 \qquad\text{and}\qquad (k+1)a_k+a_{k+1}=0\implies a_{k+1} = -(k+1)a_k,
\end{equation*}
which recursively means that $a_k=(-1)^kk!$ and our formal sum solution is
\begin{equation*}
y(x) = \sum_{k=0}^{\infty}(-1)^k k!x^{-k-1}.
\end{equation*}
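As a sanity check, a tiny Python sketch (the number of coefficients checked is arbitrary) confirms that the recursion produces exactly these values:

```python
from math import factorial

a = 1  # a_0 = 1
for k in range(8):
    assert a == (-1)**k * factorial(k)
    a = -(k + 1) * a  # the recursion a_{k+1} = -(k+1) a_k
print("recursion matches (-1)^k k!")
```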
Now that we have a sum that solves the equation formally, we can obtain an actual solution, assuming that it is asymptotic to the sum we found as $x\to\infty$, by using the repeated integration by parts result
\begin{equation*}
\int_{0}^{\infty}\mathrm{e}^{-xs} s^k \,\mathrm{d} s = k! x^{-k-1} \text{ for }x>0,
\end{equation*}
which implies
\begin{align*}
y(x) &= \sum_{k=0}^{\infty}(-1)^{k}k!x^{-k-1} \\
&= \sum_{k=0}^{\infty}(-1)^k\int_{0}^{\infty}\mathrm{e}^{-xs}s^k \,\mathrm{d} s \\
&= \int_{0}^{\infty}\mathrm{e}^{-xs}\sum_{k=0}^{\infty}(-1)^ks^k \,\mathrm{d} s.
\end{align*}
How is that helpful? Well, for $|s|<1$ we know that
\begin{equation*}
\sum_{k=0}^{\infty}(-1)^ks^{k} = 1-s+s^2-\cdots = \frac{1}{1+s},
\end{equation*}
by the formula for geometric series for $a=-s$ from our discussion of Achilles and the tortoise.
This is a perfectly nice function on the range of integration, having all the fine properties that we need in order to define
\begin{equation*}
y(x)=\int_{0}^{\infty}\frac{\mathrm{e}^{-xs}}{1+s}\,\mathrm{d}s,
\end{equation*}
which is the solution to our differential equation, \eqref{fs1}, that we are looking for, and is also asymptotic to the formal sum $\sum_{k=0}^{\infty}(-1)^k k!x^{-k-1}$ as $x\to\infty$.
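We can observe this asymptotic agreement numerically; the sketch below evaluates the integral with scipy’s quad (the value $x=10$ and the truncation orders are illustrative choices):

```python
from math import factorial
import numpy as np
from scipy.integrate import quad

x = 10.0  # illustrative; the asymptotics are for large x
y_exact, _ = quad(lambda s: np.exp(-x * s) / (1 + s), 0, np.inf)

# Truncations of the formal sum  sum_{k>=0} (-1)^k k! x^(-k-1)
for n in (2, 5, 10):
    y_formal = sum((-1)**k * factorial(k) * x**(-k - 1) for k in range(n + 1))
    print(n, y_formal, y_exact)
# The truncation error is smallest around n ~ x, as with the series above.
```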
Notice that any linear combination of these formal sums arises from the same linear combination of the corresponding convergent (for $|s|<1$) series $1-s+s^2-s^3+\cdots$ inside the integral. In conclusion, it is possible to obtain a solution in closed form to a differential equation just by finding a formal power series to which the solution is asymptotic.
Not just reinventing the wheel
The example above is, of course, quite simple, and finding a solution in the way we just described might look like reinventing the wheel using modern-era technology. However, the true potential of the method can be seen in nonlinear equations, for which we generally cannot find solutions in standard ways. In my own research I used formal sums to study an equation with applications in fluid mechanics.
In one of the first talks I gave about this topic, I remember several of my peers tilting their heads in distrust when I mentioned that the emerging sums are divergent. This reaction was almost expected, and for obvious reasons. It took an hour-long talk and several questions to convince them that the mathematics involved is genuine.
Controversial as it may sound at first sight, this concept is even more realistic than imaginary numbers, which are simply symbols with properties that we just accept and use. The idea is that, although imaginary, these numbers can demonstrably give us, when interpreted properly, very real results, such as solutions to differential equations like
\begin{equation*}
\frac{\mathrm{d}^2y}{\mathrm{d}x^2}+y=0.
\end{equation*}
The same is true for formal sums too.
Why do we assign actual numbers to formal sums in the first place? Because they are sometimes easier to work with, and they can lead to interesting results (such as solutions to differential equations) if interpreted properly. The underlying mechanisms should be well-defined and well-understood mathematical processes, in order to avoid any serious mistakes when working with such sums. An example of erroneous use of such sums is Henri Poincaré’s attempt to solve the three-body problem in order to win the King Oscar prize in 1889. He did, however, manage in the following decade to spark the development of chaos theory. But that’s for another time.