The mathematical con artist

Some summations seem strangely slippery…



You’re walking down the street one fine morning, and you see that someone has set up a little table and gathered a small crowd. It’s that sneaky trickster Kyle again. His inspired variants on the shell game have brought him something of a following amongst the local mathematicians. Today, though, he’s got something else entirely. Stepping closer, you see that he’s written out the alternating harmonic series…
$$
1 - \frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6} \dots
$$ You hear his patter. “This, ladies and gentlemen, is a perfectly ordinary convergent series. Now, I will pay one hundred pounds — no tricks — to anyone who can tell me to what, exactly, this series converges. Don’t think I’m trying to mess you around with a divergent series. If any one of you fine fellows can show that it sums to infinity, they will receive the one hundred pounds just the same as if they showed that it converged. Just five pounds a guess. Who’s first?”

Is Kyle finally going straight, or has he got something up his sleeve?

Some definitions

Before we can determine whether to give Kyle our money, it’s worth reminding ourselves of what, exactly, a series is. A sequence is a list of numbers in some order, usually written $a_1, a_2, a_3 \dots$ For example, we might have $a_1=1, a_2=2, a_3=3 \dots$, which we can write more succinctly as $a_n=n$. One sequence which we’re going to care a lot about is the harmonic sequence, $a_n=1/n$. The first few terms of this sequence are $1, 1/2, 1/3 \dots$ We’ve also got to know about the alternating harmonic sequence:
$$
a_n= (-1)^{n+1}/n.
$$ This starts out with
$$
1, -1/2, 1/3, -1/4 \dots
$$
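If you want to see those terms pop out of the formula, here’s a one-line check in Python (just a sketch; any language would do):

```python
# First few terms of the alternating harmonic sequence a_n = (-1)^(n+1)/n.
print([(-1) ** (n + 1) / n for n in range(1, 6)])
# [1.0, -0.5, 0.3333333333333333, -0.25, 0.2]
```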
Great. A series is nothing more than a sequence with plus signs in the middle. The sequence $a_n=n$ gives us the series $1+2+3+4+5 \dots$ Some series are called convergent. That means that if you sum all of the infinitely many terms $a_n$, you get a nicely defined, finite number. Obviously, $1+2+3+4+5 \dots$ is not convergent; we say it is divergent. One example of a convergent series is the one generated by the sequence $a_n= 1 / 2^n $. This is
$$
\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16} \dots
$$ Play around with this for a while; you should see that, as you keep adding more terms, the sum gets closer and closer to 1. We say that the series converges to 1.
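Here’s a minimal Python sketch of that experiment, if you’d rather let a computer do the adding:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... creep up towards 1.
total = 0.0
for n in range(1, 21):
    total += 1 / 2**n
    print(n, total)
# By n = 20 the sum is within a millionth of 1.
```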

The last thing to mention is the idea of a partial sum. The $n$-th partial sum of a series is just the number you get if you add up its first $n$ terms. The fifth partial sum of the $a_n=n$ series is $1+2+3+4+5=15$. The partial sums of the series $a_n=1/n$ are very special. They’re called the harmonic numbers. We write them as $H_n=1+1/2+1/3+\dots+1/n$.
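Here’s a quick sketch that computes the first few harmonic numbers exactly, using Python’s built-in fractions module so that no rounding creeps in (the function name is mine, purely for illustration):

```python
from fractions import Fraction

def harmonic(n):
    """The n-th harmonic number H_n = 1 + 1/2 + ... + 1/n, as an exact fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

for n in range(1, 6):
    print(n, harmonic(n))
# Prints 1, 3/2, 11/6, 25/12, 137/60.
```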

The harmonic series

With that taken care of, let’s go back to what Kyle was doing. His series looks very complicated, and just trying the first few partial sums doesn’t get us anywhere. They go $1, 1/2, 5/6, 7/12, 47/60 \dots $ and there doesn’t seem to be a pattern. So, let’s make things a bit easier on ourselves and consider what you get when you take out the minus signs. This is just the harmonic series.
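You can reproduce those partial sums exactly with the same fractions trick:

```python
from fractions import Fraction

# Running partial sums of the alternating harmonic series.
total = Fraction(0)
for n in range(1, 6):
    total += Fraction((-1) ** (n + 1), n)  # terms 1, -1/2, 1/3, -1/4, 1/5
    print(total)
# Prints 1, 1/2, 5/6, 7/12, 47/60 -- no obvious pattern in sight.
```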

The harmonic series is so called since it was first explored in the context of vibrating strings. Groovy, baby. Image: Flickr user Michael Muller, CC BY-NC-SA 2.0.

Now, we want to know how the harmonic series behaves. To start with, I think we can all agree that the series $1/2 + 1/2 + 1/2 + 1/2 + \dots$ diverges. We need one piece of logic for this, and that is that if every term of a series is greater than or equal to the corresponding term of a divergent series of positive terms, the first series must also diverge. This follows quite easily if you think about it: a divergent series of positive terms grows to infinity, so a series that grows at least as fast must also grow to infinity and be divergent.

Let’s try to sum the harmonic series. We start with $1$. Then we add $1/2$. Then we add not one but two terms: $1/3 + 1/4$. Then we add four: $1/5 + 1/6 + 1/7 + 1/8$. We keep going like this to infinity. We’ve now broken the harmonic series down into groups of first one, then two, then four, then eight fractions, and so on. Here’s the trick: every fraction in the group of length $2^n$ is greater than or equal to $1/2^{n+1}$. Both $1/3$ and $1/4$ are greater than or equal to $1/2^{1 + 1} = 1/4$. All of the fractions in the group of four are greater than or equal to the fraction $1/2^3 = 1/8$. In general, the harmonic series is divided into groups of $2^n$ fractions, all of which are greater than or equal to $1/2^{n+1}$.

The next thing to do is to find a lower bound for the value of the sum of each group. If you have $k$ numbers each greater than or equal to $p$, their sum must be greater than or equal to $k \times p$. So, the sum of each group must be greater than or equal to
$$
2^n \times \frac{1}{2^{n+1}} = \frac{1}{2}.
$$ This means that the harmonic series can be regrouped as $1+g_1+g_2+g_3 \dots$, where each group sum $g_n$ is greater than or equal to $1/2$. Hold on a minute. The series $1/2 + 1/2 + 1/2 + \dots$ diverges, and the harmonic series is, group for group, greater than or equal to $1/2 + 1/2 + 1/2 + \dots$, so it must diverge as well.
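The argument also predicts that the first $2^n$ terms of the harmonic series add up to at least $1+n/2$ (the leading $1$, plus $n$ groups each worth at least $1/2$), which is easy to check numerically; a sketch:

```python
def partial_sum(n):
    """Sum of the first n terms of the harmonic series."""
    return sum(1 / k for k in range(1, n + 1))

for n in range(1, 11):
    # Partial sum of the first 2^n terms, against the bound 1 + n/2.
    print(n, round(partial_sum(2**n), 4), ">=", 1 + n / 2)
# The partial sums outpace the bound -- and the bound grows without limit.
```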

The alternating harmonic series

The alternating harmonic series is a different kettle of fish altogether. It’s what we call conditionally convergent; that is, it only converges if you leave the minus signs in: strip them out and the resulting series of absolute values (the harmonic series) diverges. As we’ve just shown, $1+1/2+1/3+\dots$ goes off to infinity, but $1-1/2+1/3-1/4+\dots$ doesn’t. As a matter of fact, it converges to $\ln{2}\approx0.693$. The easiest proof of this uses enough calculus that I’m not going to go into it here, but you can look it up quite easily online if you understand Maclaurin series.
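You don’t need the calculus to watch it happen, though; here’s a rough numerical check in Python:

```python
import math

# Sum the first 100,000 terms of the alternating harmonic series.
total = 0.0
for n in range(1, 100001):
    total += (-1) ** (n + 1) / n
print(total)        # 0.69314...
print(math.log(2))  # 0.69314718...
# The partial sum agrees with ln 2 to about five decimal places.
```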

The partial sums of the alternating harmonic series (black lines) showing convergence to $\ln(2)$ (red line). Image: public domain.

Is Kyle off the hook? Not quite. One of the important things about a convergent series is that, for it to make any sense, it must converge no matter which order you add the terms in. Normal addition does this (for example $1+2+3=3+2+1$) so infinite addition should do it as well. With that in mind, let’s try rearranging the series a little bit. Instead of adding one negative term after each positive, we’ll add two, to get
$$
1-\frac{1}{2}-\frac{1}{4}+\frac{1}{3}-\frac{1}{6}-\frac{1}{8}+\frac{1}{5}-\frac{1}{10}-\frac{1}{12} \dots
$$ This obviously contains all the terms in the original series, with their original signs, so unless Kyle is up to something in a big way it should converge to the same number. Let’s see if it does. Clean it up a bit by combining each positive term with the first of the two negative terms that follow it, to get $$
(1-\frac{1}{2})-\frac{1}{4}+(\frac{1}{3}-\frac{1}{6})-\frac{1}{8}+(\frac{1}{5}-\frac{1}{10})-\frac{1}{12} \dots
$$ This simplifies to
$$
\frac{1}{2}-\frac{1}{4}+\frac{1}{6}-\frac{1}{8}+\frac{1}{10}-\frac{1}{12} \dots =\frac{1}{2}\left(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6} \dots\right) =\frac{1}{2}\ln{2}.
$$
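Again, this is easy to test numerically. A sketch that generates the terms in the rearranged order, one positive followed by two negatives:

```python
import math

total = 0.0
for k in range(1, 10001):
    total += 1 / (2 * k - 1)  # positive terms: 1, 1/3, 1/5, ...
    total -= 1 / (4 * k - 2)  # first negative of each pair: 1/2, 1/6, 1/10, ...
    total -= 1 / (4 * k)      # second negative: 1/4, 1/8, 1/12, ...
print(total)             # close to 0.34657...
print(math.log(2) / 2)   # ln(2)/2 = 0.34657359...
```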

We can’t win. If we say it converges to anything but $\ln{2}$, Kyle will tell us we’re wrong. If we say it converges to $\ln{2}$, he’ll just show us this proof (which is perfectly valid, by the way). He is quite clearly up to his old tricks!

The Riemann rearrangement theorem

The alternating harmonic series isn’t the only series Kyle could do this with. Riemann’s theorem on conditionally convergent series tells us that we can rearrange any conditionally convergent series to converge to anything we like. To prove it, divide the conditionally convergent series $S$ up into two other series: $S^+$, containing all positive terms of $S$, and $S^-$, containing all negative terms of $S$. Both of these have two very important properties. First, the terms that are being added shrink towards zero as the series goes on. This follows from the fact that $S$ converges, and the terms of any convergent series must tend to zero. Second, both $S^+$ and $S^-$ diverge. To prove this, note that $S^+-S^-$ is the series of the absolute values of the terms of $S$, and this diverges precisely because $S$ is only conditionally convergent. Now, if both $S^+$ and $S^-$ converged, so would $S^+-S^-$, and we would have a contradiction. If only one of them converged, then $S=S^++S^-$ would diverge and we would have another contradiction. Both of them must therefore diverge.

To prove the theorem properly, pick any number $M$. We then create another series, $S_1$. This is made up of all the terms from $S$, rearranged. Specifically, we add terms from $S^+$ until the result is greater than $M$, and then we add terms from $S^-$ until the result is less than $M$. We then compensate with more terms from $S^+$, then from $S^-$, and so on. Since both $S^+$ and $S^-$ diverge, we will never be in a situation where $S_1$ is too much less than $M$ for us to make it greater by adding $S^+$ terms, or too much greater than $M$ for us to eventually make it smaller. Also, because terms of both $S^+$ and $S^-$ get smaller and smaller, we can approach $M$ much more closely as we add more terms. Think of it like parking a car. If you move six feet at a time, you are going to miss the parking space by much more than if you only move six inches. We’re basically moving the car backwards and forwards in ever-smaller amounts, trying to get it as close to $M$ as possible. It shouldn’t surprise you to hear that if you keep doing this to infinity, the rearrangement converges to $M$.
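Here’s a minimal sketch of that procedure in Python, played out on the alternating harmonic series; the function name, and the choice of $\pi$ as a target, are mine and purely illustrative:

```python
import math

def rearrange_towards(target, steps):
    """Greedily rearrange the alternating harmonic series towards `target`.

    Positive terms 1, 1/3, 1/5, ... play the role of S+;
    negative terms -1/2, -1/4, -1/6, ... play the role of S-.
    Returns the running total after `steps` terms have been used.
    """
    next_odd, next_even = 1, 2
    total = 0.0
    for _ in range(steps):
        if total <= target:      # too low: take the next positive term
            total += 1 / next_odd
            next_odd += 2
        else:                    # too high: take the next negative term
            total -= 1 / next_even
            next_even += 2
    return total

print(rearrange_towards(math.pi, 1_000_000))  # prints something very close to pi
```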

We can even make a conditionally convergent series diverge. First, add enough terms of $S^+$ to make the result bigger than $1$. Then, add $S^-$ terms to make it less than $-1$. Then, add more $S^+$ terms to make it bigger than $2$, $S^-$ terms to make it less than $-2$, and so on. This rearrangement zigzags back and forth, never settling down to a well-defined sum, and so it cannot converge to any given value.
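The same greedy idea with a moving target produces the zigzag. In this sketch the thresholds stop at $\pm 2$ only because the partial sums of $S^+$ and $S^-$ grow logarithmically, so pushing past $\pm 3$ would already take billions of terms:

```python
# Zigzag rearrangement of the alternating harmonic series:
# overshoot +1, undershoot -1, overshoot +2, undershoot -2, ...
next_odd, next_even = 1, 2
total = 0.0
for bound in (1, 2):
    while total <= bound:        # spend positive terms to pass +bound
        total += 1 / next_odd
        next_odd += 2
    print("passed", bound, "at total", total)
    while total >= -bound:       # spend negative terms to pass -bound
        total -= 1 / next_even
        next_even += 2
    print("passed", -bound, "at total", total)
# The running total swings ever wider instead of settling down.
```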

Much safer than messing around with conditionally convergent series. Image: Flickr user ZioDave, CC BY-SA 2.0.

Conclusion

This theorem is one of my favourites in all of mathematics, primarily because it’s so unexpected. If I were ever to run a mathematical con, I would try to use it somehow. It really doesn’t seem to make any sense: the order in which numbers are added can’t possibly change their sum, can it? On the other hand, the proof works and, if you experiment with adding a hundred, a thousand, or a million terms of various conditionally convergent series, the theorem does seem to hold up. I even wrote a program once to generate rearrangements. Given $M$, it would output the pattern of terms from $S^+$ and $S^-$.

The other thing I like about this theorem is how easy it is to prove. The last really “famous” proof in popular culture was Andrew Wiles’s proof of Fermat’s Last Theorem. This took over one hundred pages and was impossible even for many professional mathematicians to understand. I have heard the argument that, in this age of hundred-page proofs and thousand-gigabyte calculations, understandable mathematics is impossible to write. That could not be more ridiculous. Riemann’s theorem is still interesting and useful today, and it should be possible for most people who are interested in mathematics to follow the proof.

A version of Kyle’s story originally appeared on my blog, The Wandering Mathematician.

Michael Kielstra goes by many names. As 13Clocks, he has written a number of websites and Android apps. As the Wandering Mathematician, he maintains a weekly mathematical blog. As himself, he is a student in England and an avid reader of Chalkdust.
