# The big argument: Is dy/dx notation better than y’ or ẋ?

## Yes: dy/dx all the way, argues Ellen Jolley

Do you take me for an engineer, sir? The only acceptable notation for a derivative is the original notation created by Leibniz: $\mathrm{d}y/\mathrm{d}x$. This is the only notation that demonstrates precisely what the derivative is: the limit of the change in $y$ over the change in $x$ as the change in $x$ tends to zero. I will not stand idly by as the integrity and beauty of the derivative are lost in a sea of dashes and dots.
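Indeed, the notation simply *is* the definition it came from, with $\Delta$ denoting a finite change:

$$\frac{\mathrm{d}y}{\mathrm{d}x} = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}.$$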

Anyone who insists on such primitive notation clearly cannot have ventured far in their study of calculus: any student of multivariable calculus can see plainly that these pathetic diacritics are simply not up to the task. A simple extension to Leibniz’s notation allows me to write with ease any order of partial derivative with respect to any number of variables I choose; meanwhile, the dashing-dotting hooligans are left scrambling.
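For instance, a third-order mixed partial derivative sits comfortably in the extended Leibniz notation,

$$\frac{\partial^3 f}{\partial x^2\,\partial y},$$

which records both the total order and exactly which variables are involved. Primes and dots, by contrast, do not even say which variable is being differentiated.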

High schoolers can also reap the benefits of Leibniz’s original notation: they may initially be bemused by the concept of a fraction that is not a fraction—but as soon as they study integration by substitution, off and away they go, writing $\mathrm{d}u = 2x\,\mathrm{d}x$ and so on. What exactly are we to do with $u’$? Prime-root it? And I suppose $\dot{u}$’s reciprocal is $\begin{smallmatrix}\displaystyle u\\ \cdot \end{smallmatrix}$?
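To spell out the substitution that those high schoolers breeze through: with $u = x^2$, writing $\mathrm{d}u = 2x\,\mathrm{d}x$ lets the differentials cancel as if they were fractions,

$$\int 2x\cos(x^2)\,\mathrm{d}x = \int \cos u\,\mathrm{d}u = \sin(x^2) + C.$$

No prime or dot performs that trick.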

## No: dy/dget in the bin, argues Sophie Maclean

As mathematicians, we love dealing with fundamental truths, and as truths go, ‘humans are lazy’ is about as fundamental as it gets. Mathematicians, as a subset of humans (it’s true—I looked it up), must therefore also be lazy. And we can see empirical evidence of this—I once sat staring at a problem for an hour trying to work out which method of solving it required the least writing. I’m pretty confident the phrase ‘work smart, not hard’ was invented for mathematicians.

So then why would anyone ever write $\mathrm{d}y/\mathrm{d}x$?! The effort it takes compared to $\dot{x}$ is vast. Furthermore, this will occur repeatedly throughout a paper! If you thought writing $\mathrm{d}y/\mathrm{d}x$ out by hand was slow, wait until you try typing it in LaTeX. I should point out here that I’m not the one typesetting this argument, hence I have no qualms about repeatedly writing $\mathrm{d}y/\mathrm{d}x$. $\mathrm{d}y/\mathrm{d}x$, $\mathrm{d}y/\mathrm{d}x$, $\mathrm{d}y/mathrm\{d\}x$.

It’s also so much quicker to read $\dot{x}$ than $\mathrm{d}y/\mathrm{d}x$. I’m all about the marginal gains, but in this case the gains are on an astronomical scale! And then you get on to the environmental impact. Wasting paper space writing $\mathrm{d}y/\mathrm{d}x$, when $\dot{x}$ does exactly the same job, is frankly not justifiable. What would Greta say?

# The big argument: Is the Einstein summation convention worth it?

The Einstein summation convention is a way to write and manipulate vector equations in many dimensions. Simply put, when you see repeated indices, you sum over them: for example, $\sum_{i=1}^N a_i b_i$ is written simply as $a_i b_i$.
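The convention even has a direct counterpart in code: NumPy’s `einsum` takes index strings written in exactly this style, where a repeated index is summed over. A minimal sketch (the arrays here are just illustrative values):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a_i b_i : the index i repeats, so it is summed -- the dot product.
dot = np.einsum("i,i->", a, b)

A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)

# A_ik B_kj : k repeats, so it is summed -- ordinary matrix multiplication.
C = np.einsum("ik,kj->ij", A, B)
```

Here `dot` equals $1\cdot4 + 2\cdot5 + 3\cdot6 = 32$, and `C` agrees with `A @ B`.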

## Yes: worth it, argues Ellen Jolley

This debate boils down to just one question: how much of your life do you spend doing tensor algebra? Those of us who undertake a positive amount of tensor algebra or vector calculus know that the goal is to be done with it as fast as possible! Try tensor algebra even five minutes without using the summation convention—I promise you will tire of constantly explaining “yes, the sum still starts from $1$, and yes, it still goes to $N$.”

You’ll scream, “All of them! I am summing over all indices! Obviously! Why’d I ever skip some??” If you’re confused how many you’ve got, use this simple guide: physicists use four; fluid dynamicists use three; and Italian plumbers use two. Wouldn’t it be nice to avoid saying this in every equation?

You may cry that it’s easier to make mistakes with the convention; but for applied mathematicians, the joy comes in speeding ahead to the answer by any means—time spent on accuracy and proof is time wasted. And as the great mathematician Bob Ross said: there are no mistakes, just happy little accidents!

## No: not worth it, argues Sophie Maclean

Before writing this argument, I had to Google ‘summation convention’ which is all the evidence I need for why it’s just not worth it. I’ve learnt how to use the convention—multiple times! In fact, I’d say it’s something I’m able to use, yet I’m still not sure I know exactly what it is.

Some of our readers won’t have ever heard of it (which is one strike against it). Some have heard of it but won’t know much about it (another strike). But I guarantee none would be confident saying they can use it without making any errors (if you think you would be, you’re in denial).

We don’t even have need for the convention! We already have a suitable way to notate summation:
$\sum$
It’s taught to schoolkids. There is no ambiguity. And it’s so much less pretentious. Yes, the summation convention is fractionally faster to write out, but mathematicians are famed for being lazy and aloof—maybe dispensing with it is all we need to break that stereotype!