On Monday February 26, we’ll be presenting the results of last term’s Three Pillars of the Digital project as a Monday Evening Lecture (5:30pm in E002, KX). We’ll say a bit about what we did but, more importantly, raise some points for discussion about the place of maths and the digital in creative arts education.

The last talk we gave in this programme, two years ago, outlined the goals and outlook of the Fine Art Maths Centre as a whole. Rich started off with Part 1, accompanied by a silent edit of this video of John Milnor lecturing:

[Embedded video: John Milnor lecturing]

Part 2 is here.

Part I

The hand we’ve been watching belonged to John Milnor. Its movements were made in January 1965, in a lecture theatre in Denver, Colorado, before a large audience of professional mathematicians from a wide variety of fields. It writes, draws, scribbles and gestures; its shadowy presence reminds us that mathematics is something we make with our hands.

The talk lasted about an hour, and this film shows more or less everything he wrote. We see symbols that look somewhat like algebra, perhaps the kind of thing most of us expect maths to look like. But we see other things too: diagrams involving arrows, representational sketches and sentences written in something close to ordinary English. These different codes are not isolated but cross-refer: together they map out a sort of conceptual territory.


What was being charted in this talk was a new and mysterious country to all but a handful of those present: the field of differential topology. It had been invented by Milnor and his contemporaries over the previous decade. In 1956 he had proved a surprising result about high-dimensional spheres using radical new techniques, and it was these that he sought to describe to his audience.

The field Milnor had invented was a new branch of topology. Topology studies how spaces can be connected together; not just real, physical space but any space that can be imagined. Topologists can study high-dimensional spaces, spaces that are curved in seemingly impossible ways, spaces that are radically discontinuous, or that have degenerate points where strange things happen. Topologists ask: just what could space be like, and how could we describe it?

It’s a qualitative and global subject. The spherical topology of the Earth’s surface, for example, can only be appreciated from a distance. Up close, it might just as well be flat. Topology stands back and looks at the whole space and asks questions like: Does it have holes in it? Or: Starting from one point, can I find a path that leads to any other point? If I walk around the space long enough, will I find I’ve come back to where I started, or perhaps that I’ve been reflected from left to right, as if in a mirror? What kinds of pictures can I draw inside this space?
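One classic way such qualitative questions get precise answers is the Euler characteristic, which we add here as an illustration (it is not mentioned in the talk). Divide a surface into V vertices, E edges and F faces; then the number

```latex
\chi = V - E + F
```

comes out the same however the division is made: it is 2 for a sphere and 0 for a torus. Since the number never changes under stretching or bending, the two surfaces can never be deformed into one another.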

These are qualitative questions in the sense that they don’t seem to ask for numerical answers. They have nothing obviously to do with measurement or calculation. They ask about the properties of a space, not lengths, angles, areas, volumes. This is what separates topology from geometry.

In fact, at the beginning of the subject in the late nineteenth century this focus separated topology from almost all other fields of maths. Existing techniques weren’t obviously applicable to it, and it acquired a reputation as an exotic and rather specialised subject. Yet for the same reason, the basics of topology can be learned by anyone: since so little of the standard machinery applies, mathematical expertise really isn’t needed, or even very helpful.

What Milnor was introducing to assembled members of the Mathematical Association of America was an application of calculus to topology. This was an intrinsically strange idea. Differential calculus studies only “local” phenomena: the things that happen very close to a single point. What’s more, it had always been part of geometry: it measures and calculates. It seems designed to ignore the very qualitative, global properties of space that topologists were interested in.

Calculus, though, was a much older and more mature body of mathematics. It had its roots in the physics of Isaac Newton and was developed into a fully-fledged theory by French mathematicians of the early 1800s. Throughout the nineteenth century it had evolved in parallel with science and engineering, becoming a sophisticated language that could express theories of electromagnetic fields and the statistical behaviour of gases. By 1915 even the most abstract parts of geometry had found applications in Einstein’s general relativity. These tools were well understood and widely taught.

But nobody thought they might be useful for topology. The spirits of the two endeavours seemed diametrically opposed. Yet there were precursors. In 1931 Georges de Rham had proved that the behaviour – measured by calculus – of a process in a small region around a point depends on the topology of the whole space: features like holes, even if far away, exert a mysterious influence. The qualitative structure of the whole space produces detailed, local phenomena.
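De Rham’s result is today stated as an equivalence between two ways of detecting holes: one defined by calculus (differential forms) and one defined purely by the global shape of the space. In modern notation (ours, not the talk’s):

```latex
H^k_{\mathrm{dR}}(M) \;\cong\; H^k(M;\mathbb{R})
```

The left-hand side is built from local, calculus-style data on the space M; the right-hand side depends only on its topology.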

Milnor and his contemporaries turned this around, asking what calculus could do for topology. It turned out it could do a lot.

Heavy Machinery

For all the richness of the text Milnor produces, you may notice that there are only a few numbers, no more than you might see in a talk on any topic, and there are no complicated calculations. This is not merely because the talk is intended to be an overview. Modern mathematics is at least as much about concepts as it is about numbers. Problems that can be solved by mere number-crunching or algebraic juggling are easy; even before the invention of the computer, individuals with great reserves of patience and meticulous attention to detail tended to solve such problems fairly quickly. They may have interesting consequences but they aren’t usually intellectually thrilling in themselves.

The culturally important face of mathematics is almost always conceptual, in the literal sense that mathematicians invent concepts and connect them together in ways that make new ways of thinking possible.

Constructing such heavy machinery is hard work; part of the appeal, of course, is that it lightens subsequent labour. A famous contemporary of Milnor’s, Alexander Grothendieck, also favoured this approach:

The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration… the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it… yet it finally surrounds the resistant substance.

An ocean of abstract mathematics was, on this analogy, capable of eroding away the most obstinate of difficulties. Although Grothendieck’s machinery was at least as hard for his contemporaries to learn as Milnor’s, it too showed its worth by producing an understanding of geometry that seems impossible without it. Perhaps it is no coincidence that one of its central concepts, the structure known as a “sheaf”, is a device that explicitly connects the local and global aspects of space.

Proof and Concepts

The early twentieth century was dominated by the rise of formal logic: the philosophical legacies of Frege, Peirce and Russell. In mathematics David Hilbert’s formalist project cast a long shadow, and in France the Bourbaki group set to the task of rewriting the entire subject in a language of dry austerity. They were suspicious of appeals to intuition and their books – even books on geometry – shunned illustrations of any kind, as if putting aside childish things. Their mathematics was above all symbolic: a modernist mathematics, obsessed with its own purity. It retreated into a state of pristine isolation from everything else, turning firmly inwards and becoming accessible only to a tiny elite.

It was here that the central task of the mathematician was defined as the making of proofs.

The post-war years saw a flowering of visual and spatial mathematics. It has not led to the abandonment of symbolic methods. Rather, a polysemous practice has emerged in which various codes coexist and interact. Milnor produces this practice in his lectures spontaneously and, I suspect, largely unconsciously; it’s just how the subject looks to him. Much later he said:

I think most textbooks I have written have arisen because I have tried to understand a subject. […] I have a very visual memory and the only way I can be convinced that I understand something is to write it down clearly enough so that I can really understand it.

Although he says he “writes it down”, he connects this with a “visual” rather than “verbal” form of cognition; perhaps this is not surprising when we see what he means by “writing”.

Many maths textbooks read like this, especially those dealing with specialised topics. They are like conceptual portraits: images of the territory of the subject that the author has managed to form by their own idiosyncratic process. Each reader needs to form their own map, and each will be different: this is why maths books are not to be read like novels, or even like works of philosophy.

In the distant past, a mathematician spent much time engaged in laborious calculation: there are tales of seventeenth- and eighteenth-century discoveries that were only made possible by almost superhuman feats of computation. Such tasks can now be carried out by the phone in your pocket, and their status as “mathematics” has fallen very low indeed. Though we still make schoolchildren learn to carry out long division by hand, nobody really does it that way, and many of us struggle to imagine why it’s still on the curriculum. Certainly such drudge-work is unlikely to strike a mathematician as particularly mathematical.

The change of focus from calculation to proof was a response to two things: a rise in the importance of formal method and a decline in the importance of computation. The latter was accelerated enormously by the appearance of ever more practical and powerful calculating machines.

But as the twentieth century went on, some proofs began to emerge that were incredibly long and complicated. Famously, the proof of the Four Colour Theorem was produced using a computer that made billions of calculations; no human being could ever read and understand every step. Even proofs produced entirely by humans became so complicated that they could turn out to contain obscure but serious flaws. Andrew Wiles’s original proof of Fermat’s Last Theorem is one example, though he managed to fix the problem once it was discovered. In 1999 Vladimir Voevodsky discovered a severe mistake in one of his own results that had been published years previously and used by many other researchers; this, as he put it, “got him scared”, and led him to consider how the extremely subtle, complicated proofs of contemporary maths can be better organised and managed.

And today we may be on the brink of another revolution. Computer software already exists that can check a mathematical proof for correctness, although the process is currently unwieldy and time-consuming. In my lifetime, I have no doubt that it will improve vastly, to the point where highly complex proofs can be checked rapidly. Perhaps, too, the same timeframe will bring us software that can make its own proofs; work is already well underway in that area, too.
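To give a flavour of what such software looks like today, here is a tiny machine-checked statement written in the Lean proof assistant (one system among several; the example is ours, not from the lecture):

```lean
-- The claim: for every natural number n, n + 0 = n.
-- Lean will refuse to accept the file unless the proof is valid;
-- here the two sides are equal by computation, so `rfl` suffices.
theorem n_plus_zero (n : Nat) : n + 0 = n := rfl
```

Real theorems checked this way run to many thousands of lines, which is part of why the process remains unwieldy; but once the checker accepts a proof, every step has been verified by the machine.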

This will not just be a matter of greater convenience or efficiency; it will completely revolutionise our idea of what it means to do mathematics.

If the pocket calculator rang the final death knell of the mathematician as athlete of arithmetic, perhaps the computerised proof system will bury the old joke that a mathematician is “a machine for turning coffee into theorems”. Perhaps we will turn this around and come to think of proofs as something best left to machines. If so, maths will become more qualitative, descriptive, intuitive, synthetic, conceptual.

It’s already all of these things: what I mean is, these aspects may come to predominate over proof just as it, in its turn, predominated over calculation.

The following saying is attributed to the geometer Federigo Enriques: “It is a nobleman’s work to find theorems, and it is a slave’s work to prove them. Mathematicians are noblemen.”

Today, of course, remarks about noblemen and slaves may strike us as rather ill-considered. He probably said it, if he did at all, some time around 1900, when the tide was turning against him and proof-making was becoming the primary activity of his field: so his contemporaries and immediate successors would have found it jarring, too. Perhaps in our present century it will be recast, and people will come to say: It is a computer’s work to prove theorems, but an artist’s work to find them.

(Part 2 will be published on Monday.)