British Mathematical Colloquium, day 1

The British Mathematical Colloquium began in St Andrews today. I will try to report some highlights, but I am recovering from food poisoning, so my account may be a bit sketchy in some places. Also, of course, there is no promise that I will continue!

As well as the standard BMC fare of plenary lectures and “morning lectures”, there are five workshops, in algebra, analysis & probability, combinatorics, dynamics, and (a St Andrews speciality) history of mathematics.

The organisers had worried that attendance might be down: it is at an unusual time of year, convenient for us but less so for some universities south of the border; and St Andrews is fairly remote. But in the event, the main Physics Lecture Theatre was almost full for the opening lecture. We were welcomed by the leading organiser, our Regius Professor, who (after doing an impression of an airline cabin-staff member in pointing out the fire exits) gave way to a recent holder of one of Britain’s other two Regius chairs in mathematics, Martin Hairer.

Martin began by pointing out two guiding principles in probability theory: symmetry (“equivalent” outcomes, such as heads and tails in a coin toss, should have the same probability), and universality (harder to describe, but roughly, if a random outcome depends on many different sources of randomness, its detailed description should not matter too much). His first example, apart from the Central Limit Theorem for something like an infinite sequence of coin tosses, was Brownian motion. The apparently random motion of pollen grains in water was explained in around 1905 by Einstein and Smoluchowski (independently) as the result of many small nudges from random impacts of water molecules; they showed that the distribution should satisfy the heat equation, and this was verified experimentally by Perrin ten years later, the first experimental “proof” of the atomic hypothesis. But five years earlier, Bachelier had investigated the movement of share prices on the stock exchange, and had come to exactly the same conclusion (a precursor of the Black–Scholes equation). The rigorous mathematical description (a measure on the space of continuous functions) was given by Wiener, and the universality was proved by Donsker.
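(A small illustration of my own, not from the talk: Donsker’s invariance principle is easy to see empirically in a few lines of Python, assuming only numpy. A simple coin-tossing walk, suitably rescaled, behaves like standard Brownian motion whatever the fine details of the steps.)

```python
import numpy as np

rng = np.random.default_rng(0)

def rescaled_walk(n, t_grid):
    """Donsker rescaling: W_n(t) = S_{floor(nt)} / sqrt(n), where S_k is a
    simple +/-1 random walk.  As n grows, the law of the path converges to
    that of standard Brownian motion."""
    steps = rng.choice([-1, 1], size=n)
    walk = np.concatenate(([0], np.cumsum(steps)))
    return walk[(n * t_grid).astype(int)] / np.sqrt(n)

t = np.linspace(0, 1, 101)
# B(1) has variance 1; the rescaled walk at time 1 should match.
samples = np.array([rescaled_walk(10_000, t)[-1] for _ in range(2_000)])
print(samples.var())   # close to 1
```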

But there are other universality classes, such as the Ising model close to the critical temperature, whose limiting behaviour is far from “Gaussian”. Its behaviour is conjectured to be universal for many phase transition models, but this has not been proved. Yet another class consists of surface growth models, which describe many physical situations but also the shape formed by random Tetris pieces falling from the sky.
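(Again an aside of my own: the Tetris picture can be made concrete with ballistic deposition, one of the standard toy models in this class. A minimal sketch, with the boundary conditions and parameters chosen purely for illustration:)

```python
import numpy as np

rng = np.random.default_rng(1)

def ballistic_deposition(width, n_blocks):
    """Drop unit blocks into uniformly random columns (periodic boundary);
    a falling block sticks when it lands on its own column or snags on a
    taller neighbour -- the "random Tetris" rule.  The interface this
    produces belongs to the KPZ universality class."""
    h = np.zeros(width, dtype=int)
    for col in rng.integers(0, width, size=n_blocks):
        h[col] = max(h[(col - 1) % width], h[col] + 1, h[(col + 1) % width])
    return h

surface = ballistic_deposition(500, 200_000)
print(surface.std())   # interface roughness; grows like t^(1/3) for KPZ
```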

Martin’s title was “Bridging Scales”, and his interest was in processes whose large-scale and small-scale limits are known distributions (such as Gaussian and KPZ): what happens on intermediate scales? He and his coauthors have a quite general theorem involving solutions of a certain type of stochastic differential equation; there is not one behaviour, but a family of “canonical” behaviours described by a nilpotent Lie group. But I was floundering at this point.

I spent the next hour and a half in the History workshop, where I will describe only the first talk. Ursula Martin had a research grant to investigate how mathematical impact occurs. She started off by tracing the opposition, from earliest times, of two views: one expounded by G. H. Hardy (in A Mathematician’s Apology) and by Vannevar Bush (whose report, containing very little data, was perhaps the inspiration for setting up the NSF, and included the words “scientific progress … results from the free play of free intellects …”); the other can be traced back to the founding of the Royal Society in the 17th century (“to extend the boundaries of Empire, and of arts and sciences”), and can be heard in almost every pronouncement of funding bodies today: we should deliver highly skilled people to the labour market, create spin-off companies, and so on.

The conclusions that Ursula and her colleague Laura Meagher came to were interesting. In a previous paper by Meagher and Nutley, impact was classified into five types (and reading this you see how impoverished the REF definition of impact is): conceptual, instrumental, capacity building, attitude or cultural change, and enhancing connectivity. Apart from the fact that essentially only the second and third count for REF, Meagher and Martin found that the REF protocols reinforce the myth that impact only happens in a linear order, whereas in fact it is a tangled web whose components cannot be separated.

The take-home messages were that impact is about people, not processes, and requires “knowledge intermediaries” rather than technology transfer offices; and, most important, we need more good stories.

The final talk of the day was the public lecture by Julia Wolf, on finding structure in randomness. It had attracted a fair number of people who were not BMC delegates (she asked for a show of hands, and was relieved at the result).

Her message was: if our object is random, or even just “looks random” (technically, quasi-random), then we can learn a lot about it, in particular counts of subconfigurations; if it is not random, then it is structured, and this gives us a lot of information; but even these two principles are not universally applicable. Her examples were taken from graphs and sets of integers. (Despite my last-but-one post, there is of course no problem in choosing a random set of integers!) For graphs, there is the theorem of Fan Chung, Ron Graham and Richard Wilson: if a graph has edge density p and has the number of 4-cycles that a random graph with edge probability p would have, then it shares many properties with random graphs. She said a few words which made this clearer to me than it has ever been: the number of 4-cycles in a graph with given edge density is minimised by the random graph, and therefore interesting stuff clusters at that point.
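(This is easy to test numerically. Here is a sketch of my own, not Julia’s: it counts the 4-cycles in a sample of G(n,p) from the trace of A⁴ and compares with the expected count 3·C(n,4)·p⁴, which a quasi-random graph of density p should roughly match.)

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)

def count_4cycles(A):
    """tr(A^4) counts closed 4-walks; subtract the degenerate walks that
    reuse an edge (2*sum(deg^2) - 2m of them) and divide by 8, since each
    genuine 4-cycle is traversed from 4 starting points in 2 directions."""
    deg = A.sum(axis=1)
    m = deg.sum() // 2
    tr = np.trace(np.linalg.matrix_power(A, 4))
    return (tr - 2 * deg @ deg + 2 * m) // 8

n, p = 200, 0.3
A = np.triu((rng.random((n, n)) < p).astype(np.int64), 1)
A = A + A.T                              # symmetric, zero diagonal

expected = 3 * comb(n, 4) * p**4         # 3*C(n,4) potential 4-cycles,
print(count_4cycles(A), expected)        # each surviving with prob p^4
```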

A couple of further snippets: she described the Green–Tao theorem on arithmetic progressions in the primes, and mentioned that the longest known such progression has length 26, while finding one of length 27 is completely out of computational reach; the Szemerédi regularity lemma says that, given ε, any large graph can be dissected into a number of pieces bounded by a function of ε such that the edges between almost all pairs of pieces form quasi-random bipartite graphs, but the function of ε is a tower of 2s of height ε⁻²; and there is a possible game-changer on the horizon, the polynomial method recently used so spectacularly by Croot, Lev and Pach and by Ellenberg and Gijswijt (which I described here).
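(For a sense of scale, here is a naive brute-force search of my own devising, nothing like the large distributed computation behind the length-26 record: below 1000 the longest arithmetic progression of primes it finds has length a mere 7.)

```python
def primes_below(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[:2] = b'\x00\x00'
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, flag in enumerate(sieve) if flag]

def longest_prime_ap(limit):
    """For each prime a and even gap d, extend a, a+d, a+2d, ... while the
    terms stay prime and below the limit; keep the longest run seen."""
    primes = primes_below(limit)
    prime_set = set(primes)
    best = []
    for a in primes:
        for d in range(2, limit - a, 2):   # gap is even for odd primes
            run, x = [a], a + d
            while x < limit and x in prime_set:
                run.append(x)
                x += d
            if len(run) > len(best):
                best = run
    return best

print(longest_prime_ap(1000))
# -> [7, 157, 307, 457, 607, 757, 907], a progression of length 7
```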

Then there was a welcoming wine reception, but I had had enough for the day and slipped away.
