Devon holiday

Last week we were on holiday, with Liz in Yealmpton (a village with a remarkable sequence of consonants in its name) on the river Yealm in west Devon.

Liz had organised a remarkable holiday for us in this attractive part of England. There are many National Trust properties in west Devon and east Cornwall; we visited two (Lanhydrock, near Liskeard, and Saltram, on the outskirts of Plymouth). On Bodmin Moor we saw the stone inscribed by King Doniert, last King of Cornwall, asking for prayers for his soul; the Hurlers (two circles of standing stones, which according to a legend from much later were people turned to stone for playing at hurling on the Sabbath), and the Cheesewring (a rocky tor named after the mashed apples used to make cider).

Dartmoor was not the bleak wilderness I had imagined, but a country of rolling hills and river valleys, rich grass and some small cultivated fields, ponies and sheep, villages and towns. We visited the Garden House, the clapper bridge at Postbridge, and the ancient oak wood of Wistman's Wood (a remnant of the pre-human vegetation on the moor). Liz even showed us the house where she was born.

We saw Burgh Island, just off the mainland and connected by a sand causeway which is covered by water at high tide. On the top once stood St Michael's monastery, a third member of the trinity containing St Michael's Mount in Cornwall and Mont Saint-Michel in Normandy. It is most famous now for the hotel in which an episode of the television series Poirot was filmed.

We visited Totnes, with its ancient Guildhall containing two Australian connections: a monument to William Wills, of Burke and Wills fame, a local boy; and a display about Baron Birdwood, first commander of the Australian and New Zealand Army Corps (ANZAC).

We did several walks near Yealmpton, including two short sections of the South West Coast Path (with views of the Great Mew Stone and the Eddystone Light) and country walks including one to Steer Point on the Yealm estuary (like all rivers here it ends in a drowned valley which is now full of yachts).

We met several members of Liz’s family, and ate and drank in the village pub.

Infinity and Foundation

After the reviving effect of a week’s holiday, I have been thinking about Zermelo–Fraenkel set theory, inspired by a very nice student project I supervised (about which I hope to say something here sometime soonish). I have come across a couple of paradoxes. I believe I know the way round the first, but for the second I am unsure.

Warmup: Peano arithmetic

To begin, consider Peano arithmetic, the usual axiom system for the natural numbers. The basic ingredients are zero and the successor function: the axioms assert that zero has no predecessor but every other element has a unique predecessor (where “predecessor” means “inverse image under the successor function”). Order, addition and multiplication are functions of two variables defined by means of the successor function.

Of course, to a mathematician, the most important property of the natural numbers is that you can reach any number by starting at zero and applying the successor function repeatedly. Unfortunately, you can’t say this in first-order logic. So, instead, Peano requires that, for any property P expressed in the first-order language of the theory (with zero, successor, order, addition and multiplication as the non-logical symbols), if P(0) holds, and if P(n) implies P(n+1) for all n, then P(n) holds for all n.
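
In symbols, the induction schema says: for each formula P of the first-order language,

```latex
\[
\bigl(P(0)\ \wedge\ \forall n\,\bigl(P(n)\rightarrow P(n+1)\bigr)\bigr)
\ \rightarrow\ \forall n\,P(n).
\]
```

Note that this is one axiom for each formula P, not a single second-order axiom quantifying over all subsets.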

Now the upward Löwenheim–Skolem theorem tells us that this first-order theory (assuming it is consistent) has unintended models; the axioms hold in structures of arbitrarily large cardinality. Moreover, the theorem of Engeler, Ryll-Nardzewski and Svenonius tells us that it even has unintended countable models: the numerals realise infinitely many distinct 1-types, so the theory is not ℵ₀-categorical, and hence has non-isomorphic countable models. Can we say anything about these non-standard models?

Any such model contains "infinite integers". One way to see how these arise is as follows. Adjoin a constant symbol c to the language, and add as axioms the sentences c > n for all natural numbers n (where, for example, 248 is the term in the language obtained by applying the successor function to zero 248 times). Now any finite subset of the new axioms is consistent with the Peano axioms (if c > n₀ is the formula involving the largest natural number in the set, interpret c as any number greater than n₀). So, by the Compactness Theorem of first-order logic, the entire set is consistent with the Peano axioms, and there is a model in which the element interpreting c is greater than every natural number.

The moral is that we can’t keep infinity out of mathematics (if we restrict ourselves to first-order logic).

The existential axioms of ZF

The following discussion takes place in Zermelo–Fraenkel set theory, with or without the Axiom of Choice (this makes a difference but not a serious one).

Raymond Smullyan asks, in one of his books, “Why is there something rather than nothing?” Most of the axioms of ZF are conditional, “If such-and-such sets exist, then some other set exists”. In the usual formulation, two axioms assert that something rather than nothing exists. The first, paradoxically, is the Empty Set axiom, asserting that there exists a set e such that, for all x, x is not a member of e. Note that, in the presence of the other axioms, this is equivalent to saying “There exists a set”. One way round, the implication is clear. In the other direction, if y is any set, then the set
{x ∈ y : x ≠ x} exists by the Selection Axiom, and clearly it is the empty set.

The other existential axiom is the Axiom of Infinity, which asserts that an infinite set exists, or, more or less, that the set of natural numbers exists. (As just explained, the Axiom of Infinity thus implies the Empty Set Axiom, but I will keep the latter for expository purposes.) This asserts the existence of a set S with two properties:

  • the empty set belongs to S;
  • if x is in S, then so is x∪{x}.

Then the intersection of all subsets of S satisfying these two properties is the set of natural numbers (in the von Neumann formulation).
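
As a concrete illustration (a sketch of mine, not part of the original argument), the von Neumann naturals are easy to build in a few lines of Python, using frozensets so that sets can be members of sets:

```python
def von_neumann(n):
    """The von Neumann natural n, built as nested frozensets."""
    x = frozenset()              # 0 is the empty set
    for _ in range(n):
        x = x | frozenset([x])   # successor: x ∪ {x}
    return x

# Each n has exactly n members, namely the encodings of 0, 1, ..., n-1.
three = von_neumann(3)
assert len(three) == 3
assert von_neumann(2) in three
```

The successor step x ∪ {x} is exactly the second clause of the Axiom of Infinity above.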

Such a set is clearly infinite. But how do you prove that the existence of any infinite set implies the existence of a set with this property (without using the Axiom of Choice)?

Hereditarily finite set theory

If you don’t like the infinite, you might like to work in a set theory in which every set is finite; thus, given a set S, not only is S finite, but all elements of S are finite sets, all their elements are finite sets, and so on.

There is a very convenient model. I don't know who invented this, but it is essentially the same as Richard Rado's construction of the "random graph" (the unique countable homogeneous universal graph). The elements of the model are the natural numbers. The element x is a member of the element y if the xth digit in the base-2 expansion of y is 1 (that is, y is a sum of distinct powers of 2, of which 2^x is one).
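
The coding is easy to play with; here is a minimal Python sketch (my illustration, treating each natural number as the set it codes):

```python
def member(x, y):
    """x is a member of y: the xth base-2 digit of y is 1."""
    return (y >> x) & 1 == 1

def members(y):
    """The finite set coded by the natural number y."""
    return {x for x in range(y.bit_length()) if member(x, y)}

# 5 = 2^0 + 2^2 codes {0, 2}; unwinding, 2 = {1} and 1 = {0}, so 5 = {0, {{0}}}.
assert members(5) == {0, 2}
assert members(2) == {1} and members(1) == {0} and members(0) == set()
```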

This is nice, but as with Peano arithmetic, we get infinite surprises: the theory of this model has models which contain infinite sets. For let T be the set of first-order sentences satisfied by this model. Add a constant symbol c to the language, and sentences to say that c contains the first n elements of the model. (Each of these can be uniquely identified: for example, 5 = {0,{{0}}}, where 0 is the empty set.) As before, any finite set of these new axioms is consistent with T; so the entire set is consistent with T, and there exists a model of T containing a set which has as members all of the infinitely many elements of the model.

All this is not so surprising. The paradox comes from the fact that included in T is the negation of the Axiom of Infinity. So a model of a theory containing the negation of the Axiom of Infinity can contain infinite sets.

This suggests that the Axiom of Infinity is not really equivalent to the condition that there is an infinite set.

The Axiom of Foundation

The Axiom of Foundation has the job of forbidding infinite descending chains under the membership relation. However, it cannot be stated in this way by a first-order sentence, so a more ingenious method is required.

The standard formulation is that each non-empty set x has an element y such that x ∩ y is empty.
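
This formulation can be checked concretely in the hereditarily finite model described earlier, where the intersection of two coded sets is the bitwise AND of their codes: since x ∈ y forces x < y, the least member of y always serves as the required element. A small Python sketch (my own illustration, not from the post):

```python
def members(y):
    """Members of y in the base-2 coding: the positions of its 1 digits."""
    return [x for x in range(y.bit_length()) if (y >> x) & 1]

def foundation_witness(y):
    """Some member x of y with x ∩ y empty (i.e. x & y == 0), if one exists."""
    for x in members(y):
        if x & y == 0:   # intersection of the coded sets is bitwise AND
            return x
    return None

# Foundation holds in this model: since x ∈ y implies x < y, the least
# member of y can have no member in common with y.
assert all(foundation_witness(y) is not None for y in range(1, 1000))
```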

Suppose that there were an infinite descending chain

… ∈ x₂ ∈ x₁ ∈ x₀.

Using the Axioms of Replacement and Choice, we can form a set S consisting just of the elements xₙ satisfying these relations. But then this set fails the Axiom of Foundation; for, given any of its members, say xₙ, the element xₙ₊₁ is contained in both S and xₙ.

Note that the Axiom of Choice is used in this proof, since we have to choose one of the possible elements at each stage. The problem does not arise if the “infinite descending chain” is actually periodic. Indeed, since we may assume the Axiom of Choice here, this does not affect the paradox coming up.

A slight modification of the previous method provides us with the paradox. Adjoin infinitely many constants cₙ to the language, and all the formulae asserting that cₙ₊₁ is a member of cₙ. Any finite set of these formulae is consistent with ZFC, and again by compactness the whole collection is. But this asserts that an infinite descending chain exists, and so the Axiom of Foundation fails.

It may be that once we take the reduct which forgets the elements named by the constants cₙ, these elements no longer form a set of the model. But this shouldn't matter if we have the Axiom of Choice to allow ourselves to choose elements with the right properties.

Can anyone give a better resolution?

BCC26, 2

The first lecture on Thursday was by Benny Sudakov, on robustness of graph properties. For purposes of exposition, he had decided to focus on one particular property, that of being Hamiltonian. Dirac’s famous theorem tells us that a graph on n vertices with minimum degree at least n/2 has a Hamiltonian circuit. Robustness of the property of Hamiltonicity could mean one of several things: there are many Hamiltonian circuits; there are many pairwise disjoint Hamiltonian circuits; an enemy would have to make significant changes to the graph to destroy the property; and so on. All these hold (and can be quantified). Moreover, not only is Hamiltonicity robust in a graph satisfying Dirac’s condition, but it tends to be robust in a random graph where edge probabilities are large enough to force the property to hold.

The second plenary was by Gi-Sang Cheon, on new applications of Riordan arrays. This was a rather technical talk, full of information. Unfortunately, since Gi-Sang was substituting at rather short notice for a plenary speaker who was unable to attend, he didn’t have an article in the book, and when I asked him afterwards he said that the material he was speaking about was too new to be in any survey yet. So I won’t attempt to describe the talk.

Of the contributed talks, there were several worth mentioning, but one of the best was by Iain Moffatt. He told us that the crucial property of the Tutte polynomial is the deletion-contraction relation, and in any situation where one has such a relation one should expect a "Tutte polynomial". Why, he asked, are there three different Tutte polynomials for graphs embedded in surfaces in the literature? For the answer it is necessary to look more closely at deletion and contraction. Even if we use the nicest possible definition, where graphs are embedded so that faces are topological discs, trouble can arise when you delete or contract an edge. If you delete an edge going round a hole, a face of the resulting graph can fail to be a disc; and if you contract an edge which forms a non-contractible cycle, you pinch the surface to a point, so you must either put up with a pseudosurface, or take your scissors and cut the surface at that point. Thus you actually expect four Tutte polynomials (allowing or disallowing non-simply-connected faces, and allowing or disallowing pseudosurfaces).

In the final session, I had to miss some talks I would like to have heard in order to give my own, on synchronization and separation in the Johnson association scheme. The slides are in the usual place.

Dinner was in the Barony Hall, a huge former church just beyond the cathedral, now used by the University of Strathclyde. A very good dinner, very congenial and interesting conversation, and after dinner a ceilidh, at which I guess the majority of the participants had at least one dance. After the band stopped and the staff were clearing the hall, we headed back to the hotel.

Friday morning saw us check out of the hotel in good time to be at the conference by 9, where I was introducing Bill Chen. He had chosen well; his topic, the spt-function of Andrews, was very technical, but his talk was well judged for the morning after the conference dinner, with little excursions into historical detail (Euler, Hardy and Ramanujan, Dyson, Andrews, and others), and even a quote from Rabindranath Tagore on the relation between simplicity and complication. (The definition of rank which led to the "combinatorial" proof of two of Ramanujan's conjectures is very simple, but the proof is still very complicated.)

One contributed talk stood out for me: Bridget Webb's. She and Daniel Horsley have constructed uncountably many countable homogeneous Steiner triple systems. (I need to point out here, as Bridget did, that we are regarding a Steiner triple system as a functional structure – a quasigroup – whose substructures correspond to subsystems, rather than as a relational structure whose substructures are partial Steiner triple systems: a structure is homogeneous if any isomorphism between finitely generated substructures extends to an automorphism of the structure.) The construction required results on embedding partial Steiner triple systems which go well beyond what was previously known. I was reminded of Cherlin's classification of homogeneous digraphs, where there are uncountably many of a general type together with a few sporadic examples. I suspect that turning the construction here into a classification will be extremely difficult!

The final talk of the conference, which was also the Rado lecture, was given by Vít Jelínek on Ramsey-type problems in permutations. Since Strathclyde is one of the top centres of research on permutation patterns, this was a good choice of topic; Vít’s take on permutations is similar to mine, regarding a permutation as a set carrying two total orders, and he did indeed quote my classification of homogeneous permutations (but there was far more in the talk than that).

And so ended what I thought had been an excellent and memorable British Combinatorial Conference, and we all headed home (slightly delayed, in our case, by Sergey Kitaev, who had very kindly invited us to a barbecue at his home in Linlithgow, preceded by a very brief detour to Linlithgow Palace, the birthplace of Mary Queen of Scots).

Linlithgow Palace

BCC26, 1

About halfway through the British Combinatorial Conference, at last I have time to say something about it.

I am not just a combinatorialist, but I always feel that the BCC delegates are the group into which I naturally fit. It is always a pleasure meeting old friends and indeed new friends.

So to begin with the least interesting part: I am never really off duty at the BCC, since I have to chair the business meeting on Tuesday and the committee meeting on Wednesday, perform at the concert, give my talk, and run the problem session. Also, there are far too many contributed talks for me to go to more than a tiny fraction of them; I have already had to miss one talk by a co-author of mine, and others by colleagues or students.

One innovation this time has been the introduction of mini-symposia, of which there were three: on extremal combinatorics, on graph colouring, and on permutation patterns. The speakers in the mini-symposia were given a little longer than ordinary contributed speakers. These seem to have been popular, and look to be here to stay; I hope that the result is not perceived to be two classes of speakers (apart from plenary speakers). One highlight was a lovely talk by Bruce Sagan on descent polynomials and related things. A descent in a permutation π is a position i such that π(i) > π(i+1). MacMahon proved that the number of permutations with descents in specified positions (and no others) is given by a polynomial; these polynomials were ignored for a long time but are now of great interest, as are the related peak polynomials counting permutations with a given set of peaks.
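
By way of illustration (my own sketch, not from the talk), permutations with a prescribed descent set can be counted by brute force in a few lines of Python:

```python
from itertools import permutations

def descent_set(p):
    """Positions i (numbered from 1) with p(i) > p(i+1)."""
    return {i for i in range(1, len(p)) if p[i - 1] > p[i]}

def count_descents(n, S):
    """Number of permutations of {1, ..., n} whose descent set is exactly S."""
    return sum(1 for p in permutations(range(1, n + 1))
               if descent_set(p) == S)

# Descents exactly at position 1 in S_3: the permutations 213 and 312.
assert count_descents(3, {1}) == 2
```

For the fixed set S = {1} the count is n − 1, a polynomial in n, as MacMahon's theorem predicts.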

On Monday night we had a civic reception in the amazing Glasgow City Chambers.

The dignitary who addressed us was well briefed. This is the 100th anniversary of the birth of Bill Tutte, who among other things was a Bletchley Park codebreaker who was instrumental in breaking the Lorenz cipher used by the German high command in World War 2. She knew this, and gave us a well-informed potted account about Tutte (but didn’t know that, like her, Tutte was a graduate in Chemistry; I told her this when we spoke later).

On Tuesday we had the conference concert, the highlight of which was Bruce Sagan leading us in singing Scottish folk songs including “Wild Mountain Thyme”.

Just a bit about the main talks. The first was by Daniel Horsley, who showed us some of the results that have been proved about graph decompositions by “local switching” (including embeddings of partial Steiner triple systems, cycle decompositions, and almost regular decompositions). He illustrated what he meant by “local switching” (a somewhat vague but fruitful idea) by showing us that, if a graph has a proper t-edge-colouring, then it has one in which the sizes of the colour classes differ by at most one.

Rosemary Bailey began by showing us why experimental design is the study of relations between partitions of a set, of which the most important in this context are refinement, orthogonality, and balance; by looking at the possible relations between three partitions, many interesting classes of designs emerge naturally, including triple arrays, which I have recently spoken about here.

Julia Böttcher and Peter Allen are here with their new baby, and bravely take turns in looking after it while the other goes to a talk. (Actually, it is one of the best-behaved babies I have seen, and I wouldn’t object if it came to my talk.) On Tuesday, Julia talked about large-scale structures in random graphs. What is the threshold for a large structure, or for all the structures in some family, to appear in the Erdős–Rényi random graph? And when do they appear robustly, so that an adversary would have to make many changes to destroy the property? This is a fertile meeting ground of extremal and random graph theory.

Robert Morris talked about "cellular automata", but in fact the problems he discussed were essentially percolation problems. The set-up is that, initially, a finite set of cells in the plane square lattice are "infected"; at any time, a new cell becomes infected if all the cells obtained from it by one of a given set of translations are infected (in a simple example, if at least two of its nearest neighbours are); an infected cell remains infected. Does the whole plane eventually become infected, and if so, how fast?
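
For the simple two-neighbour rule on a finite grid, the process is easy to simulate; here is a short Python sketch (my own illustration, not Robert's code):

```python
def percolate(n, infected):
    """Run 2-neighbour bootstrap percolation on an n×n grid to a fixed point."""
    infected = set(infected)
    changed = True
    while changed:
        changed = False
        for x in range(n):
            for y in range(n):
                if (x, y) in infected:
                    continue
                hot = sum((x + dx, y + dy) in infected
                          for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if hot >= 2:   # infected if at least two nearest neighbours are
                    infected.add((x, y))
                    changed = True
    return infected

# A classical observation: infecting the n cells of the diagonal
# eventually infects the whole n×n grid.
n = 8
assert len(percolate(n, {(i, i) for i in range(n)})) == n * n
```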

Wednesday brought us two of the best talks so far. Graham Farr celebrated Tutte by a lecture on his life and his early work in graph theory, illustrated by some remarkable photographs. This prompted me to remember my closest encounter with Tutte. This was at the 1973 British Combinatorial Conference in Aberystwyth; Rosemary had already related my encounter with Donald Preece at that conference. The railway line to Aberystwyth was, as often happens, closed by a flood, and so we had to take the train to Worcester from where coaches were provided. As the bus wound along narrow roads through the Welsh hills, I went forward to have a word with someone. As I turned around to return to my seat, the bus gave an especially violent jerk, and I was thrown into Tutte’s lap.

I have always considered Tutte’s book Graph Theory as I have known it remarkable for being both a textbook on graph theory and a historical account of Tutte’s own passage through the subject, explaining the connections between the very disparate things (polynomials, network flows, spanning trees, symmetry, determinants, …), and how he was naturally led from one topic to the next. Graham certainly agreed with this assessment!

Julia Wolf introduced us to the use of entropy methods in additive combinatorics; these typically give improvements on the “energy methods” (as the earlier techniques are now called). She worked all the time in the elementary abelian 2-group, or vector space over the 2-element field, where the Fourier transform is particularly easy to understand, and its basic properties easy to prove. This was the closest plenary so far to the material at the Finite Geometry summer school last week.

A prize

Time for a little blast on my own trumpet.

At the meeting of the London Mathematical Society today, it was announced that I am the winner of the Senior Whitehead Prize this year.

The rules for the prize state, in part,

The SENIOR WHITEHEAD PRIZE is awarded, in memory of Professor J. H. C. Whitehead, a former President of the Society. It is awarded in odd-numbered years. […] The grounds for the award may include work in, influence on or service to mathematics, or recognition of lecturing gifts in the field of mathematics […]

The citation says,

PROFESSOR PETER CAMERON of the UNIVERSITY OF ST ANDREWS is awarded a SENIOR WHITEHEAD PRIZE for his exceptional research contributions across combinatorics and group theory. His fertile imagination and encouragement of others have sparked activity in many fields.

Cameron has long been a leading figure in combinatorics and permutation group theory, frequently impacting on other areas and, through beautiful insights and questions, initiating a stream of research. As an example, starting in 1976 (with the first result in a now rich literature on reducts of homogeneous structures), he has led research on infinite permutation groups. He identified oligomorphic permutation groups, which link to model theory, making connections to combinatorial enumeration and notions of randomness, and led to "Cameron's constant" for sum-free sets. Another 1976 paper, with Goethals, Seidel and Shult, makes links between finite graphs with least eigenvalue −2, root systems, and systems of lines at 60 and 90 degrees. His 1981 article on the implications of the classification of finite simple groups made explicit the new methods emerging for finite permutation groups, and he and many others have continued to reap rewards; examples include his proof with Praeger, Saxl and Seitz of the Sims Conjecture. He has papers on many topics in coding theory, design theory, finite geometry, and graph theory, often spotting key interconnections. In a current collaboration with Araújo and others stemming from the Černý conjecture in automata theory, he has opened up links between permutation groups, semigroups, and graphs.

As a lecturer, Cameron’s enthusiasm and lucidity make tricky arguments easy. He draws people to his subject, encouraging problem-solving for starting undergraduates, giving advanced ones a taste of research, and giving sparkling conference lectures. His list of around 40 PhD students includes, among many with academic careers, Eric Lander, the founding Director of the Broad Institute of MIT and Harvard.

Whoever wrote that knows, apart from much else, what I consider to be valuable in my work!

I feel immensely proud of this. Among other things, let me just note that I am only the third person to win both a Junior and Senior Whitehead Prize, the other two being Robert Mackay and Frances Kirwan. (Henry Whitehead, incidentally, was my mathematical great-grandfather; my mathematical father Peter Neumann also won the Senior Whitehead Prize in 2003, and though my mathematical grandfather Graham Higman did not, he won the Senior Berwick Prize in 1962 and was awarded the De Morgan Medal in 1974.)

Finite geometry in Brighton

This week I am in Brighton, at a summer school on Finite Geometry aimed at PhD students, organised by Fatma Karaoglu.

Despite the aim of the summer school, there are a few people here who are certainly not PhD students, including old friends Jim Davis and David Wales.

There are four short courses, given by Simeon Ball, Anton Betten, Jef Thas, and me. Jef and I lectured twice a day for the first two days, and then bowed out to be replaced by the other two for the last two-and-a-half days. There were also many contributed talks, and GAP sessions run by my colleagues Alexander Konovalov and Chris Jefferson.

My talks were loosely based on the third series of my Advanced Combinatorics lectures in St Andrews. They were centred around the classification of graphs with the strong triangle property: each edge is contained in a triangle such that any further vertex is joined to exactly one vertex of the triangle. These graphs fall into two infinite families, together with three sporadic examples. If this suggests links to the ADE affair, that is correct: it leads fairly directly to a classification of the ADE root systems. But it leads in other directions too. It is equivalent to the classification of generalised quadrangles with three points on a line, and the proof, slightly tweaked, shows the non-existence of infinite generalised quadrangles with this property. It generalises to the classification of graphs with the triangle property: each edge is contained in a triangle such that any further vertex is joined to one or all vertices of the triangle. This is the theorem of Shult and Seidel, an immediate predecessor of the celebrated Buekenhout–Shult theorem classifying all the finite-rank polar spaces.
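
The defining condition is easy to test mechanically on small graphs; here is a Python sketch of it (my own illustration, not from the lectures):

```python
from itertools import combinations

def strong_triangle_property(vertices, edges):
    """Check: every edge lies in a triangle such that each further
    vertex is joined to exactly one vertex of that triangle."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for u, v in edges:
        if not any(all(len(adj[w] & {u, v, t}) == 1
                       for w in vertices if w not in {u, v, t})
                   for t in adj[u] & adj[v]):
            return False
    return True

# K_3 satisfies the condition vacuously; K_4 does not, since the
# fourth vertex is joined to all three vertices of any triangle.
V = list(range(4))
assert strong_triangle_property(V[:3], [(0, 1), (0, 2), (1, 2)])
assert not strong_triangle_property(V, list(combinations(V, 2)))
```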

The notes are on the course website, here.

Jef spoke about arcs and caps in finite projective spaces, from Segre’s classification of conics in projective planes over fields of odd order (including Segre’s important “lemma of tangents”) to results on generalisations where instead of points one considers subspaces of arbitrary direction, and the connections with linear MDS codes. A very full and detailed survey, and even more valuable when the notes become available (as they should be soon).

Anton Betten’s topic is computational results in finite geometry. He began with the observation that, just because a problem is hard, that doesn’t mean we shouldn’t solve it as best we can. He quoted me on this – it is a sentiment I agree with! He remarked that he has some difficulty with NSF grants precisely because computer scientists who referee his proposals think that the problems are too hard. After this he plunged into four particular problems. The first was the classification of things called BLT-sets, which give rise to (often new) projective planes and generalised quadrangles, among other things. He has pushed the classification up to q = 67; the numbers of examples do not grow too rapidly. The others are parallelisms in 3-dimensional projective space, optimal linear codes, and cubic surfaces. He showed us his app for showing features of the 27 lines on a cubic surface, running on his iPhone on the document camera – you can find this app, called “Clebsch”, on the app store if you have an iPhone. (Does it sound as if I know what I am talking about??)

Simeon is telling us more about arcs and caps, and applying the polynomial method to these questions. He and Michel Lavrauw have a theorem which is a considerable improvement of Segre's in odd characteristic, and (with a lot of effort) also gives results in higher dimensions. He concluded the series by proving the prime case of the celebrated MDS conjecture. Simeon's notes are also available, here.

In his last talk, Anton showed us some details of what the computer actually does in these classifications. Typically there is a huge bump which takes a lot of the time. In a nice piece of synergy, in Simeon’s last talk, he showed us a method which allows one to tunnel through the bump: a condition on a (rather large) matrix built from a small arc allows one to show that it has no extension to a large arc of some size. New developments should arise from this!

Among the contributed talks, I am only going to mention two. Michael Reynolds is developing a program, along the lines of GeoGebra, for doing metric geometry over finite fields. I am not terribly impressed with the app, which represents the affine plane over a finite field as a square grid: lines don't look much like lines! But I like the philosophy, which is to avoid all use of real arithmetic, transcendental functions, etc. Thus, the squared distance is given by a quadratic form; orthogonality by the associated bilinear form; and instead of the angle θ, he uses cos²θ, which can be found "rationally" using the cosine rule.
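
The idea can be sketched in a few lines of Python over a small field (my own illustration; the field size and the standard form dx² + dy² are assumptions, not details from the talk):

```python
p = 11  # a small prime field GF(11); all arithmetic is mod p

def qdist(P, Q):
    """Squared distance: the quadratic form dx² + dy² over GF(p)."""
    return ((P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2) % p

def cos2(A, B, C):
    """cos² of the angle at A, by the cosine rule: only squared
    distances and field division are needed, no square roots."""
    a2, b2, c2 = qdist(B, C), qdist(A, C), qdist(A, B)
    num = (b2 + c2 - a2) ** 2 % p
    den = (4 * b2 * c2) % p
    return (num * pow(den, p - 2, p)) % p   # division via Fermat's little theorem

# A right angle at the origin gives cos² = 0, just as over the reals.
assert cos2((0, 0), (1, 0), (0, 1)) == 0
```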

Sandra Kingan did her best to persuade finite geometers that they should be interested in matroids. She talked about the representability problem over GF(5). The difficulty of the problem is greatly increased by the fact that the same matroid can have different representations. But what are inequivalent arcs of the same size in a projective plane but different representations of a uniform matroid?

Thursday night brought a very enjoyable dinner in a restaurant in Hove. No pictures, I’m afraid; I forgot my camera! There will be a conference photo eventually …


Conference logo

I’ve just been at a conference called “Kinks 2”, or that might be “knkx 2” (I never saw it written down). But look at the list of topics: dynamics, functional equations, infinite combinatorics and probability. How could I resist that?

Of course, with a conference like this, the first question is: is the conference on the union of these topics, or their intersection? The conference logo strongly suggests the latter. But as usual I was ready to plunge in with something connecting with more than one of the topics. I have thought about functional equations, but only in the power series ring (or the ring of species); but it seemed to me that sum-free sets would be the best fit: there is infinite combinatorics and probability; there is no dynamics, but some of the behaviour looks as if there should be!

The organisers were Adam Ostaszewski and Nick Bingham. So the conference was centred on their common interests. I will say a bit about this in due course. Unfortunately, for personal reasons, I was not able to travel south until Monday morning, and even getting up at 6am I didn’t get to the conference room until a little after 2pm. So I missed the opening talk by Nick Bingham (which no doubt set the scene, and according to Charles Goldie was a sermon) and the talk by Jaroslav Smital; I arrived shortly after the first afternoon lecture, by Vladimir Bogachev, had begun.

I will try to give an overview of what I heard, but will not cover everything at the same level. Of course, a topologist, analyst or probabilist would give a very different account!

Vladimir’s talk was something which tied in to the general theme in a way I didn’t realise until later. He talked about topological vector spaces carrying measures; in such a space, consider the set of vectors along which the measure is differentiable (in one of several senses), one of which is apparently called the Cameron–Martin space. He had something called DC and explained that he originally named it after a Russian mathematician whose name began with S (in transliterated form); he had used the Cyrillic form, never imagining in those days that one day he would be lecturing about it in London!

David Applebaum talked about extending the notion of a Feller–Markov process from compact topological groups to symmetric spaces, and considering five questions that have been studied in the classical case, asking how they extend. He wrote on the board, which didn’t actually slow him down much. David said “I want to prove something”, but Nick, who was chairing, said “No, just tell the story”.

Finally for the day, Christopher Good explained two concepts to us: shifts of finite type (closed subspaces of the space of sequences over a finite alphabet, invariant under the shift map, and defined by finitely many excluded subwords), and shadowing (relevant if you are iterating some function on a computer: typically the computed value of f(x_i) will not be exact, but will be a point x_{i+1} close to f(x_i); we say that this pseudo-orbit is shadowed by the orbit of y if f^i(y) is close to x_i for all i). People have known for a long time that these concepts are closely related, and he and his postdoc have a theorem to this effect. In a compact 0-dimensional space like Cantor space, their theorem says that a map has the shadowing property if and only if it is conjugate to an inverse limit of shifts of finite type. Over arbitrary compact metric spaces they also have a necessary and sufficient condition, but it is more complicated, involving the notion of semi-conjugacy. He explained that the aim of this kind of dynamics is to replace a complicated map on a simple space by a simple map (a shift) on a more complicated space (a shift space of finite type).
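A toy computation (my own illustration, not from the talk) makes the definition of a pseudo-orbit concrete. Take the doubling map T(x) = 2x mod 1, and let rounding to four decimal places stand in for machine error: the rounded sequence is a δ-pseudo-orbit for δ = 10^{-4}, even though sensitive dependence drags it far from the true orbit of the starting point.

```python
# Illustration: a delta-pseudo-orbit of the doubling map T(x) = 2x mod 1.
# Rounding each iterate to 4 decimal places plays the role of the
# computer's inexact evaluation of f.

def T(x):
    return (2 * x) % 1.0

delta = 1e-4
x = 0.1234567
pseudo = [x]
for _ in range(20):
    x = round(T(x), 4)          # computed value: T(x) plus a small error
    pseudo.append(x)

# check that this really is a delta-pseudo-orbit:
# each computed point is within delta of the true image of its predecessor
assert all(abs(pseudo[i + 1] - T(pseudo[i])) <= delta for i in range(20))

# meanwhile the exact orbit of the same starting point drifts away,
# since the doubling map doubles the error at every step
y = 0.1234567
exact = [y]
for _ in range(20):
    y = T(y)
    exact.append(y)
print(abs(exact[-1] - pseudo[-1]))   # typically no longer small
```

Shadowing asks for the converse rescue: a genuine orbit staying uniformly close to the whole pseudo-orbit.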

I was the first speaker next morning. I arrived half an hour early, to find the coffee laid out but nobody else around. Soon Rebecca Lumb came along and logged in to the computer, so I could load my talk. I found that the clicker provided had the feature that the left button advances the slides, so I took it out and put in my own, which works the right way round. The talk went well, and I enjoyed a gasp of surprise from the audience when I displayed my empirical approximation to the density spectrum of a random sum-free set. My last slide, a picture of an apparently non-periodic sum-free set generated by a periodic input, was also admired. It was suggested that a series of such pictures, in striking colours, would be worthy of an art exhibition. The slides are in the usual place.

After a coffee break, Imre Leader spoke about sumsets (so not too far from my talk but not at all the same). As usual, he wrote on the board. The question was: if you colour the natural numbers with finitely many colours, is there an infinite set for which all pairwise sums have the same colour? The answer is yes if you take pairwise sums of distinct elements. (Colour the pair {i,j} with the colour of i+j. By Ramsey’s theorem, there is an infinite set with all pairs of the same colour; this does it!) The first surprise was that if you allow x+x as a sum as well, then it is impossible; he showed us the nice two-step argument for this. (The first step is simple; you can always colour so that x and 2x have different colours: take two colours, and give x the colour red if the exponent of 2 in the factorisation of x is even, blue if it is odd. The second step is a bit more elaborate.) What about larger systems? He showed us why the answer is No in the integers, and No in the rationals, but (surprisingly) Yes in the reals, if you assume the Continuum Hypothesis (and indeed, Yes in a vector space of sufficiently large dimension over the rationals: precisely, dimension beth_ω).
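The first step of the argument is easy to sanity-check by machine. A quick sketch (mine, not Imre's): colour each natural number by the parity of the exponent of 2 in its factorisation; then n and 2n always receive different colours, since doubling increases that exponent by exactly one.

```python
# Colour n by the parity of the exponent of 2 in its factorisation:
# 0 = "red" (even exponent), 1 = "blue" (odd exponent).

def colour(n):
    e = 0
    while n % 2 == 0:
        n //= 2
        e += 1
    return e % 2

# n and 2n always get different colours
assert all(colour(n) != colour(2 * n) for n in range(1, 10001))
```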

First after lunch was Dugald Macpherson, who talked about automorphism groups of countable relational structures, especially closed oligomorphic groups (and more specially, automorphism groups of homogeneous structures over finite relational languages). His talk was in three parts, of which he skipped the second for lack of time. The first part was stuff I knew well, the connection between Ramsey structures and topological dynamics (the Kechris–Pestov–Todorcevic theorem and what happened after). The second part would have been about simplicity. The third concerned the existence of “ample generics” and applications of this to many things including the small index property, uncountable cofinality, and the Bergman property, and then sufficient conditions for ample generics (with names like EPPA and APPA).

Eliza Jablonska talked about “Haar meager sets”, an ideal of subsets of a topological group having many similarities to the Haar null sets in the case of a locally compact group. Some, but not all, of the classic results about measure and category for the real numbers go through to this situation.

Finally, Janusz Brzdek talked about the generalised Steinhaus property. In loose terms, this says that, if A is a subset of a topological group which is “large enough and decent”, then A−A has non-empty interior; and more generally, if A is as above and B is “not small”, then A−B has non-empty interior. This kind of result has applications in the theory of functional equations, for example showing that if you have a solution of f(x+y) = f(x)+f(y) on a large enough and decent subset of G then this solution can be extended to the whole of G. There are also applications to “automatic continuity” (but I don’t know what this is). He started off with some very general results which apply in any magma (set with a binary operation) with identity. You have to redefine A−B in such a case, since inverses don’t exist; it is the set of z for which z+B intersects A. He went on to a discussion of microperiodic functions (which have arbitrarily small periods): on the reals, such a function, if continuous, must be constant, and if Lebesgue measurable, must be constant almost everywhere. There are also approximately microperiodic functions, sub-microperiodic functions, and so on.

Then off to a pleasant conference dinner at the local Thai restaurant, where conversation ranged over doing, teaching and understanding mathematics, along with many other topics.

Wednesday was a beautiful day, so I walked in to the LSE, past St Paul’s and down Fleet Street.

The first speaker, Harry Miller, was talking to us by Skype from Sarajevo, since his health is not good enough to allow him to make the trip. The technology worked reasonably well, though the sound quality was not great and the handwritten slides were packed with information. He gave us half a dozen different conditions saying that a subset of the unit interval is “large” (not including the well-known ones, measure 1 and residual), and a number of implications between them and applications. One nice observation: are there compact sets A and B such that A+B contains an interval but A−B doesn’t? Such sets had been constructed by “transfinite mumbo-jumbo”, but he showed us a simple direct construction: A is the set of numbers in [0,1] whose base-7 “decimal” expansion has only digits 0, 4 and 6, while B is the set using digits 0, 2 and 6.
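A finite-depth experiment (my own, not from the talk) shows why this construction works. Truncate to the first k base-7 digits and scale by 7^k: the digit sums {0,4,6}+{0,2,6} give every even value from 0 to 12, so the sums fill out every even integer in range (hence A+B is the whole interval [0,2]), while the digit differences take only six values, so the differences form an exponentially sparse Cantor-like set with no interval.

```python
# Depth-k approximations of Miller's sets A (base-7 digits 0,4,6) and
# B (base-7 digits 0,2,6), scaled by 7**k so everything is an integer.
from itertools import product

k = 4
A = [sum(a * 7**(k - i - 1) for i, a in enumerate(ds))
     for ds in product([0, 4, 6], repeat=k)]
B = [sum(b * 7**(k - i - 1) for i, b in enumerate(ds))
     for ds in product([0, 2, 6], repeat=k)]

sums = {a + b for a in A for b in B}
diffs = {a - b for a in A for b in B}

# the sums hit every even integer from 0 to 2*(7**k - 1):
# digitwise, {0,4,6}+{0,2,6} = {0,2,4,6,8,10,12}, all even values
assert sums == set(range(0, 2 * 7**k - 1, 2))

# the differences are sparse: only 6**k values spread over a range
# of about 2*7**k integers, a density (6/7)**k that tends to zero
print(len(diffs), 2 * 7**k - 1)
```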

After this, Marta Stefankova talked about hyperspace: this is the space K(X) of all compact subsets of a compact metric space X, with the Hausdorff metric. Given a map f, how do the dynamics of f on X relate to the dynamics of the induced map on K(X)? She introduced four kinds of “Li–Yorke chaos”: a map can be generically, or densely, chaotic or epsilon-chaotic. There are a number of implications between the resulting eight situations, but some non-implications as well; almost everything is known if X is the unit interval but in other cases there are still many mysteries.
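For finite sets of reals the Hausdorff metric on K(X) is easy to write down; here is a minimal sketch (my own, for illustration): the distance is the furthest any point of either set lies from the other set.

```python
# Hausdorff distance between two finite subsets of the real line,
# the simplest instance of the metric on the hyperspace K(X).

def hausdorff(A, B):
    d = lambda x, S: min(abs(x - s) for s in S)   # distance from a point to a set
    return max(max(d(a, B) for a in A),
               max(d(b, A) for b in B))

# example: the point 2.0 is at distance 1.0 from {0, 1}, and that is
# the largest such distance in either direction
print(hausdorff([0.0, 1.0], [0.0, 0.5, 2.0]))   # -> 1.0
```

The same formula (with point-to-set distance taken in the metric of X) defines the metric on all of K(X).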

Adrian Mathias, whom I haven’t seen for donkey’s years, talked about connections between descriptive set theory and dynamics (he said, inspired by a trip to Barcelona, where he found the dynamicists sadly lacking in knowledge about descriptive set theory). The subtitle was “analytic sets under attack”. (The basic idea is that iteration of a map f is continued into the transfinite (I missed the explanation of how this is done), and x attacks y if there is an increasing sequence of ordinals so that the corresponding sequence of iterations of f applied to x has limit y.)

Dona Strauss talked about the space βN, the Stone–Cech compactification of the natural numbers (realised as the set of ultrafilters on the natural numbers). It inherits a semigroup structure from N, and the interplay of algebra and topology is very interesting. Her main result was that many subsets of βN which are very natural from an algebraic point of view are not Borel sets: these include the set of idempotents, the minimal ideal, any principal right ideal, and so on. (The Borel sets form a hierarchy, but any beginners’ text on descriptive set theory tells you not to worry too much, all interesting sets are Borel, and are in fact very low in the hierarchy.)

Peter Allen took time out from looking after his five-week-old baby to come and tell us about graph limits. It was a remarkable talk; I have heard several presentations of that theory, including one by Dan Kral in Durham, but Peter managed to throw new light on it by taking things in a different order. He also talked about some applications, such as a version of the theory for sparse graphs, and some major results already found using this approach: these include

  • a considerable simplification of the Green–Tao theorem that the primes contain arbitrarily long arithmetic progressions; and
  • a solution of an old problem: given two bounded sets A, B in Euclidean space of dimension at least 3, having the same measure, each of which “covers” the other (this means finitely many congruent copies of A cover B and vice versa), there is a finite partition of A into measurable pieces, and a collection of isometries (one for each piece) so that the images of the pieces under the isometries partition B.

Finally it was time for Adam’s talk. His title was “Asymptotic group actions and their limits”, but he had the brilliant subtitle for a talk at the London School of Economics and Political Science: “On quantifier easing”. He explained how the notion of regular variation had led him to various functional equations, the simplest of which is additivity, and then he had discovered that these equations were actually the same up to a twist. There was quite a lot of technical stuff, and my concentration was beginning to fade, so I didn’t catch all the details.

That was the end of the conference, and we all went our separate ways.
