I’ve just been at a conference called “Kinks 2”, or that might be “knkx 2” (I never saw it written down). But look at the list of topics: dynamics, functional equations, infinite combinatorics and probability. How could I resist that?
Of course, with a conference like this, the first question is: is the conference on the union of these topics, or their intersection? The conference logo strongly suggests the latter. But as usual I was ready to plunge in with something connecting with more than one of the topics. I have thought about functional equations, though only in the power series ring (or the ring of species); it seemed to me that sum-free sets would be the best fit: they involve infinite combinatorics and probability; there is no dynamics, but some of the behaviour looks as if there should be!
The organisers were Adam Ostaszewski and Nick Bingham. So the conference was centred on their common interests. I will say a bit about this in due course. Unfortunately, for personal reasons, I was not able to travel south until Monday morning, and even getting up at 6am I didn’t get to the conference room until a little after 2pm. So I missed the opening talk by Nick Bingham (which no doubt set the scene, and according to Charles Goldie was a sermon) and the talk by Jaroslav Smital; I arrived shortly after the first afternoon lecture, by Vladimir Bogachev, had begun.
I will try to give an overview of what I heard, but will not cover everything at the same level. Of course, a topologist, analyst or probabilist would give a very different account!
Vladimir’s talk tied in to the general theme in a way I didn’t realise until later. He talked about topological vector spaces carrying measures; in such a space, consider the set of vectors along which the measure is differentiable (in one of several senses); one of these sets is apparently called the Cameron–Martin space. He had something called D_{C}, and explained that he had originally named it after a Russian mathematician whose name began with S (in transliterated form); he had used the Cyrillic form, never imagining in those days that one day he would be lecturing about it in London!
David Applebaum talked about extending the notion of a Feller–Markov process from compact topological groups to symmetric spaces, and considering five questions that have been studied in the classical case, asking how they extend. He wrote on the board, which didn’t actually slow him down much. David said “I want to prove something”, but Nick, who was chairing, said “No, just tell the story”.
Finally for the day, Christopher Good explained two concepts to us: shifts of finite type (these are closed subspaces of the space of sequences over a finite alphabet, closed under the shift map, and defined by finitely many excluded subwords), and shadowing (this is relevant if you are iterating some function on a computer; typically the computed value of f(x_{i}) will not be exact, but will be a point x_{i+1} close to f(x_{i}); we say that this pseudo-orbit is shadowed by the orbit of y if f^{i}(y) is close to x_{i}). People have known for a long time that these concepts are closely related, and he and his postdoc have a theorem to this effect. In a compact 0-dimensional space like Cantor space, their theorem says that a map has the shadowing property if and only if it is conjugate to an inverse limit of shifts of finite type. Over arbitrary compact metric spaces they also have a necessary and sufficient condition, but it is more complicated, involving the notion of semi-conjugacy. He explained that the aim of this kind of dynamics is to replace a complicated map on a simple space by a simple map (the shift) on a more complicated space (a shift of finite type).
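As a toy illustration (my own, not from the talk): the simplest shift of finite type is the “golden-mean shift”, the binary sequences with the subword 11 excluded, whose admissible words of each length are counted by Fibonacci numbers.

```python
from itertools import product

def admissible_words(n, forbidden="11"):
    """Length-n words of the golden-mean shift: binary words avoiding '11'."""
    return [w for w in map("".join, product("01", repeat=n)) if forbidden not in w]

# The counts obey the Fibonacci recurrence: an admissible word of length n
# ends in 0 (extend any word of length n-1) or in 01 (extend one of length n-2).
counts = [len(admissible_words(n)) for n in range(1, 11)]
assert counts == [2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```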
I was the first speaker next morning. I arrived half an hour early, to find the coffee laid out but nobody else around. Soon Rebecca Lumb came along and logged in to the computer, so I could load my talk. I found that the clicker provided had the feature that the left button advances the slides, so I took it out and put in my own, which works the right way round. The talk went well, and I enjoyed a gasp of surprise from the audience when I displayed my empirical approximation to the density spectrum of a random sum-free set. My last slide, a picture of an apparently non-periodic sum-free set generated by a periodic input, was also admired. It was suggested that a series of such pictures, in striking colours, would be worthy of an art exhibition. The slides are in the usual place.
After a coffee break, Imre Leader spoke about sumsets (so not too far from my talk, but not at all the same). As usual, he wrote on the board. The question was: if you colour the natural numbers with finitely many colours, is there an infinite set for which all pairwise sums have the same colour? The answer is yes if you take pairwise sums of distinct elements. (Colour the pair {i,j} with the colour of i+j. By Ramsey’s theorem, there is an infinite set with all pairs of the same colour; this does it!) The first surprise was that if you allow x+x as a sum as well, then it is impossible; he showed us the nice two-step argument for this. (The first step is simple; you can always colour so that x and 2x have different colours: take two colours, and give x the colour red if the exponent of 2 in the factorisation of x is even, blue if it is odd. The second step is a bit more elaborate.) What about larger systems? He showed us why the answer is No in the integers, and No in the rationals, but (surprisingly) Yes in the reals, if you assume the Continuum Hypothesis (and indeed, Yes in a vector space of sufficiently large dimension over the rationals; precisely, dimension beth_{ω}).
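The first step of the argument is easy to check mechanically. Here is a quick Python sketch of that two-colouring (my code, not Imre’s):

```python
def colour(x):
    """Red (0) or blue (1) according to the parity of the exponent of 2 in x."""
    e = 0
    while x % 2 == 0:
        x //= 2
        e += 1
    return e % 2

# Doubling x increases the exponent of 2 by exactly one, so the colour flips.
assert all(colour(x) != colour(2 * x) for x in range(1, 10**5))
```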
First after lunch was Dugald Macpherson, who talked about automorphism groups of countable relational structures, especially closed oligomorphic groups (and more specially, automorphism groups of homogeneous structures over finite relational languages). His talk was in three parts, of which he skipped the second for lack of time. The first part was stuff I knew well, the connection between Ramsey structures and topological dynamics (the Kechris–Pestov–Todorcevic theorem and what happened after). The second part would have been about simplicity. The third concerned the existence of “ample generics” and applications of this to many things including the small index property, uncountable cofinality, and the Bergman property, and then sufficient conditions for ample generics (with names like EPPA and APPA).
Eliza Jablonska talked about “Haar meager sets”, an ideal of subsets of a topological group having many similarities to the Haar null sets in the case of a locally compact group. Some, but not all, of the classical results about measure and category for the real numbers go through to this situation.
Finally, Janusz Brzdek talked about the generalised Steinhaus property. In loose terms, this says that, if A is a subset of a topological group which is “large enough and decent”, then A−A has non-empty interior, and more generally, if A is as above and B is “not small”, then A−B has non-empty interior. This kind of result has applications in the theory of functional equations, for example showing that if you have a solution of f(x+y) = f(x)+f(y) in a large enough and decent subset of G then this solution can be extended to the whole of G. There are also applications to “automatic continuity” (but I don’t know what this is). He started off with some very general results which apply in any magma (set with a binary operation) with identity. You have to redefine A−B in such a case, since inverses don’t exist; it is the set of z for which z+B intersects A. He went on to a discussion of microperiodic functions (which have arbitrarily small periods): on the reals, such a function, if continuous, must be constant, and if Lebesgue measurable, must be constant almost everywhere. There are also approximately microperiodic functions, sub-microperiodic functions, and so on.
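The redefinition of A−B is easy to make concrete. Here is a small Python sketch of my own (not from the talk): in a group, the magma-style definition agrees with the usual difference set.

```python
def diff(A, B, op, universe):
    """A 'minus' B in a magma: all z in the universe for which z op B meets A."""
    return {z for z in universe if any(op(z, b) in A for b in B)}

# In the group (Z, +) this recovers the usual A - B = {a - b : a in A, b in B}.
A, B = {0, 1, 2, 3}, {5, 6}
result = diff(A, B, op=lambda z, b: z + b, universe=range(-50, 51))
assert result == {a - b for a in A for b in B}
```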
Then off to a pleasant conference dinner at the local Thai restaurant, where conversation ranged over doing, teaching and understanding mathematics, along with many other topics.
Wednesday was a beautiful day, so I walked in to the LSE, past St Pauls and down Fleet Street.
The first speaker, Harry Miller, was talking to us by Skype from Sarajevo, since his health is not good enough to allow him to make the trip. The technology worked reasonably well, though the sound quality was not great and the handwritten slides were packed with information. He gave us half a dozen different conditions saying that a subset of the unit interval is “large” (not including the well-known ones, full measure and residual), and a number of implications between them and applications. One nice observation: are there compact sets A and B such that A+B contains an interval but A−B doesn’t? Such sets had been constructed by “transfinite mumbo-jumbo”, but he showed us a simple direct construction: A is the set of numbers in [0,1] whose base-7 “decimal” expansion has only digits 0, 4 and 6, while B is the set using digits 0, 2 and 6.
After this, Marta Stefankova talked about hyperspace: this is the space K(X) of all compact subsets of a compact metric space X, with the Hausdorff metric. Given a map f, how do the dynamics of f on X relate to the dynamics of the induced map on K(X)? She introduced four kinds of “Li–Yorke chaos”: a map can be generically, or densely, chaotic or epsilon-chaotic. There are a number of implications between the resulting eight situations, but some non-implications as well; almost everything is known if X is the unit interval but in other cases there are still many mysteries.
Adrian Mathias, whom I haven’t seen for donkey’s years, talked about connections between descriptive set theory and dynamics (inspired, he said, by a trip to Barcelona, where he found the dynamicists sadly lacking in knowledge of descriptive set theory). The subtitle was “analytic sets under attack”. (The basic idea is that iteration of a map f is continued into the transfinite (I missed the explanation of how this is done), and x attacks y if there is an increasing sequence of ordinals such that the corresponding sequence of iterations of f applied to x has limit y.)
Dona Strauss talked about the space βN, the Stone–Cech compactification of the natural numbers (realised as the set of ultrafilters on the natural numbers). It inherits a semigroup structure from N, and the interplay of algebra and topology is very interesting. Her main result was that many subsets of βN which are very natural from an algebraic point of view are not Borel sets: these include the set of idempotents, the minimal ideal, any principal right ideal, and so on. (The Borel sets form a hierarchy, but any beginners’ text on descriptive set theory tells you not to worry too much, all interesting sets are Borel, and are in fact very low in the hierarchy.)
Peter Allen took time out from looking after his five-week-old baby to come and tell us about graph limits. It was a remarkable talk; I have heard several presentations of that theory, including one by Dan Kral in Durham, but Peter managed to throw new light on it by taking things in a different order. He also talked about some applications, such as a version of the theory for sparse graphs, and some major results already found using this approach: these include
Finally it was time for Adam’s talk. His title was “Asymptotic group actions and their limits”, but he had the brilliant subtitle for a talk at the London School of Economics and Political Science: “On quantifier easing”. He explained how the notion of regular variation had led him to various functional equations, the simplest of which is additivity, and then he had discovered that these equations were actually the same up to a twist. There was quite a lot of technical stuff, and my concentration was beginning to fade, so I didn’t catch all the details.
That was the end of the conference, and we all went our separate ways.
I discussed related things here last year, but here is a reminder. A triple array is an r×c array whose entries are from a set of v letters, satisfying the following five conditions:
I will refer to these conditions as binary, equireplicate, row-balanced, column-balanced, and having adjusted orthogonality. (For more precise usage see the paper.)
These arrays were first considered by Agrawal (I described this here.) He noted that if a triple array has r+c = v+1, then it gives rise to a symmetric BIBD, whose points are the rows and columns and whose blocks are the letters and an additional symbol *; a letter is incident with the rows not containing that letter and the columns containing it, whereas * is incident with all rows and no columns. He conjectured that the process could be reversed, so that from a symmetric BIBD one could construct a triple array, except for the Fano plane and its complement (where a simple sudoku-like argument shows that it is not possible). The conjecture is still open; no counterexamples have been found, and in general it seems easy to find the appropriate array.
In the paper, we define a sesqui-array as an array satisfying (A0)–(A2) and (A4) but not necessarily (A3).
From a statistical point of view, we are interested in things like “efficiency” and “optimality” of a design, which are answers to the question: how accurately can we estimate, say, differences between treatment effects using this design? Among block designs, it is known that balanced incomplete block designs (if they exist) are optimal for all the standard measures of optimality. Analogously, triple arrays are best if they exist. But if a triple array does not exist, a sesqui-array may be a good substitute, since the adjusted orthogonality is important; apart from that, we want the row and column designs to be as good as possible, and if the row design is balanced then the array with the better column design should win.
In the paper, there are two main constructions. The first is a 7×36 sesqui-array with 42 letters, whose column design is the Sylvester design which I mentioned here: since it is a good design, hopefully the sesqui-array is also efficient. (Just a week or so ago, I got the computer to print out the array from my elegant construction, and found that it was fatally flawed and couldn’t be fixed; so this would have to come out of the paper, which would play havoc with the balance. Fortunately, within a day I had fixed it!)
The other is a construction of sesqui-arrays from biplanes which contains a small surprise.
A biplane is a symmetric BIBD with parameters (V,K,2). In other words, there are V points and V blocks; each block has K points and each point lies in K blocks; any two points lie in exactly two blocks, and any two blocks meet in exactly two points. We have V = 1+K(K−1)/2. (I will use capital letters, as we have used the more common lower-case v and k as array parameters.)
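The parameter count comes from a double count: fix a point p; the K blocks through p each contain K−1 further points, and every other point lies in exactly two of these blocks. A trivial check (my own sketch) against the parameters of the known biplanes, which exist for K = 3, 4, 5, 6, 9, 11 and 13:

```python
def biplane_points(K):
    """V = 1 + K(K-1)/2: fix p; its K blocks carry K(K-1) incidences with
    other points, and each other point accounts for exactly two of them."""
    return 1 + K * (K - 1) // 2

assert [biplane_points(K) for K in (3, 4, 5, 6, 9, 11, 13)] == [4, 7, 11, 16, 37, 56, 79]
```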
There are only finitely many known biplanes, though no finiteness theorem has been proved. The standard reference on them is Section 15.8 of Marshall Hall’s book Combinatorial Theory, second edition. (Actually the data in this section contains several errors; I will list the ones we found at the end.)
There is a convenient description of a biplane that Hall uses. Fix a block B of a biplane. Now any further block meets B in two points, and any two points of B lie in one further block; so we can represent the other blocks by the 2-element subsets of B. If q is a point outside B, then the blocks containing q meet B in two points, which can be thought of as the edges of a graph H(q) of valency 2 (that is, a union of cycles), called a Hussain chain. It is easy to see that the biplane can be reconstructed from the set of Hussain chains on a block.
Now consider the array whose rows and columns are labelled by the points in B and the points not in B respectively. In row p and column q, we put the 2-element subset of B consisting of the two neighbours of p in the Hussain chain H(q). Now observe:
The surprise comes from looking at the other two 16-point biplanes (B_{2} and B_{3} in Hall). In both cases, even though some Hussain chains are hexagons, the sesqui-array turns out to be a triple array; and in both cases, the biplane associated to it by Agrawal’s construction is B_{1}. There is a nice explanation for this, which you can find in the paper. But it does show that the relation between isomorphism of designs and isomorphism of arrays is rather subtle.
For the record, the misprints we found in Hall’s book are:
May, she will stay [but will she?]
June, she’ll change her tune
August, die she must
In the same vein, John Wyndham, in The Chrysalids, wrote the obituary for her defunct parrot Strong’n’Stable:
Soon, they’ll obtain the stability they strive for, in the only way it’s granted: a place among the fossils.
(I quote from memory, that may not be quite right.)
It was interesting being in a constituency where the result was decided by two votes after three counts (giving different results, so I have very little confidence that the last recount was correct). Here is my theory of what happened.
North-east Fife was always going to be close between the SNP and the Liberal Democrats, with perhaps the Lib Dems having a slight edge. Then, shortly before the election, the Tory candidate woke up and started flooding the constituency with campaign literature. This didn’t give any policies; the only thing he said was that only a vote for him would “stop the SNP”. This of course was not true, but it possibly scared a few people thinking of voting Lib Dem to vote Conservative instead. So the result of his campaign was that the SNP candidate won.
This in my view is because of the “first-past-the-post” electoral system (misnamed, since there is no post which candidates have to pass), which encourages tactical voting rather than support of policies.
Impostor Syndrome is described by Wikipedia as
a concept describing high-achieving individuals who are marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a “fraud”.
The article does go on to express some scepticism about whether it is a distinct personality trait.
I assume it came up in a day on gender diversity because of an assumption that women are more likely than men to suffer from it. (The inventors of the term, both women, specifically associate it with women.) But my first reaction, on having it explained to me, was, “Don’t all mathematicians have that?” Some colleagues attempted to assure me that no, some mathematicians are domineering and show no signs of doubt in themselves; but of course, that could be just a front, or overcompensation.
The report of the meeting, written by Isabella Scott, does say, “The issue of confidence and Impostor Syndrome took up the majority of our discussion. While these are primarily associated with women, almost everyone in the room had experienced it.” I am sure that the last statement is correct (assuming that “it” refers to Impostor Syndrome rather than confidence).
I think, on consideration, that it is something mathematicians may be prone to because of the honesty and clear-eyed judgment our subject requires. I am very aware of the amount by which my achievements fall short of what I set out to do, and cannot see why this isn’t clear to everyone else.
When I lectured on Probability to the first-year students at Queen Mary, my nightmare at the start of the course was that a student would ask “But what really is Probability?” I wouldn’t have been able to answer.
There is a distance-transitive graph on 36 vertices with valency 5 known as the Sylvester graph.
The graph was first constructed by Norman Biggs and his co-authors in their work on small distance-transitive graphs. I asked Norman about the name. He referred me to Coxeter’s paper “Twelve points in PG(5,3) with 95040 self-transformations” in Proc. Royal Soc. A 247 (1958), 279–293. Coxeter gives a table of duads and synthemes, from which the graph can be constructed. According to Norman, this might have been taken from H. F. Baker’s monograph A locus with 25920 linear self-transformations, in Cambridge Math. Tracts no. 39, 1946. (I have not seen this book.) Norman said in his email to me,
Nowhere can I find the name Sylvester graph, and I think we can be fairly sure that Sylvester himself never considered such a graph.
However, the graph is most easily constructed using Sylvester’s notions of duads and synthemes. I described these in one of the posts on the symmetric group here; you may wish to refresh your memory by looking at that post before continuing.
The Sylvester graph lives on a 6×6 square grid; the vertices are those of the grid, and the five neighbours of a vertex v are in different rows and columns from v and from one another. Two vertices in the same row or column of the grid are thus at distance greater than 2; in fact they are at distance 3.
The construction is as follows. The rows are indexed by a set A of six points, and the columns by the set B of six synthematic totals on A (in Sylvester’s terminology; in other language, these are the six 1-factorisations of the complete graph on A.) One important property is that two synthematic totals share a unique syntheme (or 1-factor). Now the adjacency rule is that the point (a,b) is joined to the point (a’,b’) if and only if the pair {a,a’} of points is a duad (an edge) in the syntheme shared by the totals b and b’.
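Since everything here is finite, the construction can be carried out by brute force. The following Python sketch is my own check of the definitions: it enumerates the 15 synthemes and 6 synthematic totals, builds the graph by the adjacency rule above, and verifies that the result is 5-regular on 36 vertices with no triangles.

```python
from itertools import combinations

def matchings(pts):
    """All perfect matchings (synthemes) of the complete graph on pts."""
    if not pts:
        yield ()
        return
    a, rest = pts[0], pts[1:]
    for b in rest:
        remaining = [p for p in rest if p != b]
        for m in matchings(remaining):
            yield ((a, b),) + m

synthemes = list(matchings(list(range(6))))        # the 15 synthemes of K6
assert len(synthemes) == 15

# Synthematic totals: partitions of the 15 duads into 5 disjoint synthemes.
totals = [T for T in combinations(synthemes, 5)
          if len({d for s in T for d in s}) == 15]
assert len(totals) == 6

def shared_syntheme(T1, T2):
    common = set(T1) & set(T2)                     # two totals share one syntheme
    assert len(common) == 1
    return next(iter(common))

vertices = [(a, i) for a in range(6) for i in range(6)]

def adjacent(v, w):
    (a, i), (b, j) = v, w
    if a == b or i == j:                           # same row or column: never joined
        return False
    return tuple(sorted((a, b))) in shared_syntheme(totals[i], totals[j])

nbrs = {v: {w for w in vertices if adjacent(v, w)} for v in vertices}
assert all(len(nbrs[v]) == 5 for v in vertices)    # 5-regular on 36 vertices
assert all(not (nbrs[v] & nbrs[w])                 # triangle-free
           for v in vertices for w in nbrs[v])
```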
The Sylvester graph lives inside the Hoffman–Singleton graph, as follows. Add fourteen new vertices: the elements of A and B, and two new symbols α and β. The new edges are:
You can find the Sylvester graph (including a picture and a number of links) here, on Robert Bailey’s webpage DistanceRegular.org.
Consider the block design whose points are the 36 vertices of the Sylvester graph and whose blocks are the 36 closed neighbourhoods (each consisting of a vertex and its five neighbours). Each block contains six points, one in each row of the grid, and one in each column. Two points lie together in at most two blocks: two points in the same row or column lie in no common blocks, two points which are joined in the graph lie in two common blocks (since the graph has no triangles), and two non-adjacent points in different rows and columns lie in one common block, since they have one common neighbour in the graph.
We can add twelve further blocks, the six rows and the six columns of the grid. In the resulting block design, two points are contained in either 1 or 2 common blocks, with six-sevenths of pairs in one common block.
An affine plane of order n is a block design with n^{2} points and n(n+1) blocks, each containing n points, so that each pair of points is contained in a unique block. Examples are known for all prime powers n, and for no other values. In particular, it is known that there is no affine plane of order 6.
However, if we take the design with 36 points and 42 blocks, consisting of the 36 closed neighbourhoods in the Sylvester graph and the six columns of the array, we have the right number of points and blocks and the right cardinalities of blocks; 5/7 of the point pairs are contained in unique blocks, and the other 2/7 in either 0 or 2 blocks (with equally many of each).
This design admits the symmetric group S_{6}, fixing the rows and columns of the grid setwise.
Is this any use? Of course, not if you require an affine plane. However, there are various statistical measures of “optimality” of block designs, which allow comparisons among (real or hypothetical) designs with given numbers of points, blocks, and points per block. See here for explanation of these concepts. In particular, the value of the A-optimality criterion (which measures the average variance of estimators of differences between pairs of treatments – smaller variance = better design = larger value of the A-criterion) for the nonexistent affine plane would be 0.857, whereas the design just constructed realises the value 0.851, and so is scarcely inferior.
The design with 48 blocks is resolvable, that is, the blocks can be partitioned into 8 sets of size 6, each of which partitions the set of points. (Take the rows and the columns as two of these sets. Then choose either rows or columns; it makes no difference. Using rows, the six blocks of the Sylvester design consisting of closed neighbourhoods of points in a given row form a resolution class, and these six classes use all blocks of the design.)
So other designs can be built by deleting some of the resolution classes. Presumably, some of these will be good designs as well.
Time permitting, Rosemary and I will work out some of the details for these designs in the near future.
Here is a brief summary of what it is about. Synchronization comes in several flavours, and the point of the paper is to extend this a little further.
There is a closely related condition, which has no simple interpretation in the context of automata or semigroups (as far as I know). It depends on the following observations:
The second result is due to Delsarte, in his celebrated PhD thesis from Louvain in which he developed important techniques for studying the size of cliques and independent sets in graphs in association schemes, and applied the results to obtain bounds in coding theory and design theory.
Thus we call a transitive permutation group G non-separating if there are subsets A and B of the domain, each of size greater than 1, with the product of their sizes equal to the degree, such that every translate of A meets B in exactly one point; G is separating otherwise. It can be shown that a transitive permutation group is non-separating if and only if there is a non-trivial G-invariant graph in which the product of the clique number and the independence number is equal to the number of vertices.
Similarly, an association scheme is non-separating if some non-trivial graph in the scheme attains Delsarte’s bound (that is, clique number times independence number equals number of vertices), and separating if not.
In either case, separating implies synchronizing.
The main question we address in the paper is:
Question: Let G be the symmetric group of degree n in its action on the set of k-subsets of {1,…,n}. For which values of n and k is it the case that G is synchronizing? separating?
This is equivalent to asking, for which n and k is the Johnson association scheme J(n,k) synchronizing or separating. Note that, replacing k-sets by their complements if necessary we may assume that n ≥ 2k. In the case n = 2k, the group or scheme is imprimitive: “equal or disjoint” is a non-trivial equivalence relation. So we can assume that n ≥ 2k+1.
Now a small detour. Assume that 0 < t < k < n. A Steiner system S(t,k,n) consists of a collection of k-subsets of an n-set with the property that any t-subset is contained in a unique subset in the collection. The blocks of a Steiner system form a clique in the graph (in the Johnson scheme) in which two sets are joined if they intersect in fewer than t points. Now we can always find an independent set in this graph, by considering all the k-sets containing a fixed t-set. (This is said to be of EKR type: the famous Erdős–Ko–Rado theorem says that, if n is sufficiently large in terms of k, this is the largest coclique in this graph.) Now it is not a difficult exercise to show that the product of the number of blocks in a Steiner system and the size of the EKR coclique is equal to the total number of k-sets. So, if a Steiner system exists, then the Johnson scheme is non-separating.
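The exercise reduces to a binomial identity: an S(t,k,n) has C(n,t)/C(k,t) blocks (count pairs of a t-set and the unique block through it), the EKR coclique has size C(n−t,k−t), and the product is C(n,k). A quick sanity check in Python (my sketch, using the parameters of some classical Steiner systems):

```python
from math import comb

def steiner_blocks(t, k, n):
    """Number of blocks of an S(t,k,n): each t-set lies in one block,
    and each block covers C(k,t) of the C(n,t) t-sets."""
    return comb(n, t) // comb(k, t)

def ekr_coclique(t, k, n):
    """Size of the EKR-type coclique: k-sets containing a fixed t-set."""
    return comb(n - t, k - t)

# S(2,3,7) Fano plane, S(3,4,8), S(5,6,12) Mathieu, S(2,4,13) projective plane
for (t, k, n) in [(2, 3, 7), (3, 4, 8), (5, 6, 12), (2, 4, 13)]:
    assert steiner_blocks(t, k, n) * ekr_coclique(t, k, n) == comb(n, k)
```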
Our main conjecture is that the converse is true asymptotically:
Conjecture: There is a function F such that, if n ≥ F(k), then the Johnson scheme J(n,k) is non-separating if and only if there exists a Steiner system S(t,k,n) for some t with 0 < t < k.
This conjecture synchronizes well with the great result of Peter Keevash, discussed here. There are divisibility conditions which are easily shown to be necessary for the existence of a Steiner system; Keevash showed that they are asymptotically sufficient, in the sense that, for large enough n, if the divisibility conditions are satisfied, then the Steiner system exists. So, in our main conjecture, we can replace “there exists a Steiner system” by the apparently much weaker condition “the divisibility conditions for a Steiner system are satisfied”.
What about synchronization? A large set of Steiner systems is a partition of the set of all k-sets into Steiner systems. If a large set of Steiner systems exists, then the Johnson scheme is not synchronizing. Again, our conjecture is that the converse holds asymptotically; for large enough n (in terms of k), synchronization can only fail because of the existence of a large set of Steiner systems.
For small n, other interesting things can happen. Consider, for example, the case n = 7, k = 3. The lines of the Fano plane form a set of seven 3-sets, any two meeting in one point, that is, a clique in the graph where two sets meet if they intersect in one point. We get a colouring of this graph with 7 colours as follows. For each line of the Fano plane, the five sets consisting of this line and the four 3-sets disjoint from it form an independent set, and this gives a partition into seven independent sets of size 5, the required colouring. So J(7,3) is non-synchronizing.
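This colouring is small enough to verify directly. Here is a Python sketch (my own) that builds the seven colour classes from one copy of the Fano plane and checks that they partition the 35 triples into independent sets:

```python
from itertools import combinations

# One copy of the Fano plane on {1,...,7}.
lines = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
triples = [set(t) for t in combinations(range(1, 8), 3)]

# Colour class of a line: the line itself plus the four triples disjoint from it.
classes = [[L] + [t for t in triples if not (t & L)] for L in lines]
assert all(len(c) == 5 for c in classes)

# The seven classes partition all 35 triples...
covered = sorted(sorted(t) for c in classes for t in c)
assert covered == sorted(sorted(t) for t in triples)

# ...and each class is independent: no two members meet in exactly one point.
for c in classes:
    assert all(len(a & b) != 1 for a, b in combinations(c, 2))
```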
My book Notes on Counting: An Introduction to Enumerative Combinatorics should be published by Cambridge University Press in June this year, as part of the Australian Mathematical Society Lecture Series.
If you have read some of my lecture notes on enumerative combinatorics, you may want to know that this book contains more than just the union of all the notes I have written on this subject!
If you are interested, there is a flyer, and preorder form, here.
But no job is ever finished. In the book, at the start of Chapter 4, I prove (as an introductory example for recurrence relations) that the number of compositions of n (ordered sequences of positive integers with sum n) is 2^{n−1} if n ≥ 1. The easy proof involves showing that the number of compositions of n is twice the number of compositions of n−1. When I came to mark my St Andrews exam, I found that two students had produced a beautifully short and elegant proof of this, which may not be new but was certainly new to me!
It goes like this. Imagine you are a typesetter; you set up a row of 2n−1 boxes, each to contain a symbol. In the odd numbered boxes you put a 1; in the even numbered boxes, you choose to put a plus sign or a comma. Your 2^{n−1} choices give all the compositions of n, each once.
For example, the compositions of 3 are
1+1+1 1+1,1 1,1+1 1,1,1
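The bijection is easy to animate in code. This Python sketch (mine, not the students’) builds each composition from the separator choices and checks that the count is 2^{n−1}:

```python
from itertools import product

def compositions(n):
    """All compositions of n via the typesetter bijection: write n ones,
    and in each of the n-1 gaps choose '+' (merge parts) or ',' (split)."""
    for seps in product("+,", repeat=n - 1):
        parts, current = [], 1
        for s in seps:
            if s == "+":
                current += 1        # glue another 1 onto the current part
            else:
                parts.append(current)
                current = 1         # start a new part
        parts.append(current)
        yield tuple(parts)

for n in range(1, 10):
    comps = list(compositions(n))
    assert len(comps) == 2 ** (n - 1)
    assert all(sum(c) == n for c in comps)
    assert len(set(comps)) == len(comps)   # each composition appears exactly once
```

For n = 3 this produces exactly the four compositions displayed above: (3), (2,1), (1,2) and (1,1,1).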
What follows are random musings on what we heard in a very enjoyable two days.
Three speakers mentioned Galton–Watson branching processes in similar contexts: Andrew Treglown, Oliver Riordan and Guillem Perarnau. I got two insights from this, which I had probably not clearly realised before. First, the Erdős–Rényi random graph is the same thing as percolation on the complete graph. (In percolation on a graph, each edge is open independently with probability p, and we ask whether the resulting graph has a “giant component”, and similar questions. Erdős and Rényi simply included each 2-subset of the vertex set as an edge independently with probability p, and asked similar questions.)
Secondly, for percolation in any regular graph of valency d, to explore the component containing v you look at the neighbours of v, their neighbours, and so on until you have seen everything. The number of neighbours of any vertex (apart from the vertex on the path from v by which we reached it) is a binomial random variable Bin(d−1,p), and the whole process is “stochastically dominated” by the Galton–Watson process with this rule for descendents. (In an infinite tree, they would be identical; but the possibility of short cycles means that we will not see more vertices in the percolation case than in the Galton–Watson case.)
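The critical threshold (d−1)p = 1 of the dominating branching process shows up clearly in simulation. This is my own sketch (not from any of the talks), using the offspring law Bin(d−1, p); the thresholds in the final assertion are loose enough to be robust.

```python
import random

def dies_out(d, p, rng, generations=60, cap=500):
    """One Galton-Watson run with Bin(d-1, p) offspring per individual;
    True if the line goes extinct within the horizon (cap stops runaway growth,
    treating such runs as surviving)."""
    pop = 1
    for _ in range(generations):
        if pop == 0:
            return True
        if pop > cap:
            return False
        # total offspring of pop individuals: Bin((d-1)*pop, p)
        pop = sum(rng.random() < p for _ in range((d - 1) * pop))
    return pop == 0

rng = random.Random(0)
trials = 400
sub = sum(dies_out(6, 0.15, rng) for _ in range(trials)) / trials  # mean offspring 0.75
sup = sum(dies_out(6, 0.40, rng) for _ in range(trials)) / trials  # mean offspring 2.0
assert sub > 0.9 and sup < 0.5   # subcritical dies out; supercritical often survives
```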
The three speakers tackled different problems. Perarnau was looking through the critical window in the percolation problem for regular graphs. Treglown was examining resilience of random regular graphs, e.g. how many edges can you delete before losing the typical value of a Ramsey number in the graph. (As he said, there are two processes going on here: first choose a random graph, then delete an arbitrary subset of its edges.) Riordan was considering the Achlioptas process, a variant on Erdős–Rényi where, at each step, you choose two random edges, and keep one, the choice depending on the size of the components they connect.
Kitty Meeks talked about complexity problems for sum-free sets. How do such problems arise? She began with a theorem of Erdős, according to which any set of n natural numbers contains a sum-free subset of size at least n/3. Now one can ask whether there is a sum-free subset of size n/3+k, or to count such subsets. This kind of problem lends itself very well to analysis via parameterised complexity.
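To make the definitions concrete, here is a brute-force Python sketch (exponential time, tiny inputs only; the whole point of the parameterised approach is to do better than this, and none of the code below reflects Meeks's actual methods):

```python
from itertools import combinations

def is_sum_free(s):
    """S is sum-free if no a, b, c in S (a = b allowed) satisfy a + b = c."""
    s = set(s)
    return all(a + b not in s for a in s for b in s)

def max_sum_free_subset(nums):
    """Largest sum-free subset, by exhaustive search from the top down.
    Erdos's theorem guarantees a sum-free subset of size at least n/3."""
    for size in range(len(nums), 0, -1):
        for sub in combinations(nums, size):
            if is_sum_free(sub):
                return set(sub)
    return set()

print(max_sum_free_subset([1, 2, 3, 4, 5, 6]))  # {1, 3, 5}
```

For instance, the odd numbers in any set are sum-free (two odds sum to an even), which is one cheap source of large sum-free subsets.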
Sophie Huczynska talked about the “homomorphic image” order on relational structures such as graphs: in this order, G is smaller than H if there is a homomorphism from G to H mapping the edge set of G onto the edge set of H. (She began by describing a variety of graph orders, such as subgraph order, induced subgraph order, and minor order, and showing how all can be expressed in terms of homomorphisms.) Typical questions are whether a class of graphs or other structures is a partial well-order with respect to the homomorphic image order (that is, whether it contains no infinite antichains), and describing the antichains if possible.
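For tiny graphs one can test this order by sheer brute force over all vertex maps. A Python sketch of the definition exactly as stated above (function and variable names are my own):

```python
from itertools import product

def hom_image_leq(g_n, g_edges, h_n, h_edges):
    """Test whether G <= H in the homomorphic image order: is there a
    homomorphism f from G to H mapping E(G) onto E(H)?
    Brute force over all g_n -> h_n vertex maps; tiny graphs only."""
    target = {frozenset(e) for e in h_edges}
    for f in product(range(h_n), repeat=g_n):
        image = {frozenset((f[u], f[v])) for (u, v) in g_edges}
        # image == target forces every G-edge onto an H-edge (a collapsed
        # edge would give a singleton, absent from target) and every
        # H-edge to be hit
        if image == target:
            return True
    return False

# a path on 3 vertices maps onto a single edge (fold the path in half)
print(hom_image_leq(3, [(0, 1), (1, 2)], 2, [(0, 1)]))  # True
```

The folding example shows why this order behaves quite differently from the subgraph orders: a larger graph can sit below a smaller one.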
A special mention for Ewan Davies, who stepped in at very short notice when Agelos Georgakopoulos was unable to speak. (We found out the reason later; his wife had produced their first son that very day.) Ewan gave a fascinating talk on getting accurate values or estimates for the coefficients of the matching polynomial of a graph (the polynomial in which the coefficient of x^{k} is the number of matchings with k edges in the graph), using ideas from statistical mechanics. (The matching polynomial is the partition function for the monomer-dimer model on the graph.) There was even some differentiation! A very assured and confident talk which raised lots of questions from the audience.
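For small graphs the coefficients Ewan was after can be computed naively; here is a Python sketch (pure brute force over edge subsets, nothing like the statistical-mechanics machinery of the talk):

```python
from itertools import combinations

def matching_coeffs(n, edges):
    """m[k] = number of k-edge matchings of a graph on n vertices,
    so that sum_k m[k] x^k is the matching polynomial as defined
    above (the partition function of the monomer-dimer model)."""
    m = [0] * (n // 2 + 1)
    for k in range(len(m)):
        for sub in combinations(edges, k):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):  # edges pairwise disjoint
                m[k] += 1
    return m

# the 4-cycle: one empty matching, four single edges, two perfect matchings
print(matching_coeffs(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # [1, 4, 2]
```

This is exponential in the number of edges, which is exactly why non-trivial estimation techniques are needed for large graphs.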
I am afraid I didn’t feel up to having drinks afterwards, so just sneaked off home.
I don’t intend to give a blow-by-blow account of all this.
The big theorem says that there is an algorithm which tests isomorphism of two n-vertex graphs in time bounded by exp(O((log n)^{c})) for some constant c. As the name might suggest, the bound is polynomial if the constant c is equal to 1. The best previous result for graph isomorphism was exp(O(n^{c})) for some c < 1, that is, fractional exponential.
The strategy is “divide and conquer”, and as Laci explained, his job was to divide, since the conquering had already been achieved by Luks. The dividing is rather involved, and I won’t attempt to describe it. Any graph isomorphism algorithm has to name an object in the graph; this incurs a multiplicative cost equal to the number of objects which could have been named.
The basic division step is called “split or Johnson”. It identifies, at a cost which can be controlled, either a small subset of the vertex set, or a partition of the vertex set, or a Johnson graph (whose vertices are the k-element subsets of an auxiliary m-element set). But for the induction, we have to have the structure of a graph on the auxiliary set, and further complicated reductions were needed to achieve this.
In the second tutorial, he described carefully the mistake that Harald Helfgott had found in the proof, and how he had fixed it. The arguments seemed fairly familiar to me; there are similar things in my 1971 Oxford DPhil thesis. This was the time when the graph-theoretic methods introduced by Charles Sims and Donald Higman were beginning to make their mark on permutation group theory.
Nothing is ever lost!
Nor is it about important factors, highlighted in the 2011 International Review of UK Mathematics although largely ignored by the research council that commissioned it: research area, size of research group, and size of institution.
But I discovered recently a diversity in the way we do mathematics, which I found surprising and potentially significant.
A few weeks ago, I was at dinner with a visiting colloquium speaker. The conversation turned to whether mathematical thought is done in words, or is “pre-linguistic”.
This is a topic about which Jacques Hadamard, in his book originally called The Psychology of Invention in the Mathematical Field but re-published as The Mathematician’s Mind, had a lot to say. Some linguists and linguistic philosophers, notably Max Müller, insist that language is essential to thought, and that no thoughts can be pre-linguistic. Hadamard, from his own intuition and from the writings of others from Poincaré to Einstein, is convinced that this is not the case, and is bewildered that Müller can hold this view with such vehemence. In a footnote, he says,
I have also seen the following topic (a deplorable subject, as far as I can judge) proposed for an examination—an elementary one, the “baccalauréat”—in philosophy in Paris: “To show that language is as necessary for us to think as it is to communicate our thoughts.”
For me, I know for sure that my best insights (those which are not just routine calculations) are pre-linguistic, and I struggle to put them into words: similarly, if the insight is a conjecture, I struggle to see how the conjecture might be proved. I assumed that most mathematicians would be like me, and would agree with Hadamard rather than Müller.
So it was a bit of a surprise when, of the five research mathematicians at the table, we were split 3 to 2 in Hadamard’s favour.
This is of course an anecdote, and not survey data. But we noticed a curious thing. The two who said they did mathematics in words had something probably significant in common: their cradle tongue (in both cases, a Slavic language) was not the language in which they do mathematics (in both cases, English); and both of them had learnt English at a comparatively advanced age. The other three of us were all native English speakers.
Not sure what to make of this. But I am glad that it drove me back to Hadamard’s book. I had completely forgotten that, at a certain point, he admits to his failure to be able to think creatively about group theory!